Abstract: The present disclosure provides systems and methods for event correlation in the European Telecommunications Standards Institute (ETSI) reference architecture, wherein the Application layer, Virtual Network Function (VNF) layer and Virtual Infrastructure layer of NFV systems are based on different technologies and thus have different monitoring systems. A unique identifier in the form of a Virtual Machine (VM) identifier is generated which, along with time stamps of events, serves as a common attribute across the ETSI Management and Orchestration (MANO) ecosystem and hence facilitates correlation of events. The unique identifier is a 13-byte concatenated value consisting of host name, line card number, VNF identifier and VM number.
Claims:
1. A method comprising:
generating a unique identifier (VM_ID) for each virtual machine (VM) in a Network Function Virtualization (NFV) system of a telecom cloud;
receiving a first set of events from application Element Management System (EMS) of the NFV system;
receiving a second set of events from Virtualized Network Function Infrastructure (VNFI) Manager of the NFV system;
receiving a third set of events from Virtual Network Function (VNF) Manager of the NFV system; and
correlating at least two of the first set of events, the second set of events and the third set of events based on the unique identifier and time stamp associated therewith.
2. The method of claim 1, wherein the unique identifier comprises 13 bytes of data corresponding to placement of the virtual machine in the NFV system.
3. The method of claim 2, wherein the unique identifier comprises first four bytes corresponding to geographical location of physical host specifying geographical datacenter identification, subsequent four bytes corresponding to physical host name in the datacenter, ninth byte corresponding to decimal specifying line card, tenth byte corresponding to decimal specifying VNF service identification, eleventh byte corresponding to decimal specifying VM identification, twelfth byte corresponding to decimal specifying VNF service vendor and thirteenth byte reserved for communication service provider.
4. The method of claim 1, wherein generating a unique identifier for each VM is performed at the time of VM instantiation.
5. The method of claim 1, wherein the unique identifier is distributed to each of the VNFI manager, the VNF manager, the EMS and a Software Defined Networking (SDN) controller of the NFV system.
6. The method of claim 1, wherein correlating at least two of the first set of events, the second set of events and the third set of events is followed by instantiating a new VM for at least one of fault management and performance management.
7. The method of claim 1, wherein correlating at least two of the first set of events, the second set of events and the third set of events is followed by computing integrated Key Performance Indicators (KPIs) related to hardware faults, Virtual Network Function (VNF) faults and application performance.
8. A system comprising:
one or more processors; and
one or more data storage devices operatively coupled to the one or more processors and configured to store instructions configured for execution by the one or more processors to:
generate a unique identifier for each virtual machine (VM) in a Network Function Virtualization (NFV) system;
receive a first set of events from application Element Management System (EMS) of the NFV system;
receive a second set of events from Virtualized Network Function Infrastructure (VNFI) Manager of the NFV system;
receive a third set of events from Virtual Network Function (VNF) Manager of the NFV system; and
correlate at least two of the first set of events, the second set of events and the third set of events based on the unique identifier and time stamp associated therewith.
9. The system of claim 8, wherein the unique identifier comprises 13 bytes of data corresponding to placement of the virtual machine in the NFV system.
10. The system of claim 9, wherein the unique identifier comprises first four bytes corresponding to geographical location of physical host specifying geographical datacenter identification, subsequent four bytes corresponding to physical host name in the datacenter, ninth byte corresponding to decimal specifying line card, tenth byte corresponding to decimal specifying VNF service identification, eleventh byte corresponding to decimal specifying VM identification, twelfth byte corresponding to decimal specifying VNF service vendor and thirteenth byte reserved for communication service provider.
11. The system of claim 8, wherein the one or more processors are further configured to distribute the unique identifier to each of the VNFI manager, the VNF manager, the EMS and a Software Defined Networking (SDN) controller of the NFV system.
12. The system of claim 8, wherein the one or more processors are further configured to generate a unique identifier for each VM at the time of VM instantiation.
13. The system of claim 8, wherein the one or more processors are further configured to instantiate a new VM for at least one of fault management and performance management.
14. The system of claim 8, wherein the one or more processors are further configured to compute integrated Key Performance Indicators (KPIs) related to hardware faults, Virtual Network Function (VNF) faults and application performance.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
EVENT CORRELATION IN NETWORK FUNCTION VIRTUALIZATION SYSTEMS
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the embodiments and the manner in which they are to be performed.
TECHNICAL FIELD
[0001] The embodiments herein generally relate to telecom cloud, particularly event correlation in Network Function Virtualization (NFV) systems of the telecom cloud.
BACKGROUND
[0002] To keep pace with growing data traffic, Communication Service Providers (CSPs) require constant network capacity enhancement. The traditional way of capacity enhancement does not scale with exponential data traffic growth. Capacity enhancement strategies are generally voice-based, where linear growth of service is supported by increasing the number of network equipment. Data traffic consists of various applications, and each application has different requirements; e.g. VoLTE service is carried by small data packets, while streaming video traffic is based on large data packets. To cope with this kind of bursty data growth, CSPs require an elastic model of growth and service development. Apart from the capacity constraint, current network equipment is proprietary by nature, wherein its hardware and software are tightly coupled. This kind of tight coupling creates vendor lock-in; e.g. CSPs have to wait for a vendor's software release for new service feature deployment. Technologies such as Network Function Virtualization (NFV) and Software Defined Networking (SDN) provide de-coupling of hardware and software and empower CSPs to take control of software associated with their services. NFV describes how to virtualize telecom network functions, while SDN describes how to separate control logic and data forwarding from proprietary equipment. In accordance with the European Telecommunications Standards Institute (ETSI), the telecom NFV environment comprises an Element Management System (EMS), a Virtualized Network Function Infrastructure (VNFI) Manager and a VNF manager. The ETSI Management and Orchestration (MANO) elements, viz., the Application layer (EMS), the Virtual Network Function (VNF) layer and the NFV Infrastructure layer, are based on different technologies and thus have different monitoring systems. Telecom networks require robust performance and fault management capabilities.
To provide holistic performance and fault management in the NFV network, event correlation between Element Management System (EMS) managing the application layer, Virtualized Network Function Infrastructure (VNFI) Manager managing the NFV Infrastructure layer and VNF manager managing the VNF layer is essential.
SUMMARY
[0003] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[0004] Technologies such as NFV and SDN realize the concept of telecom cloud by decoupling hardware from software. The VNF Manager is responsible for application virtualization layer events; the VNFI Manager is responsible for virtual infrastructure layer events; and the EMS monitors application performance. To provide holistic performance and fault management in NFV networks, event correlation among the EMS, the VNFI manager and the VNF manager is essential. The present disclosure enables such correlation based on common correlation attributes such as a unique identifier for each virtual machine (VM) in an NFV system and event time stamps. The present disclosure also provides a format for the unique identifier which is globally unique and describes placement of the VM in a host and can be used as a common attribute among various data models of NFV Management and Orchestration (MANO) elements defined by the European Telecommunications Standards Institute (ETSI).
[0005] In an aspect, there is provided a method comprising generating a unique identifier (VM_ID) for each virtual machine (VM) in a Network Function Virtualization (NFV) system of a telecom cloud; receiving a first set of events from application Element Management System (EMS) of the NFV system; receiving a second set of events from Virtualized Network Function Infrastructure (VNFI) Manager of the NFV system; receiving a third set of events from Virtual Network Function (VNF) Manager of the NFV system; and correlating at least two of the first set of events, the second set of events and the third set of events based on the unique identifier and time stamp associated therewith.
[0006] In another aspect, there is provided a system comprising: one or more processors; and one or more data storage devices operatively coupled to the one or more processors and configured to store instructions configured for execution by the one or more processors to: generate a unique identifier for each virtual machine (VM) in a Network Function Virtualization (NFV) system; receive a first set of events from application Element Management System (EMS) of the NFV system; receive a second set of events from Virtualized Network Function Infrastructure (VNFI) Manager of the NFV system; receive a third set of events from Virtual Network Function (VNF) Manager of the NFV system; and correlate at least two of the first set of events, the second set of events and the third set of events based on the unique identifier and time stamp associated therewith.
[0007] In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: generate a unique identifier for each virtual machine (VM) in a Network Function Virtualization (NFV) system; receive a first set of events from application Element Management System (EMS) of the NFV system; receive a second set of events from Virtualized Network Function Infrastructure (VNFI) Manager of the NFV system; receive a third set of events from Virtual Network Function (VNF) Manager of the NFV system; and correlate at least two of the first set of events, the second set of events and the third set of events based on the unique identifier and time stamp associated therewith.
[0008] In an embodiment of the present disclosure, the unique identifier comprises 13 bytes of data corresponding to placement of the virtual machine in the NFV system.
[0009] In an embodiment of the present disclosure, the unique identifier comprises first four bytes corresponding to geographical location of physical host specifying geographical datacenter identification, subsequent four bytes corresponding to physical host name in the datacenter, ninth byte corresponding to decimal specifying line card, tenth byte corresponding to decimal specifying VNF service identification, eleventh byte corresponding to decimal specifying VM identification, twelfth byte corresponding to decimal specifying VNF service vendor and thirteenth byte reserved for communication service provider.
[0010] In an embodiment of the present disclosure, generating a unique identifier for each VM is performed at the time of VM instantiation.
[0011] In an embodiment of the present disclosure, the unique identifier is distributed to each of the VNFI manager, the VNF manager, the EMS and a Software Defined Networking (SDN) controller of the NFV system.
[0012] In an embodiment of the present disclosure, correlating the first set of events, the second set of events and the third set of events is followed by instantiating a new VM for at least one of fault management and performance management.
[0013] In an embodiment of the present disclosure, correlating the first set of events, the second set of events and the third set of events is followed by computing integrated Key Performance Indicators (KPIs) related to hardware faults, Virtual Network Function (VNF) faults and application performance.
[0014] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the embodiments of the present disclosure, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0016] FIG.1 illustrates the European Telecommunications Standards Institute (ETSI) Network Function Virtualization (NFV) system as known in the art;
[0017] FIG.2A illustrates an exemplary block diagram of a traditional telecom setup versus an NFV system;
[0018] FIG.2B illustrates an exemplary block diagram of an NFV system and correlation points existing therein for event correlation;
[0019] FIG.3 illustrates an exemplary block diagram of a system for event correlation in NFV systems in accordance with an embodiment of the present disclosure;
[0020] FIG.4 illustrates an exemplary flow diagram for event correlation in accordance with an embodiment of the present disclosure;
[0021] FIG.5 illustrates an exemplary flow diagram for distribution of unique identifier (VM_ID) for each virtual machine (VM) in a Network Function Virtualization (NFV) system in accordance with an embodiment of the present disclosure; and
[0022] FIG.6 illustrates an exemplary format of unique identifier (VM_ID) for each virtual machine (VM) in a Network Function Virtualization (NFV) system in accordance with an embodiment of the present disclosure.
[0023] It should be appreciated by those skilled in the art that any block diagram herein represents conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in a computer readable medium and so executed by a computing device or processor, whether or not such computing device or processor is explicitly shown.
DETAILED DESCRIPTION
[0024] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0025] The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
[0026] It must also be noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the preferred systems and methods are now described.
[0027] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The disclosed embodiments are merely exemplary of the disclosure, which may be embodied in various forms.
[0028] Before setting forth the detailed explanation, it is noted that all of the discussion below, regardless of the particular implementation being described, is exemplary in nature, rather than limiting.
[0029] Network Function Virtualization (NFV) is an architecture for telecom services defined by the European Telecommunications Standards Institute (ETSI). NFV uses a generic hardware platform and software adapted for that platform. Thus, NFV creates a network that is much more flexible and dynamic than a legacy communication network. In NFV-based networks, a Virtual Network Function (VNF) decouples the software implementation of the network function from the infrastructure resources it runs on by virtualization. VNFs can be executed on almost any generic hardware processing facility. Therefore, VNFs may be installed, removed and moved between hardware facilities more easily, at lower cost and thus more frequently. Decoupling of the software implementation from the infrastructure resources, however, poses a challenge with regard to event correlation, since several monitoring systems are involved. Multiple sources of Fault Management (FM) and Performance Management (PM) counters are required to be correlated for holistic performance and fault management. For instance, certain hardware faults, captured by the VNFI manager, may impact VNF performance. As described in ETSI GS NFV-SWA 001, faults from the underlying NFVI that can have an impact on a VNF include a fault in virtualized resources that might affect a VNF's proper functioning (e.g. whole NFVI down), a fault in the VNF's redundancy scheme (e.g. backup virtualized resources unavailable), a fault in the Vn-Nf/SWA-5 interface (e.g. fault in the virtualization layer/hypervisor), a fault in the virtualization container (e.g. VM malfunctioning), and faults concerning virtualization container connectivity. These infrastructure related faults are required to be correlated with the correct VNF alerts for fault management and root cause analysis (RCA) of the NFV network, e.g. call drops per VM due to failure of a particular CPU, utilization ratio of virtual CPU to physical CPU, etc.
The present disclosure addresses this challenge by providing common attributes and a format for the same to enable correlating events for enhanced performance and fault management.
[0030] Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and method.
[0031] FIG.1 illustrates the European Telecommunications Standards Institute (ETSI) Network Function Virtualization (NFV) system 100 as known in the art. The European Telecommunications Standards Institute Network Function Virtualization Industry Specification Group (ETSI NFV ISG) has defined an NFV architectural framework for Communication Service Provider (CSP) environments. The NFV system 100 has Management and Orchestration (MANO) components referred to as NFV Manager and Orchestrator 102 to provide Virtual Machine (VM) life cycle management capabilities. The MANO components include Virtual Network Function (VNF) manager 102-1, Virtualized Network Function Infrastructure (VNFI) Manager 102-2 and NFV Orchestrator (NFVO) 102-3. The NFV orchestrator 102-3 is similar to a cloud orchestrator and is configured to process user service requests to instantiate a virtual machine (VM) by interacting with the VNFI manager 102-2. OSS (Operations Support Systems)/BSS (Business Support Systems) 104 include systems/applications that a service provider uses to operate its business by coordinating with the MANO components 102. Virtual function 106 includes an Element Management System (EMS) 106-1 and Virtual Network Function (VNF) 106-2. The VNF 106-2 is a collection of VMs or a single VM to realize application functionality. For instance, a router can be a VNF, and the router serving as the VNF can include multiple VMs performing different functions such as packet filtering, packet receiving, and the like. The EMS 106-1 is responsible for the FCAPS (Fault, Configuration, Accounting, Performance and Security management) of the VNF 106-2.
Accordingly, responsibilities of the EMS 106-1 include configuration for the network functions provided by the VNF 106-2, fault management for the network functions provided by the VNF 106-2, accounting for the usage of functions of the VNF 106-2, collecting performance measurement results for the functions provided by the VNF 106-2 and security management for the functions of the VNF 106-2. The EMS 106-1 may be aware of virtualization and collaborate with the VNF Manager 102-1 to perform those functions that require exchanges of information regarding resources of the NFVI (NFV Infrastructure) 108 associated with the VNF 106-2. The NFVI 108 includes virtualization components 108-1 such as Hypervisor, Open vSwitch, etc. and hardware resources 108-2. The NFVI 108 is managed by the VNFI manager 102-2. The VNF manager 102-1 is configured to perform lifecycle management (instantiation, update, query, scaling, termination) of the VNF 106-2, instance-related collection of performance measurement results and faults/events information of the NFVI 108, and correlation to instance-related events/faults of the VNF 106-2. Functions of the VNFI Manager 102-2 include resource management and allocation of resources of the NFVI 108, based on requests from the NFVO 102-3; managing inventory of association of virtual resources to physical resources; supporting the management of VNF forwarding graphs (create, query, update, delete), e.g. by creating and maintaining virtual networks; collection of performance and fault information (e.g. via notifications) of hardware resources (compute, storage, and networking), software resources (e.g. hypervisors), and virtualized resources (e.g. VMs); and forwarding of performance measurement results and faults/events information relative to virtualized resources.
[0032] FIG.2A illustrates an exemplary block diagram of a traditional telecom setup 200A versus an NFV system 200B. In a traditional telecom network, the OSS/BSS platform captures data from the downstream EMS directly. Being tightly coupled with hardware, the EMS has an end-to-end view of the underlying application and hardware. As seen in the traditional telecom setup 200A, Vendor A EMS 204A and Vendor B EMS 204B coordinate with the communication service provider's (CSP) telecom OSS/BSS and the integrated software and hardware infrastructure 202A and 202B respectively for management and orchestration of the setup 200A. As seen from the NFV system 200B, the VNF 106 and the NFVI 108 are managed by the VNF manager 102-1 and the VNFI manager 102-2 respectively. Vendor A EMS 110-1 and Vendor B EMS 110-2 manage the FCAPS functionalities of the VNF 106 in coordination with the VNF manager 102-1 and the VNFI manager 102-2. Correlation between the CSP's telecom OSS/BSS 104-1 and the IT OSS/BSS is a challenge, since there are no common attributes between the events received from the VNF manager 102-1, the VNFI manager 102-2, the Vendor A EMS 110-1 and the Vendor B EMS 110-2 on account of the different management platforms for the virtualization and application layers. In an NFV environment, the Application layer, VNF layer and Virtual Infrastructure layer are based on different technologies and thus have different monitoring systems. FIG.2B illustrates an exemplary block diagram of an NFV system and correlation points existing therein for event correlation. As seen, there are three streams of events, viz., application related events, VNF related events and virtual infrastructure related events, that need to be correlated to derive a meaningful interpretation of the large number of events occurring in a telecom cloud. Event correlation is imperative in a telecommunication network for fault management, performance management and root cause analysis (RCA).
[0033] FIG.3 illustrates an exemplary block diagram of a system 300 for event correlation in Network Function Virtualization (NFV) systems, illustrating exemplary functional modules in accordance with an embodiment of the present disclosure. In an embodiment, the system 300 includes one or more processors 304, communication interface device(s) or input/output (I/O) interface(s) 306, and one or more data storage devices or memory 302 operatively coupled to the one or more processors 304. The one or more processors 304 that are hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 300 can be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, cloud, and the like.
[0034] The I/O interface device(s) 306 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[0035] The memory 302 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, one or more modules 310 through 316 of the system 300 can be stored in the memory 302.
[0036] FIG.4 illustrates an exemplary flow diagram for event correlation in accordance with an embodiment of the present disclosure. The flow diagram illustrated in FIG.4 of the present disclosure will now be explained in detail with reference to the components of the system 300 as depicted in FIG.3 in accordance with an embodiment of the present disclosure. Event Correlation refers to combining datasets pertaining to events, using common attributes, to form a new resultant record. These common attributes are hereinafter referred to as ‘correlation keys’. The present disclosure provides the following two correlation keys for correlation of events in NFV systems:
1) Event Time stamp: time of event occurrence; and
2) Unique identifier (VM_ID): virtual machine ID, distributed in the VNFD (VNF descriptor).
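As an illustrative sketch only (not the claimed implementation; record field names such as `vm_id` and `release_code` are hypothetical), correlation on these two keys can be expressed as a join of event records on VM_ID with a time-stamp proximity window:

```python
from collections import defaultdict

def correlate(first_set, second_set, window_s=5):
    """Join two event streams on VM_ID, pairing events whose time
    stamps fall within `window_s` seconds of each other, and merge
    each matched pair into a new resultant record."""
    by_vm = defaultdict(list)
    for ev in second_set:
        by_vm[ev["vm_id"]].append(ev)
    correlated = []
    for ev in first_set:
        for other in by_vm.get(ev["vm_id"], []):
            if abs(ev["timestamp"] - other["timestamp"]) <= window_s:
                correlated.append({**other, **ev})  # merged record
    return correlated

# Hypothetical EMS (application) and VNFI (infrastructure) events:
ems = [{"vm_id": "ABCD", "timestamp": 100, "release_code": "Drop"}]
vnfi = [{"vm_id": "ABCD", "timestamp": 102,
         "alert": "Physical-CPU scheduler error"}]
print(correlate(ems, vnfi))
```

The same join generalizes to any two of the three event sets, since all of them carry the VM_ID and time stamp as common attributes.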
[0037] To utilize VM_ID as a correlation key, VM_ID is required to be unique in the entire NFV deployment. In an embodiment, a policy of having a unique VM_ID may be enforced by CSPs for the entire NFV deployment including orchestration, VNF, EMS, downstream SDN controller, VNFI and all other involved tools and systems. In accordance with the present disclosure, VM_ID is described in VNFD templates and distributed at the time of VNF instantiation.
[0038] In the exemplary flow diagram of FIG.4, fault management correlation is depicted. A correlation engine 310 is configured to capture a first set of events, referenced generally as step 1, from the EMS 106-1 pertaining to an application. In an exemplary embodiment, the first set of events can include VM_ID: ABCD, timestamp: XX:XX:XX, application ID: vMME and release code: Drop. The correlation engine 310 is further configured to capture a second set of events from the Virtualized Network Function Infrastructure (VNFI) Manager 102-2 and a third set of events from the Virtual Network Function (VNF) Manager 102-1. In the illustrated embodiment, the VNFI manager 102-2 forwards virtualization layer and hardware related alerts to the correlation engine 310, referenced generally as step 2. In an exemplary embodiment, the second set of events can include VM_ID: ABCD, timestamp: XX:XX:XX and Physical-CPU scheduler error. At step 3, the correlation engine 310 correlates the EMS event and the VNFI event to interpret that VM_ID: ABCD is observing a physical CPU scheduler fault, which is resulting in increased call drops. At step 4, the correlation engine 310 coordinates with the Policy manager 314 for resolution. At step 5, the Policy manager 314 forwards a rule to migrate the VM to a new location for VM_ID: ABCD. In an embodiment, the migrate-VM instruction may be based on a call drop rate above a pre-defined threshold. For instance, in the illustrated embodiment, the Policy manager 314 includes a rule to migrate the VM if the call drop rate for VM_ID: ABCD > 0.01%. At step 6, the correlation engine 310 co-ordinates with an inventory manager 312 to get hardware details for a new VM. In an embodiment, the inventory manager 312 can include EMS data 312-1, VNF data 312-2, VNFI data 312-3 and VM_ID generator 312-4. The hardware details can include new VM location (node, line card and VM number), RAM, CPU and memory details as described in VM affinity rules in the VNFD.
The new VM_ID will be based on the new location. At step 7, the correlation engine 310 forwards details, including an instruction to migrate the VM from VM_ID: ABCD to VM_ID: XYWZ with new node details and RAM/CPU/memory details, to the VNFI manager 102-2. At step 8, the VNFI manager 102-2 instructs a hypervisor to spawn the new VM with VM_ID XYWZ. Thus, by correlating events between the EMS 106-1 and the VNFI manager 102-2, using VM_ID and timestamp as correlation keys for root cause analysis, the correlation engine 310 is able to conclude that the call drop rate is higher for VM_ID: ABCD due to the physical CPU scheduler error. As a result, for fault management, a new VM is instantiated with a new VM_ID. Although the illustrated embodiment shows a correlation between events captured from the EMS 106-1 and the VNFI manager 102-2, it may be noted that correlation between at least two of the three captured sets of events may be possible.
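The policy-driven resolution of steps 4 and 5 can be sketched as follows; the threshold value and field names are illustrative assumptions drawn from the example rule (call drop rate for a VM above 0.01%), not a definitive implementation:

```python
CALL_DROP_THRESHOLD = 0.0001  # 0.01%, per the example policy rule

def resolve(vm_id, call_drop_ratio, correlated_fault):
    """Return a migration instruction when a correlated infrastructure
    fault coincides with a call-drop ratio above the policy threshold;
    otherwise take no action."""
    if correlated_fault and call_drop_ratio > CALL_DROP_THRESHOLD:
        return {"action": "migrate", "vm_id": vm_id}
    return {"action": "none", "vm_id": vm_id}

print(resolve("ABCD", 0.0005, True))
# {'action': 'migrate', 'vm_id': 'ABCD'}
```

The returned instruction would then be handed to the VNFI manager along with the hardware details obtained from the inventory manager, as in steps 6 through 8.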
[0039] In an embodiment, the correlated events are further used for computing various integrated Key Performance Indicators (KPIs) related to hardware faults, Virtual Network Function (VNF) faults and application performance, e.g. call drops per physical CPU, VM performance degradation due to hardware scheduler failures, etc.
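For instance, an integrated KPI such as call drops per physical CPU might be derived from the correlated records roughly as follows (a sketch only, under the assumption that each correlated record carries a release code and a physical CPU field):

```python
from collections import Counter

def call_drops_per_cpu(correlated_events):
    """Count drop events grouped by the physical CPU reported in the
    correlated (EMS + VNFI) record."""
    counts = Counter()
    for ev in correlated_events:
        if ev.get("release_code") == "Drop":
            counts[ev.get("physical_cpu")] += 1
    return dict(counts)

# Hypothetical correlated records:
events = [
    {"release_code": "Drop", "physical_cpu": "cpu-3"},
    {"release_code": "Drop", "physical_cpu": "cpu-3"},
    {"release_code": "Normal", "physical_cpu": "cpu-1"},
]
print(call_drops_per_cpu(events))  # {'cpu-3': 2}
```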
[0040] FIG.5 illustrates an exemplary flow diagram for distribution of the unique identifier (VM_ID) for each virtual machine (VM) in a Network Function Virtualization (NFV) system in accordance with an embodiment of the present disclosure. In an embodiment, the VM_ID is generated by the VM_ID generator 312-4 of the inventory manager 312 at the time of VM instantiation. The NFV orchestrator 102-3 obtains the generated VM_ID and distributes the VM_ID to each of the VNFI manager 102-2, the VNF manager 102-1, the EMS 106-1 and the Software Defined Networking (SDN) controller 112 of the NFV system, thereby enabling the NFV elements to use the unique identifier VM_ID during the entire VM lifecycle management and facilitating correlation of events for fault and performance management in the NFV system.
[0041] FIG.6 illustrates an exemplary format of the unique identifier (VM_ID) for each virtual machine (VM) in a Network Function Virtualization (NFV) system in accordance with an embodiment of the present disclosure. The present disclosure provides generation of the unique identifier (VM_ID) having a specific format such that the VM_ID is unique, globally recognizable and node specific to facilitate correlation of events in NFV systems. Each data center has a number of hosts. Each host has a number of line cards. Each line card hosts a number of VNFs. Each VNF has a number of VMs. In accordance with the present disclosure, the generated VM_ID format includes data pertaining to the VM's placement in the telecom cloud. The VM_ID of the present disclosure is a 13 octet/byte concatenated value consisting of host name, line card number, VNF identification and VM number. In an embodiment, the unique identifier VM_ID comprises first four bytes corresponding to geographical location of the physical host specifying geographical datacenter identification, subsequent four bytes corresponding to physical host name in the datacenter, ninth byte corresponding to a decimal specifying line card, tenth byte corresponding to a decimal specifying VNF service identification, eleventh byte corresponding to a decimal specifying VM identification, twelfth byte corresponding to a decimal specifying VNF service vendor and thirteenth byte reserved for the communication service provider to indicate further details like VM flavors (such as VM CPU, RAM and storage), code version, etc. In the exemplary format of VM_ID illustrated in FIG.7, VM_ID NYM1101231340 reads as Load Balancer (LB) VM 13 is a part of Nokia vMME, hosted at line card number 12, at physical host number 110, located at the New York Manhattan data center. The VM_ID thus serves as a unique identifier which, along with the time stamp of an event, facilitates event correlation in NFV systems that can further be utilized for fault/performance management.
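The byte layout above can be sketched as a pair of encode/parse helpers. The field values in the usage example are hypothetical; they follow the fixed widths of the claimed layout (4 + 4 + 1 + 1 + 1 + 1 + 1 = 13 bytes) rather than reproducing the exemplary VM_ID of FIG.7, whose human-readable field boundaries differ.

```python
def build_vm_id(geo, host, line_card, vnf_id, vm_num, vendor, reserved="0"):
    """Concatenate placement fields into a 13-character VM_ID:
    4 chars geographical datacenter ID, 4 chars physical host name,
    then one char each for line card, VNF service ID, VM ID, VNF
    service vendor and the byte reserved for the service provider."""
    assert len(geo) == 4 and len(host) == 4
    for field in (line_card, vnf_id, vm_num, vendor, reserved):
        assert len(field) == 1
    return geo + host + line_card + vnf_id + vm_num + vendor + reserved

def parse_vm_id(vm_id):
    """Split a 13-character VM_ID back into its placement fields."""
    assert len(vm_id) == 13
    return {
        "datacenter": vm_id[0:4],
        "host": vm_id[4:8],
        "line_card": vm_id[8],
        "vnf_service": vm_id[9],
        "vm": vm_id[10],
        "vendor": vm_id[11],
        "reserved": vm_id[12],
    }

# Hypothetical placement values for illustration only.
vm_id = build_vm_id("NYM1", "H110", "2", "3", "5", "4")
print(vm_id, parse_vm_id(vm_id))
```

Because the fields occupy fixed offsets, any element that receives an event can recover the VM's full placement (data center, host, line card, VNF, VM) from the identifier alone, which is what makes the VM_ID usable as a correlation key across otherwise independent monitoring systems.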
[0042] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments of the present disclosure. The scope of the subject matter embodiments defined here may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language.
[0043] It is, however, to be understood that the scope of the protection extends to such a program and, in addition, to a computer-readable means having a message therein; such computer-readable storage means contains program-code means for implementation of one or more steps of the method when the program runs on a server, a mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed, including e.g. any kind of computer such as a server or a personal computer, or the like, or any combination thereof. The device may also include hardware means such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments of the present disclosure may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[0044] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules comprising the system of the present disclosure and described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The various modules described herein may be implemented as software and/or hardware modules and may be stored in any type of non-transitory computer readable medium or other storage device. Some non-limiting examples of non-transitory computer-readable media include CDs, DVDs, BLU-RAY, flash memory, and hard disk drives.
[0045] Further, although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
[0046] The preceding description has been presented with reference to various embodiments. Persons having ordinary skill in the art and technology to which this application pertains will appreciate that alterations and changes in the described structures and methods of operation can be practiced without meaningfully departing from the principle, spirit and scope.
| # | Name | Date |
|---|---|---|
| 1 | 201621018522-IntimationOfGrant04-12-2023.pdf | 2023-12-04 |
| 2 | 201621018522-PatentCertificate04-12-2023.pdf | 2023-12-04 |
| 3 | 201621018522-CLAIMS [19-06-2020(online)].pdf | 2020-06-19 |
| 4 | 201621018522-COMPLETE SPECIFICATION [19-06-2020(online)].pdf | 2020-06-19 |
| 5 | 201621018522-FER_SER_REPLY [19-06-2020(online)].pdf | 2020-06-19 |
| 6 | 201621018522-OTHERS [19-06-2020(online)].pdf | 2020-06-19 |
| 7 | 201621018522-FER.pdf | 2019-12-19 |
| 8 | 201621018522-Correspondence-250716.pdf | 2018-08-11 |
| 9 | 201621018522-Form 1-250716.pdf | 2018-08-11 |
| 10 | 201621018522-Power of Attorney-250716.pdf | 2018-08-11 |
| 11 | abstract1.jpg | 2018-08-11 |
| 12 | Other Patent Document [21-07-2016(online)].pdf | 2016-07-21 |
| 13 | Form 26 [21-07-2016(online)].pdf | 2016-07-21 |
| 14 | Form 26 [21-07-2016(online)].pdf_21.pdf | 2016-07-21 |
| 15 | Form 3 [30-05-2016(online)].pdf | 2016-05-30 |
| 16 | Form 18 [30-05-2016(online)].pdf | 2016-05-30 |
| 17 | Form 18 [30-05-2016(online)].pdf_82.pdf | 2016-05-30 |
| 18 | Form 20 [30-05-2016(online)].jpg | 2016-05-30 |
| 19 | Drawing [30-05-2016(online)].pdf | 2016-05-30 |
| 20 | Description(Complete) [30-05-2016(online)].pdf | 2016-05-30 |
| 21 | search_19-12-2019.pdf | |