Abstract: SYSTEM AND METHOD FOR MONITORING MULTI-DOMAIN NETWORK AND MULTI-VENDOR NETWORK. Disclosed is a system (100) for monitoring a multi-domain network and a multi-vendor network that includes a management server (104). The management server (104) includes a processing circuitry (106) that is configured to collect performance data, alarm data, configuration data/logs, and signaling traces from one or more network elements (102) across one or more domain/vendor networks, correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate the root cause of an issue identified, identify and reconcile delta changes between the configuration data/logs to update the network topology, map the categorized performance data, alarm data, configuration data/logs, and signaling traces of the one or more network elements, and superimpose the mapped data onto a topology layered visualization. The present disclosure also relates to a method (200) for monitoring a multi-domain network and a multi-vendor network. Figure 1 is the reference figure.
Technical Field
The present disclosure relates to monitoring multiple communication networks. More particularly, the present disclosure relates to a system and a method for monitoring a multi-domain network and a multi-vendor network.
Background
Generally, a telecommunication network consists of devices belonging to different domains (CS, PS, IP, Radio and Transmission) and different vendors in a multi-layered environment. A multi-domain network architecture consists of several devices such as Core Routers, Core Network Elements (MME, PGW, SGW, SGSN, GGSN, MSC, MGW, HLR, etc.), Radio Nodes (RNC, BSC, eNodeB, etc.), and Transmission devices (Optical Fiber, Microwave) belonging to different layers (Layer 1, Layer 2, Layer 3, and Layer 7). The nodes are interconnected with each other and are managed by an Operation Support System (OSS). The OSS carries out functions such as network configuration, service provisioning and network inventory management. An outage in any of the above nodes of the network may impact the connected nodes belonging to other domains; for example, a fiber cut in the transmission layer will impact performance metrics (KPIs) of one or more Radio nodes (BSC, RNC) connected to it and will further impact the core devices connected to those radio nodes. Identifying the exact root cause in a particular domain is always a challenging task.
One of the recent developments discloses a method for efficient multi-layer optical networking, comprising a controller which receives traffic packets from a device. Each of said traffic packets includes a VLAN tag indicating a destination of a second device. A first monitoring unit analyzes the byte content of said received packets during a period of time and reports said analysis to said controller.
One of the recent developments discloses a communications network which has multiple resource-allocation layers and incorporates a management structure for allocating resources requested by a first of said layers from a second of said layers. At the first layer, the management structure provides an indication to the second layer of the required resources that are to be allocated from the second layer. The second layer automatically offers the required resources together with a condition for use of those resources.
One of the recent developments discloses a network management system for a multi-layer network having multiple architectural or technological domains, including an inter-domain configuration manager arranged between a set of one or more network service management applications and a set of network element domain managers, each of the domain managers being associated with a particular domain of the multi-layer network.
One of the recent developments discloses a real-time network-analysis system that comprises a network appliance and a plurality of management devices. The network appliance continuously monitors an object network and synthesizes a current network image comprising contemporaneous indicators of connectivity, occupancy, and performance of the object network. A management-client device may gain access to the network image for timely control and for use in producing long-term network-evolution plans.
One of the recent developments discloses a fault detection system comprising means for sensing faults occurring in particular components of the network and generating fault alarm data therefrom, the alarm data being propagated downstream through the network for collection at a fault management end point; and a database that characterizes the topology of the network, is located at the fault management end point, and contains entries that define the routing of circuits and trunks through the network.
One of the recent developments discloses a method for determining errors and metrics in a computer network. The method includes positioning an analyzer in communication with the network, capturing a data trace of the network with the analyzer, determining a network device topology from a first processing of the data trace, building user layer protocols using a second processing of the data trace and the determined device topology, and determining errors in the network device topology using protocol experts applied to the user layer protocols in conjunction with the determined device topology.
One of the recent developments discloses a system for providing visualization and analysis of performance data. The system may comprise one or more processors communicatively coupled to a mobile communications network. The one or more processors may be configured to provide a user interface at a mobile device for a user to view network performance data associated with the mobile communications network. The one or more processors may further be configured to provide one or more user-selectable options to a user at a mobile device to view the network performance data.
One of the recent developments discloses a network monitoring system probe, which is coupled to network interfaces and captures data packets. A monitoring system processor identifies messages specific to S1-MME interfaces and identifies GUMMEI parameters in the S1-MME interface messages. The monitoring system creates MME node entries in a network topology list, each of the MME nodes corresponding to a unique GUMMEI value.
However, in the prior-art layered approaches it is not possible to isolate and identify the exact layer in which the outage has occurred, or to see its impact across other connected domains.
One of the recent developments discloses a system and method for determining a root cause of error activity in a network. Root cause analysis includes the correlation between reported error activity for path, line and section entities along a provisioned channel in the network. Root causes can also be identified based upon the correlation of simultaneous error activity on various signal transport levels. Finally, root cause analysis can correlate error activity along various path entities. However, data is collected only from the element manager, which at best allows isolating a group of nodes having the impact. Hence it is not possible to isolate and identify the exact layer in which the outage has occurred, or to see its impact across other connected domains.
One of the recent developments discloses an inter-domain congestion management architecture having the ability to analyze and correlate congestion problems across multiple domains, provide integrated network maps, tabular displays and/or reports, and allow network managers, in appropriate circumstances, to navigate to a domain to implement corrective actions.
However, the inter-domain congestion management architecture cannot find the exact location of the problem, as it does not consider the function-wise responsibilities of each network element.
Therefore, there is a need to overcome the limitations associated with monitoring of multi-domain network and multi-vendor network.
Summary
In one aspect of the present disclosure, a system for monitoring a multi-domain network and a multi-vendor network is provided. The system includes a management server. The management server includes a processing circuitry. The processing circuitry is configured to (i) collect performance data, alarm data, configuration data/logs, and signaling traces from one or more network elements across one or more domain/vendor networks, (ii) correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate the root cause of an issue identified, (iii) identify and reconcile delta changes between the configuration data/logs to update the network topology, (iv) map the categorized performance data, alarm data, configuration data/logs, and signaling traces of the one or more network elements, and (v) superimpose the mapped data onto a topology layered visualization.
In some aspects of the present disclosure, the management server is communicatively coupled with the processing circuitry and an output unit. The management server is configured to (i) collect performance data, alarm data, configuration data/logs, and signaling traces from the one or more network elements by way of the processing circuitry, (ii) correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate the root cause of an issue identified by way of the processing circuitry, (iii) identify and reconcile delta changes between the configuration data to update the network topology, (iv) map the categorized performance data, alarm data, configuration data/logs, and signaling traces to generate topology connectivity by way of the processing circuitry, (v) superimpose the mapped data with layered visualization, and (vi) identify the root cause of network performance degradation in the one or more network elements and display it by way of the output unit.
In some aspects of the present disclosure, the delta changes are connectivity/configuration changes associated with the one or more network elements between current information and historically discovered information.
In some aspects of the present disclosure, a correlation engine categorizes event occurrences based on the network behavior at each instance relative to the general functionality of the one or more network elements, to isolate the root cause of the issue identified.
In some aspects of the present disclosure, the performance data includes measurements having statistical count on every action encountered in the one or more networks.
In some aspects of the present disclosure, the alarm data comprises fault event occurrences within the one or more networks.
In some aspects of the present disclosure, the configuration data/logs are collected to analyze hardware components across one or more network elements.
In some aspects of the present disclosure, the performance data, the alarm data, the configuration data, and the signaling traces are collected from the Operation Support System (OSS) and the one or more network elements by way of a group comprising Network Management System (NMS), File Transfer Protocol (FTP), Secure Shell File Transfer Protocol (SFTP), Simple Network Management Protocol (SNMP), and Telnet.
In some aspects of the present disclosure, visualization is achieved for performance degradation observed in the one or more network elements in a first layer, which impacts connected node performance in a second layer, a third layer, and a seventh layer.
In some aspects of the present disclosure, the layers of the layered visualization are selected from a group comprising Core, IP, Radio, Optical, and Microwave Transmission.
In a second aspect of the present disclosure, a method for monitoring a multi-domain network and a multi-vendor network is provided.
The method includes collecting performance data, alarm data, configuration data/logs, and signaling traces from one or more network elements by way of a processing circuitry. The method further includes correlating and categorizing the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate the root cause of an issue identified by way of the processing circuitry. The method further includes identifying and reconciling delta changes between the configuration data to update the network topology. The method further includes mapping the categorized performance data, alarm data, configuration data/logs, and signaling traces to generate topology connectivity by way of the processing circuitry. The method further includes superimposing the mapped categorized performance data, alarm data, configuration data/logs, and signaling traces with layered visualization. The method further includes identifying the root cause of network performance degradation in the one or more network elements and displaying it by way of an output unit.
Brief description of drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, show certain aspects of the subject matter disclosed herein and, together with the description, help explain some of the principles associated with the disclosed implementations. In the drawings:
Figure 1 illustrates a block diagram of multi-domain network and multi-vendor network monitoring system, in accordance with an aspect of the present disclosure;
Figure 1A illustrates an exemplary block diagram of the multi-domain network and multi-vendor network monitoring system in accordance with an aspect of the present disclosure;
Figure 1B illustrates an exemplary block diagram of the multi-domain network and multi-vendor network monitoring system in accordance with an aspect of the present disclosure;
Figure 2 illustrates a flowchart that depicts a method of monitoring multi-domain network and multi-vendor network, in accordance with an aspect of the present disclosure;
Figure 3 illustrates an exemplary system for real-time monitoring of a multi-domain network by way of layered visualization, in accordance with an aspect of the present disclosure; and
Figure 4 illustrates a system flow chart for identifying the potential root cause of any network issue, in accordance with an aspect of the present disclosure.
Detailed description of the preferred embodiments
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure. Thus, the following description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding of the disclosure. However, in certain instances, known details are not described in order to avoid obscuring the description.
References to one embodiment or an embodiment in the present disclosure can be references to the same embodiment or any embodiment, and such references mean at least one of the embodiments.
Reference to "one embodiment", "an embodiment", “one aspect”, “some aspects”, “an aspect” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, various features are described which may be exhibited by some embodiments and not by others.
The terms used in this specification generally have their ordinary meanings in the art, within the context of the disclosure, and in the specific context where each term is used. Alternative language and synonyms may be used for any one or more of the terms discussed herein, and no special significance should be placed upon whether or not a term is elaborated or discussed herein. In some cases, synonyms for certain terms are provided.
A recital of one or more synonyms does not exclude the use of other synonyms.
The use of examples anywhere in this specification including examples of any terms discussed herein is illustrative only, and is not intended to further limit the scope and meaning of the disclosure or of any example term. Likewise, the disclosure is not limited to various embodiments given in this specification. Without intent to limit the scope of the disclosure, examples of instruments, apparatus, methods and their related results according to the embodiments of the present disclosure are given below. Note that titles or subtitles may be used in the examples for convenience of a reader, which in no way should limit the scope of the disclosure. Unless otherwise defined, technical and scientific terms used herein have the meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains. In the case of conflict, the present document, including definitions will control.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
As mentioned above, there is a need to overcome the limitations associated with monitoring of multi-domain network and multi-vendor network. The present disclosure, therefore, provides a system for monitoring a multi-domain network and a multi-vendor network to identify the root cause of network element issues.
Figure 1 illustrates a block diagram of a multi-domain network and multi-vendor network monitoring system 100 (hereinafter referred to as "system 100"), in accordance with an aspect of the present disclosure. The system 100 may include one or more network elements (of which a network element is shown and represented as "102") and a management server 104.
Figure 1A illustrates an exemplary block diagram of the multi-domain network and multi-vendor network monitoring system 100 in accordance with an aspect of the present disclosure.
The network elements 102 may be selected from a group comprising a Mobility Management Entity (MME), a Packet Data Network Gateway (PDN gateway/PGW), a Serving Gateway (SGW), a Serving GPRS Support Node (SGSN), a Gateway GPRS Support Node (GGSN), a Mobile Switching Centre (MSC), a Media Gateway (MGW), a Home Location Register (HLR), and the like. Aspects of the present disclosure are intended to include known, well-established and later-developed network elements.
The management server 104 may be configured to collect performance data, alarm data, configuration data/logs, and signaling traces from the network elements 102 across one or more domain/vendor networks. The management server 104 may be further configured to correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate the root cause of an issue identified. The management server 104 may be further configured to identify and reconcile delta changes between the configuration data/logs to update the network topology. The management server 104 may be further configured to map the categorized performance data, alarm data, configuration data/logs, and signaling traces of the network elements 102. The management server 104 may be further configured to superimpose the mapped data onto a topology layered visualization.
The management server 104 may include a processing circuitry 106 and a storage unit 108. The processing circuitry 106 may be communicatively coupled with the storage unit 108.
The processing circuitry 106 may be configured to collect performance data, alarm data, configuration data/logs, and signaling traces from the network elements 102 across one or more domain/vendor networks. The processing circuitry 106 may be further configured to correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate the root cause of an issue identified. The processing circuitry 106 may be further configured to identify and reconcile delta changes between the configuration data/logs to update the network topology. The processing circuitry 106 may be further configured to map the categorized performance data, alarm data, configuration data/logs, and signaling traces of the network elements 102. The processing circuitry 106 may be further configured to superimpose the mapped data onto a topology layered visualization.
The management server 104 further includes an output unit 110 that may be adapted to display an identified root cause of network performance degradation in network elements 102.
The management server 104 may be configured to collect performance data, alarm data, configuration data/logs, and signaling traces from the network elements 102 by way of the processing circuitry 106. The management server 104 may be further configured to correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate the root cause of an issue identified by way of the processing circuitry 106. The management server 104 may be further configured to identify and reconcile delta changes between the configuration data to update the network topology. The management server 104 may be further configured to map the categorized performance data, alarm data, configuration data/logs, and signaling traces to generate topology connectivity by way of the processing circuitry 106. The management server 104 may be further configured to superimpose the mapped data with layered visualization. The management server 104 may be further configured to identify the root cause of network performance degradation in the network elements 102 and display it by way of the output unit 110.
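By way of illustration only, the operations described above may be arranged as a single pipeline. The following Python sketch assumes hypothetical engine objects and method names (collect, correlate, reconcile_delta, build_topology, superimpose, display) that are not part of the disclosure.

```python
class ManagementServer:
    """Illustrative sketch of the pipeline the management server 104 is
    described as running; the engine objects are placeholders for the
    collection, correlation and mapping engines discussed later."""

    def __init__(self, collection, correlation, mapping, output):
        self.collection = collection    # gathers PM, FM, CM data and signaling traces
        self.correlation = correlation  # correlates/categorizes to isolate a root cause
        self.mapping = mapping          # builds topology connectivity and the layered view
        self.output = output            # displays the identified root cause

    def run_once(self):
        raw = self.collection.collect()                               # (i) collect data
        categorized = self.correlation.correlate(raw)                 # (ii) correlate and categorize
        delta = self.collection.reconcile_delta()                     # (iii) delta changes in configuration
        topology = self.mapping.build_topology(categorized, delta)    # (iv) map to topology connectivity
        view = self.mapping.superimpose(topology)                     # (v) layered visualization
        self.output.display(view)                                     # (vi) show the root cause
        return view
```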
In some aspects of the present disclosure, the delta changes may be connectivity/configuration changes associated with the one or more network elements (102) between current information and historically discovered information.
In some aspects of the present disclosure, the output unit 110 may be selected from a group comprising a display device, a cell phone, a mobile phone, a personal computer, a computer, a tablet computer, a user interface, a user device, and the like. Aspects of the present disclosure are intended to include known, well-established and later-developed display devices.
The processing circuitry 106 may include a collection engine 112, a correlation engine 114, and a mapping engine 116. The collection engine 112 may be communicatively coupled with the correlation engine 114 and the mapping engine 116 by way of a communication bus 120.
The collection engine 112 may be adapted to collect performance data, alarm data, configuration data/logs, and signaling traces from network elements 102.
The collection engine 112 may include a Performance Management (PM) module, a Configuration Management (CM) module and a Fault Management (FM) module. The data from these modules may be collected from the Element Management Systems (EMS) or the Operation Support System (OSS), or directly from the network elements 102, by way of a Northbound Interface (NBI) or a Simple Network Management Protocol (SNMP) interface. The collected metrics may be ingested into the correlation module and visualized as part of the end-to-end topology.
The collection engine 112 may be adapted to collect multi-domain and multi-vendor data and may be adapted to segregate the collected data based on the data type such as performance management data, fault management data and configuration management data, and the like.
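A minimal sketch of this segregation step is given below, assuming a hypothetical record format with element, vendor, domain and data-type fields; the field names and example values are illustrative only and are not prescribed by the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class CollectedRecord:
    element_id: str      # e.g. "RNC-07" or "MGW-01" (hypothetical identifiers)
    vendor: str          # e.g. "Vendor 1"
    domain: str          # e.g. "Core", "IP", "Radio", "Transmission"
    data_type: str       # "PM", "FM", "CM" or "TRACE"
    payload: dict = field(default_factory=dict)

def segregate(records):
    """Split a mixed multi-domain/multi-vendor feed into performance (PM),
    fault (FM), configuration (CM) and signaling-trace buckets."""
    buckets = {"PM": [], "FM": [], "CM": [], "TRACE": []}
    for record in records:
        buckets.setdefault(record.data_type, []).append(record)
    return buckets

# Example usage with two hypothetical records.
feed = [
    CollectedRecord("RNC-07", "Vendor 2", "Radio", "PM", {"kpi": "drop_rate", "value": 2.1}),
    CollectedRecord("MGW-01", "Vendor 1", "Core", "FM", {"alarm": "LINK_DOWN", "severity": "critical"}),
]
print({k: len(v) for k, v in segregate(feed).items()})
```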
The correlation engine 114 may be adapted to correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate root cause of an issue identified.
In some aspects of the present disclosure, the correlation engine 114 may be adapted to categorize the collected performance data, alarm data, configuration data/logs and signaling traces based on the network element functionalities, alarm severity and network element configuration parameters respectively on every event occurrence.
In some aspects of the present disclosure, the correlation engine 114 may be adapted to categorize event occurrences based on the network behavior at each instance relative to the general functionality of the network elements 102, to isolate the root cause of the issue identified.
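Under the same assumptions, the categorization could be sketched as grouping events by the functional role of the affected element and ordering each group by alarm severity; the severity scale and dictionary keys below are illustrative choices, not part of the disclosure.

```python
SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2, "warning": 3}

def categorize_events(events):
    """Group event occurrences by the functional role of the affected network
    element and sort each group by alarm severity, so that the events most
    likely to point at the root cause are examined first."""
    by_function = {}
    for event in events:
        by_function.setdefault(event["function"], []).append(event)
    for grouped in by_function.values():
        grouped.sort(key=lambda e: SEVERITY_RANK.get(e.get("severity", "warning"), 99))
    return by_function

# Example: a radio KPI degradation alongside a critical transmission alarm.
events = [
    {"element": "BSC-03", "function": "radio_access", "severity": "major", "type": "kpi_degradation"},
    {"element": "FIBER-SEG-1", "function": "transmission", "severity": "critical", "type": "alarm"},
]
print(list(categorize_events(events)))
```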
The mapping engine 116 may be adapted to map the categorized performance data, alarm data, configuration data/logs, and signaling traces to generate topology connectivity.
The storage unit 108 may be configured to store logic, instructions, circuitry, interfaces, and/or codes of the processing circuitry 106 to enable the processing circuitry 106 to execute the one or more operations associated with the system 100. The storage unit 108 may be further configured to store therein, data associated with the system 100, and the like. It will be apparent to a person having ordinary skill in the art that the storage unit 108 may be configured to store various types of data associated with the system 100, without deviating from the scope of the present disclosure. Examples of the storage unit 108 may include, but are not limited to, a Relational database, a NoSQL database, a Cloud database, an Object-oriented database, and the like. Further, the storage unit 108 may include associated memories that may include, but are not limited to, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory, a removable storage drive, a hard disk drive (HDD), a solid-state memory, a magnetic storage drive, a Programmable Read Only Memory (PROM), an Erasable PROM (EPROM), and/or an Electrically EPROM (EEPROM). Embodiments of the present disclosure are intended to include or otherwise cover any type of the storage unit 108, including known, related art, and/or later developed technologies. In some embodiments of the present disclosure, a set of centralized or distributed networks of peripheral memory devices may be interfaced with the management server 104, as an example, on a cloud server.
In some aspects of the present disclosure, the delta changes may refer to any connectivity/configuration changes that have taken place in the network elements 102 between the latest discovered information and the previously discovered information. An automated discovery function in the collection engine 112 may observe and capture the real-time changes in the network. The connectivity/configuration changes may be stored in the storage unit 108 and may be used as an input for identifying the interconnectivity of the network elements 102. The collected configuration data may be scanned at regular intervals to find the differences in the results, thereby keeping the network structure updated.
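A minimal sketch of such delta reconciliation is given below, assuming (only for this illustration) that each discovery run produces a flat dictionary of configuration parameters per element.

```python
def reconcile_delta(previous: dict, current: dict) -> dict:
    """Compare the previously discovered configuration snapshot with the latest
    one and return what was added, removed or changed, which can then be used
    to update the stored network topology."""
    added = {k: current[k] for k in current.keys() - previous.keys()}
    removed = {k: previous[k] for k in previous.keys() - current.keys()}
    changed = {k: (previous[k], current[k])
               for k in previous.keys() & current.keys()
               if previous[k] != current[k]}
    return {"added": added, "removed": removed, "changed": changed}

# Example: a neighbor relation of a hypothetical element changed between two scans.
old = {"RNC-07.neighbor": "FIBER-SEG-1", "RNC-07.tx_power": 40}
new = {"RNC-07.neighbor": "FIBER-SEG-2", "RNC-07.tx_power": 40, "RNC-07.admin_state": "unlocked"}
print(reconcile_delta(old, new))
```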
In some aspects of the present disclosure, the performance data may include measurements having statistical count on every action encountered in the one or more networks.
In some aspects of the present disclosure, the alarm data may include fault event occurrences within the one or more networks.
In some aspects of the present disclosure, the configuration data/logs may be collected to analyze hardware components across network elements 102.
In some aspects of the present disclosure, the performance data, the alarm data, the configuration data, and the signaling traces may be collected from the Operation Support System (OSS) and the network elements 102 by way of a group that may include Network Management System (NMS), File Transfer Protocol (FTP), Secure Shell File Transfer Protocol (SFTP), Simple Network Management Protocol (SNMP), and Telnet.
In some aspects of the present disclosure, visualization may be achieved for performance degradation observed in the network elements 102 in a first layer, which impacts connected node performance in a second layer, a third layer, and a seventh layer.
In some aspects of the present disclosure, the layers of the layered visualization may be selected from a group comprising Core, IP, Radio, Optical, Microwave Transmission, and the like.
In some aspects of the present disclosure, the system 100 may be adapted to monitor a multi-domain and multi-vendor network by way of end-to-end layered visualization, leveraging performance metrics and configuration data towards stitching the topological correlation.
In some aspects of the present disclosure, inter-domain correlation of the categorized performance data, alarm data, configuration data/logs, and signaling traces may be used to identify the root cause, i.e., the network element causing degradation in the network performance.
In some aspects of the present disclosure, the inter-domain correlation may categorize the event occurrences based on the network behavior at each instance relative to the general functionality of the network elements 102.
In some aspects of the present disclosure, the data collected by way of the collection engine 112 may be ingested as part of correlation engine 114 to define the multi-domain and multi-vendor correlation/interconnectivity across the network elements 102. The analysis may be visualized as part of the topology layered visualization.
In some aspects of the present disclosure, topological correlation focuses on leveraging the information about the network configuration received from the network, either through the EMS/NMS or directly from the network elements 102, for identification of network connectivity. The network topology, which may include physical, logical and virtualized elements, may be loaded into a graph database where the connectivity relationship across neighboring network elements is maintained and used for processing of performance metrics and events raised in the network.
In some aspects of the present disclosure, the topology correlation may map the network events such as performance degradation, network quality impact, poor customer experience and network alarms with the topology of the affected network elements 102, to make it easier for a subscriber to visualize the network incidents in the context of the network topology.
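As an illustrative sketch of such topological correlation, the fragment below stores connectivity as a plain adjacency list with a layer tag per element and walks outward from a faulty element to list the impacted neighbors per layer; the element names and layer labels are hypothetical, and a deployed system may instead use the graph database described above.

```python
from collections import deque

# Hypothetical topology fragment: each element carries a layer tag and its neighbors.
TOPOLOGY = {
    "FIBER-SEG-1": {"layer": "Transmission", "neighbors": ["RNC-07", "BSC-03"]},
    "RNC-07":      {"layer": "Radio",        "neighbors": ["FIBER-SEG-1", "SGSN-01"]},
    "BSC-03":      {"layer": "Radio",        "neighbors": ["FIBER-SEG-1", "MSC-02"]},
    "SGSN-01":     {"layer": "Core",         "neighbors": ["RNC-07"]},
    "MSC-02":      {"layer": "Core",         "neighbors": ["BSC-03"]},
}

def impacted_by(topology, faulty_element):
    """Breadth-first walk from the element that raised a fault, returning every
    reachable element grouped by layer; this grouping is what a layered
    visualization can superimpose on the topology."""
    seen = {faulty_element}
    queue = deque([faulty_element])
    by_layer = {}
    while queue:
        node = queue.popleft()
        by_layer.setdefault(topology[node]["layer"], []).append(node)
        for neighbor in topology[node]["neighbors"]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return by_layer

# A fiber cut on the transmission segment surfaces its radio and core impact.
print(impacted_by(TOPOLOGY, "FIBER-SEG-1"))
```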
Figure 1B illustrates an exemplary block diagram of the multi-domain network and multi-vendor network monitoring system 100 in accordance with an aspect of the present disclosure.
In an exemplary scenario, the system 100 may monitor multi-domain networks across General Packet Radio Service (GPRS) network interfaces (G), Long-Term Evolution (LTE) system interfaces (S) and IP Multimedia Subsystem (IMS)/Voice over LTE (VoLTE) interfaces (M) in real time. These interfaces may be further classified based on the connectivity established between the network elements.
Figure 2 illustrates a flowchart that depicts a method 200 of monitoring multi-domain network and multi-vendor network, in accordance with an aspect of the present disclosure. The method 200 may include the following steps:
At step 202, the management server 104 may be configured to collect performance data, alarm data, configuration data/logs, and signaling traces from the network elements 102 by way of a processing circuitry 106.
At step 204, the management server 104 may be adapted to correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate root cause of an issue identified by way of the processing circuitry 106.
At step 206, the management server 104 may be adapted to identify and reconcile delta changes between the configuration data to update network topology.
At step 208, the management server 104 may be adapted to map the categorized performance data, alarm data, configuration data/logs, and signaling traces to generate topology connectivity by way of the processing circuitry 106.
At step 210, the management server 104 may be adapted to superimpose the mapped categorized performance data, alarm data, configuration data/logs, and signaling traces with layered visualization.
At step 212, the management server 104 may be adapted to identify root cause of network performance degradation in the network elements (102) and display it by way of an output unit.
Figure 3 illustrates an exemplary system for real-time monitoring of a multi-domain network by way of layered visualization, in accordance with an aspect of the present disclosure.
In an exemplary scenario, the system 100 may collect data from multiple vendors (Vendor 1, Vendor 2, and Vendor 3) across multiple domains (Core, IP, Radio, Transmission). The data across the various domains and vendors may be collected by the data collector and may be segregated based on the data type, such as performance management data, fault management data, configuration management data, and the like.
Figure 4 illustrates a system flow chart for identifying the potential root cause of any network issue, in accordance with an aspect of the present disclosure. In some aspects of the present disclosure, KPIs may refer to the network performance parameters that are monitored, along with the alarms and configuration parameters.
In some aspects of the present disclosure, the network anomalies may refer to any abnormal deviations, such as performance degradation/threshold breaches, critical alarms in the network, and network configuration changes, determined by the system 100.
In some aspects of the present disclosure, the data may be correlated with the topology relationship covering Layer 1, Layer 2, Layer 3 and Layer 7 connectivity, along with the Performance Management (PM), Fault Management (FM) and Configuration Management (CM) overlay. The correlation may identify a probable root cause and report it to users via email notification.
In an exemplary scenario, the network performance is monitored continuously in real time and the system checks whether the performance has degraded or not.
In another exemplary scenario, when the system performance is degraded, the system checks the performance, fault and configuration parameters.
In another exemplary scenario, when the system performance is not degraded, the system continues monitoring the network performance.
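One pass of this flow can be sketched as follows; the threshold check, the assumption that higher KPI values are better, and the simple preference order for the probable root cause (critical alarm first, then a recent configuration change, then the most degraded KPI) are illustrative choices and are not prescribed by the disclosure.

```python
def monitoring_cycle(kpis, thresholds, alarms, config_changes):
    """One illustrative pass of the Figure 4 flow: if no KPI breaches its
    threshold, keep monitoring; otherwise gather performance, fault and
    configuration evidence and return a probable root cause (the disclosure
    mentions e-mail notification as the reporting channel)."""
    # Assumes higher KPI values are better (e.g. a success rate in percent).
    degraded = [name for name, value in kpis.items()
                if value < thresholds.get(name, float("-inf"))]
    if not degraded:
        return {"status": "healthy"}

    critical_alarms = [a for a in alarms if a.get("severity") == "critical"]
    if critical_alarms:                  # prefer a critical fault event
        cause = critical_alarms[0]["element"]
    elif config_changes:                 # else a recent configuration change
        cause = config_changes[0]["element"]
    else:                                # else fall back to the most degraded KPI
        cause = min(kpis, key=kpis.get)

    return {"status": "degraded", "degraded_kpis": degraded, "probable_root_cause": cause}

# Hypothetical example: a degraded radio KPI coinciding with a critical transmission alarm.
result = monitoring_cycle(
    kpis={"RNC-07.call_setup_success_rate": 91.0},
    thresholds={"RNC-07.call_setup_success_rate": 97.0},
    alarms=[{"element": "FIBER-SEG-1", "severity": "critical", "type": "LOS"}],
    config_changes=[],
)
print(result)
```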
Advantages:
• The present disclosure provides a system to identify the exact root cause, i.e., the network element causing degradation in the network performance.
• The present disclosure provides a layered topology that visualizes the performance metrics to identify the probable root cause of the problem specific to a user.
• The present disclosure provides a reduced Mean Time to Repair (MTTR) in case of an outage in the network by isolating the root cause of the issue identified.
The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described can be directed to various combinations and sub-combinations of the disclosed features and/or combinations and sub-combinations of the several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.
Claims
I/We claim(s):
1. A system (100) for monitoring multi-domain network and multi-vendor network, comprising:
a management server (104), comprising:
a processing circuitry (106), that is configured to:
a. collect performance data, alarm data, configuration data/logs, and signaling traces from one or more network elements (102) across one or more domain/vendor networks;
b. correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate root cause of an issue identified;
c. identify and reconcile delta changes between the configuration data/logs to update network topology;
d. map the categorized performance data, alarm data, configuration data/logs, and signaling traces of the one or more network elements (102); and
e. superimpose the mapped data to topology layered visualization.
2. The system (100) for monitoring multi-domain network and multi-vendor network as claimed in claim 1, wherein the management server (104) is communicatively coupled with the processing circuitry (106) and an output unit (110), and the management server (104) is configured to:
i. collect the performance data, the alarm data, the configuration data/logs, and the signaling traces from the one or more network elements (102) by way of a processing circuitry (106);
ii. correlate and categorize the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate root cause of an issue identified by way of the processing circuitry (106);
iii. identify and reconcile delta changes between the configuration data to update network topology;
iv. map the categorized performance data, alarm data, configuration data/logs, and signaling traces to generate topology connectivity by way of the processing circuitry (106);
v. superimpose the mapped data with layered visualization; and
vi. identify root cause of network performance degradation in one or more network elements (102) and display it by way of an output unit (110).
3. The system (100) for monitoring multi-domain network and multi-vendor network as claimed in claim 1, wherein the delta changes are connectivity/configuration changes associated with the one or more network elements (102) between current information and historic discovered information.
4. The system (100) for monitoring multi-domain network and multi-vendor network as claimed in claim 2, wherein a correlation engine (114) categorizes event occurrences based on the network behavior at each instance relative to the general functionality of the one or more network elements (102) to isolate the root cause of the issue identified.
5. The system (100) for monitoring multi-domain network and multi-vendor network as claimed in claim 1, wherein the performance data comprises measurements having statistical count on every action encountered in the one or more networks.
6. The system (100) for monitoring multi-domain network and multi-vendor network as claimed in claim 1, wherein the alarm data comprises fault event occurrences within the one or more networks.
7. The system (100) for monitoring multi-domain network and multi-vendor network as claimed in claim 1, wherein the configuration data/logs are collected to analyze hardware components across one or more network elements (102).
8. The system (100) for monitoring multi-domain network and multi-vendor network as claimed in claim 1, wherein the performance data, the alarm data, the configuration data, and the signaling traces are collected from the Operation Support System (OSS) and the one or more network elements by way of a group comprising Network Management System (NMS), File Transfer Protocol (FTP), Secure Shell File Transfer Protocol (SFTP), Simple Network Management Protocol (SNMP), and Telnet.
9. The system (100) for monitoring multi-domain network and multi-vendor network as claimed in claim 1, wherein visualization is achieved for performance degradation observed in the one or more network elements (102) in a first layer, which impacts connected node performance in a second layer, a third layer, and a seventh layer.
10. A method (200) for monitoring multi-domain network and multi-vendor network, comprising:
i. collecting (202) performance data, alarm data, configuration data/logs, and signaling traces from one or more network elements (102) by way of a processing circuitry (106);
ii. correlating and categorizing (204) the collected performance data, alarm data, configuration data/logs, and signaling traces to isolate root cause of an issue identified by way of the processing circuitry (106);
iii. identifying and reconciling (206) delta changes between the configuration data to update network topology;
iv. mapping (208) the categorized performance data, alarm data, configuration data/logs, and signaling traces to generate topology connectivity by way of the processing circuitry (106);
v. superimposing (210) the mapped categorized performance data, alarm data, configuration data/logs, and signaling traces with layered visualization; and
vi. identifying (212) root cause of network performance degradation in one or more network elements (102) and displaying it by way of an output unit (110).