
Method And System For Unified Data Ingestion In A Network Performance Management System

Abstract: The present disclosure relates to a method and a system for unified data ingestion in a network performance management system, the method comprising: configuring, by a configuration unit [302], one or more source systems [502] of one or more vendors, wherein each source system [502] corresponds to a separate vendor; storing, by an ingestion layer [504], a set of metadata associated with the one or more source systems [502]; fetching, by a transceiver unit [306] via the ingestion layer [504], a set of data from the one or more source systems [502] based on the set of metadata associated with the one or more source systems [502]; processing, by a processing unit [308] via the ingestion layer [504], the set of data to store the data; and providing, by the transceiver unit [306] via the ingestion layer [504], the stored data to a normalisation layer [100b]. [FIG. 3]


Patent Information

Filing Date
15 July 2023
Publication Number
03/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Ankit Murarka
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Mohit Bhanwria
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Durgesh Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Zenith Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR UNIFIED DATA INGESTION IN A NETWORK PERFORMANCE MANAGEMENT SYSTEM”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR UNIFIED DATA INGESTION IN A NETWORK PERFORMANCE MANAGEMENT SYSTEM
TECHNICAL FIELD
[0001]
Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to a method and a system for unified data ingestion in a network performance management system.
BACKGROUND
[0002]
The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003]
Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify service quality of the overall network, and individual/grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues and/or failures in the network and take precautionary measures accordingly.
[0004]
Typically, in a mobile network, a network node or network element, such as a base station, an access point (AP), a router, etc., collects event statistics in the form of one or more performance counters and sends them to a network performance management system for diagnostic purposes. The one or more performance counters may be logged and maintained by the management system in order to assess the performance of network nodes. Due to the complexity of a typical network comprising multiple vendors, there can be a large number of performance counters. Each vendor may utilize a distinct protocol, format, and/or mechanism for collecting and storing data.
[0005]
Further, a network performance management system is utilized for aggregating and analysing the one or more performance counters. Also, the network performance management system maintains network integrity, identifies potential issues, and optimizes network performance. However, the current versions of the network performance management systems fail to handle various vendor-specific data formats and protocols. Furthermore, laborious and time-consuming code-level amendments are required for integrating one or more new vendors into the network performance management system.
[0006]
Hence, the current solutions for data ingestion in the network performance management system face several challenges, such as lack of uniformity in data format and protocol due to vendor diversity, coding overhead, complexity, excessive time consumption, and manual errors. Overall, the current solutions lack efficiency and scalability in integrating new vendors into the network performance management system, which results in operational challenges and resource-intensive processes.
[0007]
Thus, there exists an imperative need in the art for a system for unified data ingestion that reduces manpower, manual intervention, and the time required to onboard a new vendor, which the present disclosure aims to address.
SUMMARY
[0008]
This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0009]
An aspect of the present disclosure may relate to a method for unified data ingestion in a network performance management system. The method comprises configuring, by a configuration unit via an ingestion layer from a user interface, one or more source systems of one or more vendors, wherein each source system corresponds to a separate vendor. The method comprises storing, by the ingestion layer in a database unit, a set of metadata associated with the one or more source systems, wherein the set of metadata is related to the one or more vendors. The method comprises fetching, by a transceiver unit via the ingestion layer, a set of data from the one or more source systems based on the set of metadata associated with the one or more source systems. The method comprises processing, by a processing unit via the ingestion layer, the set of data to store the data. The method comprises providing, by the transceiver unit via the ingestion layer, the stored data to a normalisation layer.
[00010]
In an exemplary aspect of the present disclosure, the processing, by the processing unit via the ingestion layer, the set of data to store the data comprises calculating, by the processing unit, a set of changes in the set of data using a trained model and storing, by the processing unit, a set of delta files based on the calculation of the set of changes in the set of data.
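The delta calculation described above can be illustrated with a minimal sketch. The specification itself uses a trained AI/ML model for this step; the plain dictionary comparison below is only a hand-written stand-in showing what a set of changes between two successive data snapshots might contain (all counter names and values are illustrative):

```python
def compute_delta(previous: dict, current: dict) -> dict:
    """Compare two snapshots of vendor data and return only the changes."""
    delta = {"added": {}, "changed": {}, "removed": {}}
    for key, value in current.items():
        if key not in previous:
            delta["added"][key] = value
        elif previous[key] != value:
            delta["changed"][key] = value
    for key in previous:
        if key not in current:
            delta["removed"][key] = previous[key]
    return delta

# Example: two successive pulls of performance counters from a source system
old = {"rrc_setup_success": 980, "rrc_setup_attempts": 1000}
new = {"rrc_setup_success": 990, "rrc_setup_attempts": 1000, "handover_success": 47}
print(compute_delta(old, new))
```

Persisting only such a delta as a "delta file", rather than the full snapshot, is what keeps repeated pulls from the same source system inexpensive.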
[00011]
In an exemplary aspect of the present disclosure, the trained model is trained using one of an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique.
[00012]
In an exemplary aspect of the present disclosure, the ingestion layer comprises one or more of a fault management ingestion microservice, a performance management ingestion microservice, a configuration management ingestion microservice, a charging data records ingestion microservice, an infra metric broker, a log metric microservice, and an inventory ingestion microservice.
[00013]
In an exemplary aspect of the present disclosure, the set of data, fetched by the transceiver unit via the ingestion layer, comprises one or more performance counters, wherein the one or more performance counters comprise one or more of success hits and failure hits for one or more request messages and one or more response messages corresponding to data ingestion.
[00014]
In an exemplary aspect of the present disclosure, the set of metadata associated with the one or more source systems comprises one or more of a format of data, a pull frequency, protocol information, and source data location information.
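A per-vendor metadata record covering the four items just listed might be sketched as follows. This is not part of the specification; the field names and the two registry entries are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class SourceSystemMetadata:
    vendor: str            # vendor the source system belongs to
    data_format: str       # e.g. "xml" or "csv"
    pull_frequency_s: int  # how often to fetch, in seconds
    protocol: str          # e.g. "sftp" or "http"
    source_location: str   # where the data resides on the source system

# Hypothetical entries for two vendors; values are illustrative only
registry = [
    SourceSystemMetadata("vendor_a", "xml", 900, "sftp", "/pm/counters/"),
    SourceSystemMetadata("vendor_b", "csv", 300, "http", "https://vendor-b.example/pm"),
]

for meta in registry:
    print(f"{meta.vendor}: pull every {meta.pull_frequency_s}s over {meta.protocol}")
```

Because the fetch step is driven entirely by such records, onboarding a new vendor reduces to adding a record through the user interface rather than changing code.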
[00015]
In an exemplary aspect of the present disclosure, the set of data is fetched from the one or more source systems at one of a predefined periodic interval and an adaptive periodic interval.
In an exemplary aspect of the present disclosure, the stored data is provided to the normalisation layer for subsequent processing of the stored data.
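The specification does not say how the adaptive periodic interval is derived. One simple interpretation, shown here purely as an illustrative sketch, is to shorten the polling interval while a source system is producing new data and to back off while it is idle, within fixed bounds:

```python
def next_interval(current_s: float, data_arrived: bool,
                  min_s: float = 60.0, max_s: float = 3600.0) -> float:
    """Shrink the polling interval when data arrived; back off otherwise."""
    if data_arrived:
        return max(min_s, current_s / 2)   # poll faster while the source is active
    return min(max_s, current_s * 2)       # back off while the source is idle

interval = 600.0
interval = next_interval(interval, data_arrived=True)   # 300.0
interval = next_interval(interval, data_arrived=False)  # 600.0
```

A predefined periodic interval is the degenerate case where `next_interval` always returns the same constant.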
[00016]
Another aspect of the present disclosure may relate to a system for unified data ingestion in a network performance management system. The system comprises a configuration unit adapted to configure, via the ingestion layer, from a user interface, one or more source systems of one or more vendors, wherein each source system corresponds to a separate vendor. The system comprises a database unit connected at least to the configuration unit, the database unit configured to store, via the ingestion layer, a set of metadata associated with the one or more source systems, wherein the set of metadata is related to the one or more vendors. The system comprises a transceiver unit connected at least to the database unit, the transceiver unit configured to fetch a set of data from the one or more source systems based on the set of metadata associated with the one or more source systems. The system comprises a processing unit connected at least to the transceiver unit, the processing unit configured to process, via the ingestion layer, the set of data to store the data. The transceiver unit is further configured to provide the stored data to a normalisation layer.
[00017]
Another aspect of the present disclosure may relate to a user equipment (UE) comprising a processor. The processor is configured to configure, via a configuration unit, one or more source systems of one or more vendors, wherein each source system corresponds to a separate vendor. The processor is further configured to store, via a database unit, a set of metadata associated with the one or more source systems, wherein the set of metadata is related to the one or more vendors. The processor is further configured to fetch, via a transceiver unit, a set of data from the one or more source systems based on the set of metadata associated with the one or more source systems. The processor is further configured to process, via a processing unit, the set of data to store the data. The processor is further configured to provide, via the transceiver unit, the stored data to a normalisation layer.
[00018]
Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for unified data ingestion in a network performance management system, the instructions including executable code which, when executed by one or more units of a system, causes: a configuration unit to configure, via a user interface, one or more source systems of one or more vendors, wherein each source system corresponds to a separate vendor; a database unit to store a set of metadata associated with the one or more source systems, wherein the set of metadata is related to the one or more vendors; a transceiver unit to fetch a set of data from the one or more source systems based on the set of metadata associated with the one or more source systems; a processing unit to process the set of data to store the data; and the transceiver unit to provide the stored data to a normalisation layer.
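The claimed sequence of steps (configure, store metadata, fetch, process and store, provide) can be traced end to end in a minimal in-memory sketch. None of this code is part of the specification; the class and method names are illustrative stand-ins for the claimed units, and the real transceiver unit would pull from an actual source system rather than being handed the data directly:

```python
class DatabaseUnit:
    """Illustrative stand-in for the claimed database unit."""
    def __init__(self):
        self.metadata = {}
    def store_metadata(self, source_id, meta):
        self.metadata[source_id] = meta

class IngestionLayer:
    """Illustrative stand-in tying fetch, process/store, and provide together."""
    def __init__(self, db):
        self.db = db
        self.stored = {}
    def fetch(self, source_id, source_data):
        # the stored metadata drives how the data is interpreted
        fmt = self.db.metadata[source_id]["format"]
        return {"source": source_id, "format": fmt, "payload": source_data}
    def process_and_store(self, record):
        self.stored[record["source"]] = record
        return record
    def provide(self, source_id):
        return self.stored[source_id]   # handed on to the normalisation layer

db = DatabaseUnit()
db.store_metadata("vendor_a", {"format": "xml"})     # configure + store metadata
layer = IngestionLayer(db)
rec = layer.fetch("vendor_a", "<counters>...</counters>")
layer.process_and_store(rec)
print(layer.provide("vendor_a")["format"])           # xml
```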
OBJECTS OF THE INVENTION
[00019]
Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[00020]
It is an object of the present disclosure to provide a system and a method for unified data ingestion in which onboarding a new vendor requires no or minimal code-level changes.
[00021]
It is another object of the present disclosure to provide a solution that streamlines the onboarding process with different vendors seamlessly by using its artificial intelligence (AI)/machine learning (ML) algorithms through a user-friendly user interface.
[00022]
It is another object of the present disclosure to provide a solution that remotely fetches and pulls data from one or more source systems without impacting one or more internal processes.
[00023]
It is yet another object of the present disclosure to provide a solution where no downtime is required to onboard a new vendor/source system, as the network performance management system does not have to go through the software life cycle process (development, testing, integration testing, and deployment).
DESCRIPTION OF THE DRAWINGS
[00024]
The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[00025]
FIG. 1 illustrates an exemplary block diagram of a network performance management system, in accordance with the exemplary embodiments of the present disclosure.
[00026]
FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[00027]
FIG. 3 illustrates an exemplary block diagram of a system for unified data ingestion in a network performance management system, in accordance with exemplary implementations of the present disclosure.
[00028]
FIG. 4 illustrates a method flow diagram for a method for unified data ingestion in a network performance management system, in accordance with exemplary implementations of the present disclosure.
[00029]
FIG. 5 illustrates an exemplary flow diagram for unified data ingestion in a network performance management system, in accordance with exemplary implementations of the present disclosure.
The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[00030]
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[00031]
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[00032]
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[00033]
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[00034]
The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.
[00035]
As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[00036]
As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[00037]
As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[00038]
As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[00039]
All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[00040]
As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information, or a combination thereof between units/components within the system and/or connected with the system.
[00041]
As discussed in the background section, during onboarding of a new vendor in a network performance management system, several code-level amendments are required, which consume significant resources and time. The current solutions for data ingestion of the vendor into the network performance management system face several challenges, such as lack of uniformity in data format and protocol due to vendor diversity, coding overhead, complexity, excessive time consumption, and manual errors. Overall, the current solutions lack efficiency and scalability in integrating new vendors into the network performance management system, which results in operational challenges and resource-intensive processes, and the current known solutions have several shortcomings. The present disclosure discloses a solution that overcomes the above-mentioned and other existing problems in this field of technology by providing a method and a system for unified data ingestion in the network performance management system, which streamlines the onboarding process with different vendors seamlessly through a user-friendly interface. Further, when onboarding new vendors with the solution of the present disclosure, no code-level amendments are required, and all vendor-related configurations are done with the help of the user interface, which reduces manpower, manual intervention, and the time to onboard a new vendor.
[00042]
FIG. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention. Referring to FIG. 1, the network performance management system [100] comprises various sub-systems such as: integrated performance management system [100a], normalisation layer [100b], computation layer [100d], anomaly detection layer [100o], streaming engine [100l], load balancer [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system, mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q] and correlation engine [100n]. Exemplary connections between these subsystems are also as shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[00043]
The various components of the system [100] may include the following:
[00044]
Integrated performance management system [100a] comprises a 5G performance engine [100v] and a 5G Key Performance Indicator (KPI) Engine [100u].
[00045]
5G Performance Management Engine [100v]: The 5G Performance Management engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics such as connection speed, latency, data transfer rates, and many others. This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The 5G Performance Management engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
[00046]
5G Key Performance Indicator (KPI) Engine [100u]: The 5G Key Performance Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of all the network elements. It uses the one or more performance counters, which are collected and processed by the 5G Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100u] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine [100u] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
[00047]
Ingestion layer: The Ingestion layer forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes it by validating its integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the Normalisation layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
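The validate-then-route behaviour of the Ingestion layer can be sketched as follows. The routing table and record shape are illustrative assumptions, not part of the specification; the real layer would apply far richer integrity checks:

```python
# Illustrative routing table: data category -> downstream destination
ROUTES = {
    "alarm": "streaming_engine",
    "counter": "normalisation_layer",
    "cdr": "message_broker",
    "log": "streaming_analytics",
}

def ingest(record: dict) -> str:
    """Validate an incoming record and return the destination it is routed to."""
    # minimal integrity checks before the data is forwarded
    if "category" not in record or "payload" not in record:
        raise ValueError("malformed record")
    if record["category"] not in ROUTES:
        raise ValueError(f"unknown data category: {record['category']}")
    return ROUTES[record["category"]]

print(ingest({"category": "counter", "payload": {"rrc_setup_success": 990}}))
# -> normalisation_layer
```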
[00048]
Normalisation layer [100b]: The Normalisation Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that's been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalisation" reduces redundancy and improves data integrity. Upon completion of normalisation, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer, and Graph Layer, depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalisation Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalisation Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager for maintaining and improving the quality of services, and the Streaming Engine for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalisation Layer's [100b] critical role in the entire system.
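A minimal sketch of the "common standard" idea: vendor-specific counter names are mapped onto one shared schema so that data from different vendors becomes directly comparable. The vendor names, field mappings, and counter names below are hypothetical examples, not taken from the specification:

```python
# Hypothetical per-vendor field mappings onto a common counter schema
FIELD_MAPS = {
    "vendor_a": {"RRC.SuccConn": "rrc_setup_success"},
    "vendor_b": {"rrcConnEstabSucc": "rrc_setup_success"},
}

def normalise(vendor: str, record: dict) -> dict:
    """Rename vendor-specific counter names to the common standard."""
    mapping = FIELD_MAPS[vendor]
    return {mapping.get(k, k): v for k, v in record.items()}

a = normalise("vendor_a", {"RRC.SuccConn": 990})
b = normalise("vendor_b", {"rrcConnEstabSucc": 985})
assert a.keys() == b.keys()   # both vendors are now directly comparable
```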
[00049]
Caching layer [100c]: The Caching Layer [100c] in the Integrated Performance Management system plays a significant role in data management and optimization. During the initial phase, the Normalisation Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalizer Layer then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalisation Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
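"Temporarily holds data that is likely to be reused" is the classic least-recently-used pattern. The bounded cache below is only an illustrative sketch of that behaviour (the specification does not name an eviction policy); the keys are hypothetical:

```python
from collections import OrderedDict

class CachingLayer:
    """Bounded cache that keeps the most recently used entries."""
    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._data = OrderedDict()

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict least recently used

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)      # mark as recently used
            return self._data[key]
        return default

cache = CachingLayer(capacity=2)
cache.put("kpi:throughput", 42.5)
cache.put("kpi:latency", 9.3)
cache.get("kpi:throughput")           # touch -> becomes most recent
cache.put("kpi:loss", 0.01)           # evicts "kpi:latency"
print(cache.get("kpi:latency"))       # None
```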
[00050]
Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalisation Layer [100b]. The Normalizer Layer then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer, and also feeds it to the Message Broker. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine, utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager assesses and ensures the quality of the services. And the Streaming Engine processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalisation Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[00051]
Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
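The topic-based producer/consumer mechanism described above may be sketched, purely for illustration, as follows. The `MessageBroker` class, its methods, and the example topics are invented for this sketch and do not describe the actual implementation of the Message Broker [100e] (which additionally provides filesystem-backed storage and fault tolerance not modelled here):

```python
from collections import defaultdict

class MessageBroker:
    """Toy publish-subscribe broker: producers publish to named topics,
    and every consumer subscribed to a topic receives each message."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a consumer callback for a topic (permanent or ad-hoc)."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver the message to every consumer of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

received = []
broker = MessageBroker()
broker.subscribe("alarms", received.append)       # a consumer of "alarms"
broker.publish("alarms", {"severity": "critical"})  # delivered to consumer
broker.publish("counters", {"hits": 10})            # no subscriber: dropped
```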
[00052]
Graph layer [100f]: The Graph Layer [100f], serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships. Moreover, the Modeler is adept at processing the steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], 5G Performance Management Engine, or 5G KPI Engine [100u]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
[00053]
Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another micro-service. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
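The interval-based execution described above may be sketched, as a non-limiting illustration, with the following fragment. The `ScheduledTask` name, the tick-driven design, and the example interval are assumptions made for this sketch; time is passed in explicitly so the scheduling logic can be exercised without actually sleeping:

```python
class ScheduledTask:
    """Sketch of executing a task at a predetermined interval.
    The caller supplies the current time on each tick."""

    def __init__(self, interval_seconds, action):
        self.interval = interval_seconds
        self.action = action
        self.next_run = 0.0  # run immediately on the first tick

    def tick(self, now):
        """Run the task if its scheduled time has arrived, then
        reschedule it one interval into the future."""
        if now >= self.next_run:
            self.action()
            self.next_run = now + self.interval

runs = []
# Hypothetical task fired every 300 seconds (e.g. an API call or query).
task = ScheduledTask(interval_seconds=300, action=lambda: runs.append("ran"))
for t in (0, 100, 200, 300, 400, 600):
    task.tick(t)
# The task fires at t=0, t=300, and t=600 only.
```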
[00054]
Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
[00055]
Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
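The execution model described above, in which the tasks of a chain run sequentially while independent chains run simultaneously, may be sketched as follows. The function names and the two example chains are hypothetical; a thread pool stands in for whatever distributed execution substrate the framework actually uses:

```python
from concurrent.futures import ThreadPoolExecutor

def run_chain(tasks, data):
    """Execute the tasks of one chain sequentially, feeding each
    task's output into the next task."""
    for task in tasks:
        data = task(data)
    return data

def run_chains_in_parallel(chains, inputs):
    """Execute several independent chains concurrently."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_chain, chains, inputs))

# Two invented chains: (double every value, then sum) and
# (filter small values, then count).
chain_a = [lambda xs: [x * 2 for x in xs], sum]
chain_b = [lambda xs: [x for x in xs if x > 1], len]
results = run_chains_in_parallel([chain_a, chain_b], [[1, 2, 3], [1, 2, 3]])
# chain_a: [1,2,3] -> [2,4,6] -> 12; chain_b: [1,2,3] -> [2,3] -> 2
```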
[00056]
Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
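The chunk-based partitioning described above may be illustrated, in a non-limiting manner, by the following sketch. The chunk size and payload are invented, and distribution across nodes is omitted; the sketch only shows how a file is split into fixed-size chunks and later reassembled in order:

```python
def split_into_chunks(data: bytes, chunk_size: int):
    """Partition a byte string into fixed-size chunks, as a DFS would
    before distributing them across storage nodes."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def reassemble(chunks):
    """A client reads the chunks back in order to reconstruct the file."""
    return b"".join(chunks)

payload = b"network-performance-data"          # 24 bytes, illustrative
chunks = split_into_chunks(payload, chunk_size=8)  # three 8-byte chunks
```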
[00057]
Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages events and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
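Two of the routing strategies named above, round-robin scheduling and header-based dispatch, may be sketched together as follows. The class name, the `X-Service` header, and the server names are all invented for this illustration and do not appear in the disclosure:

```python
import itertools

class LoadBalancer:
    """Sketch of two routing strategies: a header match dispatches the
    request to a dedicated backend; otherwise requests rotate evenly
    across the available servers (round-robin)."""

    def __init__(self, servers, header_routes=None):
        self.servers = servers
        self._rr = itertools.cycle(servers)     # round-robin rotation
        self.header_routes = header_routes or {}  # header value -> backend

    def route(self, request):
        # Header-based dispatch: a recognised header value wins.
        target = self.header_routes.get(request.get("X-Service"))
        if target is not None:
            return target
        # Otherwise fall back to round-robin across the backends.
        return next(self._rr)

lb = LoadBalancer(["srv-1", "srv-2"], header_routes={"billing": "srv-billing"})
a = lb.route({})                        # round-robin: first backend
b = lb.route({})                        # round-robin: second backend
c = lb.route({"X-Service": "billing"})  # header-based: dedicated backend
```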
[00058]
Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The Streaming Engine's [100l] ultimate goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
[00059]
Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System. The fundamental purpose of the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine. The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in Excel format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
[00060]
FIG. 2 illustrates an exemplary block diagram of a computing device [1000] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [1000] may also implement a method for unified data ingestion in a network performance management system utilising a system. In another implementation, the computing device [1000] itself implements the method for unified data ingestion in the network performance management system using one or more units configured within the computing device [1000], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[00061]
The computing device [1000] may include a bus [1002] or other communication mechanism for communicating information, and a hardware processor [1004] coupled with bus [1002] for processing information. The hardware processor [1004] may be, for example, a general purpose microprocessor. The computer system [1000] may also include a main memory [1006], such as a random access memory (RAM), or other dynamic storage device, coupled to the bus [1002] for storing information and instructions to be executed by the processor [1004]. The main memory [1006] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [1004]. Such instructions, when stored in non-transitory storage media accessible to the processor [1004], render the computer system [1000] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computer system [1000] further includes a read only memory (ROM) [1008] or other static storage device coupled to the bus [1002] for storing static information and instructions for the processor [1004].
[00062]
A storage device [1010], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [1002] for storing information and instructions. The computer system [1000] may be coupled via the bus [1002] to a display [1012], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [1014], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [1002] for communicating information and command selections to the processor [1004]. Another type of user input device may be a cursor control [1016], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [1004], and for controlling cursor movement on the display [1012]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[00063]
The computer system [1000] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system [1000] causes or programs the computer system [1000] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computer system [1000] in response to the processor [1004] executing one or more sequences of one or more instructions contained in the main memory [1006]. Such instructions may be read into the main memory [1006] from another storage medium, such as the storage device [1010]. Execution of the sequences of instructions contained in the main memory [1006] causes the processor [1004] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[00064]
The computer system [1000] also may include a communication interface [1018] coupled to the bus [1002]. The communication interface [1018] provides a two-way data communication coupling to a network link [1020] that is connected to a local network [1022]. For example, the communication interface [1018] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [1018] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [1018] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[00065]
The computer system [1000] can send messages and receive data, including program code, through the network(s), the network link [1020] and the communication interface [1018]. In the Internet example, a server [1030] might transmit a requested code for an application program through the Internet [1028], the ISP [1026], the Host [1024], the local network [1022] and the communication interface [1018]. The received code may be executed by the processor [1004] as it is received, and/or stored in the storage device [1010], or other non-volatile storage for later execution.
[00066]
Referring to FIG. 3, an exemplary block diagram of a system [300] for unified data ingestion in a network performance management system is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one configuration unit [302], at least one database unit [304], at least one transceiver unit [306], and at least one processing unit [308]. The system [300] may be connected to the network performance management system [100]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device or may be independent of, but in communication with, the user device (which may also be referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[00067]
The system [300] is configured to perform unified data ingestion in the network performance management system with the help of the interconnection between the components/units of the system [300].
[00068]
Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[00069]
In order to achieve unified data ingestion in the network performance management system, the configuration unit [302] is adapted to configure, via an ingestion layer [504], from a user interface [506], one or more source systems [502] of one or more vendors, wherein each source system corresponds to a separate vendor.
[00070]
The present disclosure encompasses that the one or more source systems [502] are systems or devices from which data, such as network performance data, is collected for analysis and monitoring purposes. The one or more source systems [502] may include, but are not limited to, one or more servers, one or more data collection applications, and one or more firewalls.
[00071]
The present disclosure encompasses that the one or more vendors refer to users such as body corporates and/or individual users.
[00072]
Further, the database unit [304] is connected at least to the configuration unit [302], and the database unit [304] is configured to store, by the ingestion layer [504], a set of metadata associated with the one or more source systems [502], wherein the set of metadata is related to the one or more vendors.
[00073]
The present disclosure encompasses that the set of metadata associated with the one or more source systems [502] comprises one or more of a format of data, a pull frequency, a protocol information, and a source data location information.
[00074]
The present disclosure encompasses that the format of data refers to a structure in which the data, i.e., the set of metadata associated with the one or more source systems, is encoded and/or stored. The format of the data may include, but is not limited to, a JavaScript Object Notation (JSON) format, an eXtensible Markup Language (XML) format, a Comma-Separated Values (CSV) format, a Plaintext format, and/or a Binary format.
[00075]
The present disclosure encompasses that the pull frequency is a value that indicates the frequency of data retrieval by the network performance management system, i.e., the frequency of retrieval of the set of metadata by the network performance management system. The pull frequency may be defined in terms of seconds, minutes, hours, or any other relevant unit of time, based on requirements for monitoring and/or analysis of network elements by the network performance management system.
[00076]
The present disclosure encompasses that the protocol information refers to one or more communication protocols used to transfer the set of metadata between the one or more source systems [502] and the network performance management system, such as SNMP (Simple Network Management Protocol), FTP (File Transfer Protocol), SSH (Secure Shell), or any other communication protocol that may be obvious to the person skilled in the art to implement the solution of the present disclosure.
[00077]
The present disclosure encompasses that the source data location information indicates a physical and/or a logical location from which the network performance management system retrieves the set of metadata. The source data location information includes, but is not limited to, an address associated with the set of metadata, a hostname associated with the set of metadata, a file path associated with the set of metadata, a database connection string associated with the set of metadata, a Uniform Resource Locator (URL) associated with the set of metadata, or any other identifier which indicates the address of the set of metadata and that may be obvious to the person skilled in the art to implement the solution of the present disclosure.
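The four metadata fields described above (format of data, pull frequency, protocol information, and source data location information) might be captured as a per-source record, as in the following non-limiting sketch. The field names, the validation rule, and the example values are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SourceMetadata:
    """One hypothetical metadata record per configured source system [502];
    the four fields mirror those named in the disclosure, values invented."""
    data_format: str       # e.g. "CSV", "JSON", "XML"
    pull_frequency_s: int  # how often the ingestion layer pulls, in seconds
    protocol: str          # e.g. "SFTP", "SNMP", "FTP"
    source_location: str   # URL, file path, or hostname of the source

# An invented record for an invented vendor.
vendor_a = SourceMetadata("CSV", 900, "SFTP", "sftp://pm.vendor-a.example/stats/")

def validate(meta: SourceMetadata) -> bool:
    """A pull frequency must be positive and a location must be present."""
    return meta.pull_frequency_s > 0 and bool(meta.source_location)
```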
[00078]
Further, the transceiver unit [306] is connected at least to the database unit [304], and the transceiver unit [306] is configured to fetch, via the ingestion layer [504], a set of data from the one or more source systems [502] based on the set of metadata associated with the one or more source systems [502].
[00079]
The present disclosure encompasses that the set of data is fetched from the one or more source systems [502] at one of a predefined periodic interval and an adaptive periodic interval. The predefined periodic interval is a predetermined duration of time at which the set of data is fetched from the one or more source systems [502]. For instance, the predefined periodic interval may range from 5 minutes to 60 minutes and/or 1 week to 4 weeks. The adaptive periodic interval dynamically adjusts the time at which the set of data is fetched from the one or more source systems [502] based on one or more conditions, such as a time associated with peak network utilisation activity.
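The distinction between the predefined and the adaptive periodic interval may be sketched as follows. The specific policy shown, doubling the interval under peak load, is an assumption made for illustration and is not taken from the disclosure, which leaves the adaptation rule open:

```python
def next_pull_interval(base_interval_s, network_load, peak_threshold=0.8):
    """Sketch of an adaptive periodic interval: back off during peak
    network utilisation, otherwise use the predefined interval.
    The doubling policy and threshold are illustrative assumptions."""
    if network_load >= peak_threshold:
        return base_interval_s * 2   # fetch less often under peak load
    return base_interval_s           # predefined interval otherwise

quiet = next_pull_interval(300, network_load=0.3)  # off-peak: 300 s
busy = next_pull_interval(300, network_load=0.9)   # peak: backed off to 600 s
```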
[00080]
The present disclosure encompasses that the set of data fetched by the transceiver unit [306] via the ingestion layer [504] comprises one or more performance counters. The one or more performance counters comprise one or more of success hits and failure hits for one or more request messages and one or more response messages corresponding to data ingestion.
[00081]
The present disclosure encompasses that the one or more performance counters refer to one or more indicators used to measure performance corresponding to the data ingestion, such as one or more of success hits and failure hits for one or more request messages. Also, the one or more performance counters, encompassing metrics such as the success hits and the failure hits, are organized within a CSV (Comma-Separated Values) file. Further, the CSV file is packaged in either a tape archive (TAR) format and/or a zip format.
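The organisation described above, counters written into a CSV file which is then packaged into a TAR archive, may be sketched with standard-library tooling as follows. The column names, file name, and counter values are invented for this illustration:

```python
import csv
import io
import tarfile

# Hypothetical counters: success/failure hits per message type.
counters = [
    {"message": "ingest_request", "success_hits": 120, "failure_hits": 3},
    {"message": "ingest_response", "success_hits": 118, "failure_hits": 5},
]

# Organise the performance counters as rows of a CSV file (in memory).
csv_buf = io.StringIO()
writer = csv.DictWriter(
    csv_buf, fieldnames=["message", "success_hits", "failure_hits"])
writer.writeheader()
writer.writerows(counters)
csv_bytes = csv_buf.getvalue().encode("utf-8")

# Package the CSV into an in-memory TAR archive.
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tar:
    info = tarfile.TarInfo(name="performance_counters.csv")
    info.size = len(csv_bytes)
    tar.addfile(info, io.BytesIO(csv_bytes))
archive = tar_buf.getvalue()  # bytes of the TAR package
```

A zip package could be produced analogously with the standard `zipfile` module.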
[00082]
Further, the term data ingestion refers to a process of collecting and importing the set of data in a raw format from various sources, i.e., the one or more source systems [502], into the network performance management system for further processing, analysis, and utilization.
[00083]
The present disclosure encompasses that the ingestion layer [504] comprises one or more of a fault management ingestion microservice, a performance management ingestion microservice, a configuration management ingestion microservice, a charging data records ingestion microservice, an infra metric broker, a log metric microservice, and an inventory ingestion microservice.
[00084]
The present disclosure encompasses that the fault management ingestion microservice is a service that is responsible for receiving, processing, and storing data related to fault management. The fault management ingestion microservice handles the ingestion of information about one or more errors, failures, alarms, and/or other abnormal conditions in a network infrastructure.
[00085]
The present disclosure encompasses that the performance management ingestion microservice is responsible for handling the ingestion of data related to performance monitoring and analysis in the network infrastructure. The performance management ingestion microservice receives and processes one or more performance metrics, such as a latency metric, a throughput metric, a response time metric, and other indicators of the network infrastructure associated with network performance.
[00086]
The present disclosure encompasses that the configuration management ingestion microservice manages the ingestion of data related to configuration changes within the network infrastructure. The configuration management ingestion microservice captures information about modifications to a hardware, a software, network settings, policies, and other configuration parameters in the network infrastructure.
[00087]
The present disclosure encompasses that the charging data records ingestion microservice is a microservice that is responsible for ingesting one or more charging data records in the network infrastructure. The charging data records ingestion microservice handles the collection and processing of usage data, service activations, billing events, and other transactional records used for charging users for services rendered in the network infrastructure.
[00088]
The present disclosure encompasses that the infra metric broker serves as an intermediary or broker for one or more infrastructure metrics in the network infrastructure. The infra metric broker facilitates the exchange of metric data between different components or services within the network infrastructure.
[00089]
The present disclosure encompasses that the log metric microservice handles the ingestion of log data generated by various components, applications, and/or services in the network infrastructure. The log metric microservice collects and processes one or more log entries in the network infrastructure, extracting relevant information and storing it for analysis, troubleshooting, auditing, and/or compliance purposes.
[00090]
The present disclosure encompasses that the inventory ingestion microservice handles ingesting and managing inventory data related to assets, resources, or components in the network infrastructure.
[00091]
Further, a processing unit [308] is connected at least to the transceiver unit [306]. The processing unit [308] is configured to process, via the ingestion layer [504], the set of data to store the data. The present disclosure encompasses that the processing unit [308], via the ingestion layer [504], for processing the set of data to store the data, is configured to calculate a set of changes in the set of data using a trained model and to store a set of delta files based on the calculation of the set of changes in the set of data.
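The change calculation described above may be illustrated, in a non-limiting manner, by the following sketch. The disclosure specifies a trained model for this step; here a plain dictionary diff stands in for that model purely to make the additions/deletions/modifications structure of a delta concrete, and all names and values are invented:

```python
def compute_delta(previous, current):
    """Compute additions, deletions, and modifications between two
    snapshots of fetched data. A plain diff stands in for the trained
    model described in the disclosure; the delta structure is the point."""
    added = {k: v for k, v in current.items() if k not in previous}
    deleted = {k: v for k, v in previous.items() if k not in current}
    modified = {k: current[k] for k in previous.keys() & current.keys()
                if previous[k] != current[k]}
    return {"added": added, "deleted": deleted, "modified": modified}

# Invented counter snapshots keyed by network element.
old = {"cell-1": 40, "cell-2": 55}
new = {"cell-2": 61, "cell-3": 12}
delta = compute_delta(old, new)  # only this delta would go into a delta file
```

Storing only such deltas, rather than full snapshots, is what makes the set of delta files compact relative to the fetched data.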
[00092]
Further, the set of changes refers to one or more alterations and/or modifications made to the original set of data, i.e., the fetched set of data. The one or more alterations and/or modifications may include additions, deletions, or modifications performed on the set of data.
[00093]
The present disclosure encompasses that the trained model refers to a machine learning model that has been trained on a dataset to calculate the set of changes. The trained model may be trained to recognize one or more patterns and one or more anomalies in the set of data to identify differences between the changed set of data and the original set of data.
[00094]
The present disclosure encompasses that the trained model is trained using one of an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique. The training may be done by one or more techniques which may be known to the person skilled in the art. The one or more techniques may include the steps of data collection, data preprocessing, feature extraction, model selection and training.
[00095]
The present disclosure encompasses that the set of delta files are the files which store the changes and/or differences associated with the fetched set of data.
[00096]
The transceiver unit [306] is further configured to provide the stored data to a normalisation layer [100b]. The present disclosure encompasses that the stored data is provided to the normalisation layer [100b] for subsequent processing of the stored data.
[00097]
Referring to FIG. 4, an exemplary method flow diagram [400] for unified data ingestion in a network performance management system, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[00098]
At step [404], the method [400] comprises configuring, by a configuration unit [302] via an ingestion layer [504], from a user interface [506], one or more source systems [502] of one or more vendors, wherein each source system [502] corresponds to a separate vendor.
[00099]
The present disclosure encompasses that the one or more source systems [502] are systems or devices from which data, such as network performance data, is collected for analysis and monitoring purposes. The one or more source systems [502] may include, but are not limited to, one or more servers, one or more data collection applications, and one or more firewalls.
[000100]
The present disclosure encompasses that the one or more vendors refer to users such as body corporates and/or individual users.
[000101]
At step [406], the method [400] comprises storing, by the ingestion layer [504] in a database unit [304], a set of metadata associated with the one or more source systems [502], wherein the set of metadata is related to the one or more vendors.
[000102]
The present disclosure encompasses that the set of metadata associated with the one or more source systems [502] comprises one or more of a format of data, a pull frequency, a protocol information, and a source data location information.
[000103]
The present disclosure encompasses that the format of data refers to a structure in which the data, i.e., the set of metadata associated with the one or more source systems, is encoded and/or stored. The format of the data may include, but is not limited to, a JavaScript Object Notation (JSON) format, an eXtensible Markup Language (XML) format, a Comma-Separated Values (CSV) format, a Plaintext format, and/or a Binary format.
[000104]
The present disclosure encompasses that the pull frequency is a value that indicates the frequency of data retrieval by the network performance management system, i.e., the frequency of retrieval of the set of metadata by the network performance management system. The pull frequency may be defined in terms of seconds, minutes, hours, or any other relevant unit of time, based on requirements for monitoring and/or analysis of network elements by the network performance management system.
[000105]
The present disclosure encompasses that the protocol information refers to one or more communication protocols used to transfer the set of metadata between the one or more source systems [502] and the network performance management system, such as SNMP (Simple Network Management Protocol), FTP (File Transfer Protocol), SSH (Secure Shell), or any other communication protocol that may be obvious to the person skilled in the art to implement the solution of the present disclosure.
[000106]
The present disclosure encompasses that the source data location information indicates a physical and/or a logical location from which the network performance management system retrieves the set of metadata. The source data location information includes but is not limited to an address associated with the set of metadata, a hostname associated with the set of metadata, a file path associated with the set of metadata, a database connection string associated with the set of metadata, a Uniform Resource Locator (URL) associated with the set of metadata, or any other identifier which indicates the address of the set of metadata and that may be obvious to the person skilled in the art to implement the solution of the present disclosure.
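The four metadata attributes described above (format of data, pull frequency, protocol information, and source data location information) can be pictured as a single per-source record. The following is a minimal illustrative sketch only; the field names and values are assumptions for explanation and are not taken from the disclosure.

```python
from dataclasses import dataclass

# Hypothetical shape of the per-source-system metadata record; field names
# are illustrative assumptions, not part of the disclosure.
@dataclass
class SourceSystemMetadata:
    vendor: str                  # vendor to which the source system corresponds
    data_format: str             # e.g. "JSON", "XML", "CSV", "Plaintext", "Binary"
    pull_frequency_seconds: int  # frequency of data retrieval
    protocol: str                # e.g. "SNMP", "FTP", "SSH"
    source_location: str         # address / hostname / file path / URL

# Example record for a hypothetical vendor
vendor_a = SourceSystemMetadata(
    vendor="vendor-a",
    data_format="CSV",
    pull_frequency_seconds=300,  # pull every 5 minutes
    protocol="FTP",
    source_location="ftp://vendor-a.example/pm/counters/",
)
```

Storing such a record per source system is what allows a new vendor to be onboarded through configuration alone, without code changes.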
[000107]
The present disclosure encompasses that the ingestion layer [504] comprises one or more of a fault management ingestion microservice, a performance management ingestion microservice, a configuration management ingestion microservice, a charging data records ingestion microservice, an infra metric broker, a log metric microservice, and an inventory ingestion microservice.
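The microservices listed above can be thought of as destinations to which the ingestion layer routes each incoming record by category. The sketch below is a hypothetical illustration; the category keys and service names are assumptions, not identifiers from the disclosure.

```python
# Illustrative routing table from record category to the ingestion
# microservice handling it; keys and names are assumptions.
INGESTION_ROUTES = {
    "fault":         "fault-management-ingestion",
    "performance":   "performance-management-ingestion",
    "configuration": "configuration-management-ingestion",
    "charging":      "cdr-ingestion",
    "infra_metric":  "infra-metric-broker",
    "log":           "log-metric-microservice",
    "inventory":     "inventory-ingestion",
}

def route(record: dict) -> str:
    """Return the name of the microservice that should ingest this record."""
    return INGESTION_ROUTES[record["category"]]
```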
[000108]
The present disclosure encompasses that the fault management ingestion microservice is a service that is responsible for receiving, processing, and storing data related to fault management. The fault management ingestion microservice handles the ingestion of information about one or more errors, failures, alarms, and/or other abnormal conditions in a network infrastructure.
[000109]
The present disclosure encompasses that the performance management ingestion microservice is responsible for handling the ingestion of data related to performance monitoring and analysis in the network infrastructure. The performance management ingestion microservice receives and processes one or more performance metrics such as a latency metric, a throughput metric, a response time metric, and other indicators of the network infrastructure associated with the network performance.
[000110]
The present disclosure encompasses that the configuration management ingestion microservice manages the ingestion of data related to configuration changes within the network infrastructure. The configuration management ingestion microservice captures information about modifications to hardware, software, network settings, policies, and other configuration parameters in the network infrastructure.
[000111]
The present disclosure encompasses that the charging data records ingestion microservice is a microservice that is responsible for ingesting one or more charging data records in the network infrastructure. The charging data records ingestion microservice handles the collection and processing of usage data, service activations, billing events, and other transactional records used for charging users for services rendered in the network infrastructure.
[000112]
The present disclosure encompasses that the infra metric broker serves as an intermediary or broker for one or more infrastructure metrics in the network infrastructure. The infra metric broker facilitates the exchange of metric data between different components or services within the network infrastructure.
[000113]
The present disclosure encompasses that the log metric microservice handles the ingestion of log data generated by various components, applications, and/or services in the network infrastructure. The log metric microservice collects and processes one or more log entries in the network infrastructure, extracting relevant information and storing it for analysis, troubleshooting, auditing, and/or compliance purposes.
[000114]
The present disclosure encompasses that the inventory ingestion microservice handles ingesting and managing inventory data related to assets, resources, or components in the network infrastructure.
[000115]
At step [408], the method [400] comprises fetching, by a transceiver unit [306] via the ingestion layer [504], a set of data from the one or more source systems [502] based on the set of metadata associated with the one or more source systems [502].
[000116]
The present disclosure encompasses that the set of data, fetched by the transceiver unit [306] via the ingestion layer [504], comprises one or more performance counters, wherein the one or more performance counters comprise one or more of success hits and failure hits for one or more request messages and one or more response messages corresponding to data ingestion.
[000117]
The present disclosure encompasses that the one or more performance counters refer to one or more indicators used to measure a performance corresponding to the data ingestion, such as one or more of success hits and failure hits for one or more request messages. Also, the one or more performance counters encompassing metrics such as the success hits and the failure hits are organized within a CSV (Comma-Separated Values) file. Further, the CSV file is packaged as either a tape archive (TAR) and/or a zip format.
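The packaging described above (success/failure hit counters organized in a CSV file and wrapped in a TAR archive) can be sketched as follows. This is an illustrative assumption-laden example; the counter names, column headers, and file names are hypothetical, not taken from the disclosure.

```python
import csv
import io
import tarfile

# Hypothetical success/failure hit counters for request and response
# messages; names and values are illustrative assumptions.
counters = [
    {"counter": "request_success_hits", "value": 9821},
    {"counter": "request_failure_hits", "value": 34},
    {"counter": "response_success_hits", "value": 9780},
    {"counter": "response_failure_hits", "value": 41},
]

# Serialize the counters into a CSV payload in memory.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["counter", "value"])
writer.writeheader()
writer.writerows(counters)
csv_bytes = buf.getvalue().encode("utf-8")

# Package the CSV file inside a TAR archive (a zip could be used instead).
with tarfile.open("counters.tar", "w") as tar:
    member = tarfile.TarInfo(name="performance_counters.csv")
    member.size = len(csv_bytes)
    tar.addfile(member, io.BytesIO(csv_bytes))
```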
[000118]
Further, the term data ingestion refers to a process of collecting and importing the set of data in a raw format from various sources, i.e., the one or more source systems [502], into the network performance management system for further processing, analysis, and utilization.
[000119]
The present disclosure encompasses that the set of data is fetched from the one or more source systems [502] at one of a predefined periodic interval and an adaptive periodic interval. The predefined periodic interval is a predetermined duration of time at which the set of data is fetched from the one or more source systems [502]. For instance, the predefined periodic interval may range from 5 minutes to 60 minutes and/or 1 week to 4 weeks. The adaptive periodic interval dynamically adjusts the time at which the set of data is fetched from the one or more source systems [502] based on one or more conditions, such as a time associated with peak network utilisation activity.
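The two fetch schedules described above can be illustrated with a small sketch. The specific adaptive rule here (fetching less often during assumed peak-utilisation hours) is purely a hypothetical example of one such condition, not the rule used by the disclosed system.

```python
# Predefined periodic interval: a fixed duration, e.g. every 5 minutes.
PREDEFINED_INTERVAL_SECONDS = 5 * 60

def adaptive_interval_seconds(hour_of_day: int) -> int:
    """Return the fetch interval for the given hour.

    Hypothetical adaptive rule: back off to 15-minute pulls during
    assumed peak-utilisation hours (18:00-22:59), otherwise pull
    every 5 minutes.
    """
    peak_hours = range(18, 23)
    if hour_of_day in peak_hours:
        return 15 * 60  # reduce load on source systems at peak times
    return PREDEFINED_INTERVAL_SECONDS
```

A scheduler in the ingestion layer could consult such a function before each pull, so that fetch timing adapts to network conditions without reconfiguration.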
[000120]
At step [410], the method [400] comprises processing, by a processing unit [308] via the ingestion layer [504], the set of data to store the data.
[000121]
The present disclosure encompasses that the processing, by the processing unit [308] via the ingestion layer [504], the set of data to store the data comprises calculating, by the processing unit [308], a set of changes in the set of data using a trained model and storing, by the processing unit [308], a set of delta files based on the calculation of the set of changes in the set of data.
[000122]
Further, the set of changes refers to one or more alterations and/or modifications made to the original set of data, i.e., the fetched set of data. The one or more alterations and/or modifications may include additions, deletions, or modifications performed on the set of data.
[000123]
The present disclosure encompasses that the trained model refers to a machine learning model that has been trained on a dataset to calculate the set of changes. The trained model may be trained to recognize one or more patterns and one or more anomalies in the set of data to identify differences between the changed set of data and the original set of data.
[000124]
The present disclosure encompasses that the trained model is trained using one of an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique.
[000125]
The trained model is trained using one of an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique. The training may be done by one or more techniques which may be known to the person skilled in the art. The one or more techniques may include the steps of data collection, data preprocessing, feature extraction, model selection, and training.
[000126]
The present disclosure encompasses that the set of delta files are the files which store the changes and/or differences associated with the fetched set of data.
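The delta-file step described above can be pictured with a simplified sketch. Note the disclosure uses a trained model to calculate the set of changes; the plain key comparison below is a stand-in for that model, used only to show the shape of a delta file (additions, deletions, modifications). All names and data are hypothetical.

```python
import json

def compute_delta(old: dict, new: dict) -> dict:
    """Compute additions, deletions, and modifications between two
    fetched snapshots. (A simple comparison standing in for the
    trained model of the disclosure.)"""
    return {
        "added":    {k: new[k] for k in new.keys() - old.keys()},
        "deleted":  sorted(old.keys() - new.keys()),
        "modified": {k: new[k] for k in old.keys() & new.keys()
                     if old[k] != new[k]},
    }

# Hypothetical snapshots of per-cell performance data.
old_snapshot = {"cell_001": {"latency_ms": 12}, "cell_002": {"latency_ms": 30}}
new_snapshot = {"cell_001": {"latency_ms": 12}, "cell_002": {"latency_ms": 45},
                "cell_003": {"latency_ms": 9}}

delta = compute_delta(old_snapshot, new_snapshot)

# Persist only the delta, not the full snapshot, as a delta file.
with open("delta_0001.json", "w") as f:
    json.dump(delta, f)
```

Storing deltas rather than full snapshots keeps the volume of stored data proportional to what actually changed between pulls.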
[000127]
At step [412], the method [400] comprises providing, by the transceiver unit [306] via the ingestion layer [504], the stored data to a normalisation layer [100b].
[000128]
The present disclosure encompasses that the stored data is provided to the normalisation layer [100b] for subsequent processing of the stored data.
[000129]
The method [400] terminates at step [414].
[000130]
Referring to FIG. 5, an exemplary flow [500] diagram for unified data ingestion in a network performance management system, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the flow [500] is performed by the system [300]. Also, as shown in FIG. 5, at step S1, one or more source systems [502] are configured and, based on the configuration, an ingestion layer [504] fetches data.
[000131]
Next, at step S2 of the flow [500], at least one source system [502] is onboarded via a user interface [506].
[000132]
Next, at step S3 of the flow [500], the data is further transferred for one or more operations to a normalisation layer [100b].
[000133]
Next, at step S4 of the flow [500], the user interface [506] also transmits a policy configuration for normalizing the data to the normalisation layer [100b].
[000134]
Next, at step S5 of the flow [500], metadata related information is also stored in the database unit [304].
[000135]
Thereafter, at step S6 of the flow [500], the normalisation layer stores the data in the database unit [304]. Additionally, the sequence or series of steps in the method may be adjusted, interchanged, or skipped according to the requirements.
[000136]
The present disclosure further discloses a user equipment (UE) comprising a processor. The processor is configured to configure via a configuration unit [302], one or more source systems [502] of one or more vendors, wherein each source system [502] corresponds to a separate vendor. The processor is further configured to store via a database unit [304], a set of meta data associated with the one or more source systems [502], wherein the set of meta data is related to the one or more vendors. The processor is further configured to fetch via a transceiver unit [306], a set of data from the one or more source systems [502] based on the set of metadata associated with the one or more source systems [502]. The processor is further configured to process via a processing unit [308], the set of data to store the data. The processor is configured to provide via the transceiver unit [306], the stored data to a normalisation layer [100b].
[000137]
The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for unified data ingestion in a network performance management system, the instructions including executable code which, when executed by one or more units of a system, causes: a configuration unit [302] to configure, via a user interface [506], one or more source systems [502] of one or more vendors, wherein each source system [502] corresponds to a separate vendor; a database unit [304] to store a set of meta data associated with the one or more source systems [502], wherein the set of meta data is related to the one or more vendors; a transceiver unit [306] to fetch a set of data from the one or more source systems [502] based on the set of metadata associated with the one or more source systems [502]; a processing unit [308] to process the set of data to store the data; and the transceiver unit [306] to provide the stored data to a normalisation layer [100b].
[000138]
As is evident from the above, the present disclosure provides a technically advanced solution for unified data ingestion in a network performance management system. The present solution onboards a new vendor with no or minimal code level changes. Further, the present solution follows an approach to streamline the onboarding process with different vendors seamlessly by using novel techniques and a user-friendly user interface. The present solution remotely fetches and pulls data from one or more source systems without impacting one or more internal processes. Further, the technical advantage of the present solution lies in that, based on the implementation of the present solution, the network performance management system requires no downtime to onboard a new vendor and/or source systems, as the network performance management system does not have to go through the software life cycle process (development, testing, integration testing, and deployment).
[000139]
While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
We Claim:
1. A method for unified data ingestion in a network performance management system, the
method comprising:
configuring, by a configuration unit [302] via an ingestion layer [504] from a user interface
[506], one or more source systems [502] of one or more vendors, wherein each source
system corresponds to a separate vendor;
storing, by the ingestion layer [504] in a database unit [304], a set of meta data associated
with the one or more source systems [502], wherein the set of meta data is related to the
one or more vendors;
fetching, by a transceiver unit [306] via the ingestion layer [504], a set of data from the one
or more source systems [502] based on the set of metadata associated with the one or more
source systems [502];
processing, by a processing unit [308] via the ingestion layer [504], the set of data to store
the data;
providing, by the transceiver unit [306] via the ingestion layer [504], the stored data to a
normalisation layer [100b].
2. The method as claimed in claim 1, wherein processing, by the processing unit [308] via the
ingestion layer [504], the set of data to store the data comprises:
calculating, by the processing unit [308], a set of changes in the set of data using a trained model, and
storing, by the processing unit [308], a set of delta files based on the calculation of the set of changes in the set of data.
3. The method as claimed in claim 2, wherein the trained model is trained using one of an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique.
4. The method as claimed in claim 1, wherein the ingestion layer [504] comprises one or more of a fault management ingestion microservice, a performance management ingestion microservice, a configuration management ingestion microservice, a charging data records

ingestion microservice, an infra metric broker, a log metric microservice, and an inventory ingestion microservice.
5. The method as claimed in claim 1, wherein the set of data, fetched by the transceiver unit [306] via the ingestion layer [504], comprises one or more performance counters, wherein the one or more performance counters comprises one or more of success hits and failure hits for one or more request messages and one or more response messages corresponding to data ingestion.
6. The method as claimed in claim 1, wherein the set of metadata associated with the one or more source systems [502] comprises one or more of a format of data, a pull frequency, a protocol information, and a source data location information.
7. The method as claimed in claim 1, wherein the set of data is fetched from the one or more source systems [502] at one of a predefined periodic interval, and an adaptive periodic interval.
8. The method as claimed in claim 1, wherein the stored data is provided to the normalisation layer [100b] for subsequent processing of the stored data.
9. A system [300] for unified data ingestion in a network performance management system, the system [300] comprising:
a configuration unit [302] adapted to configure, via an ingestion layer [504], from a user interface [506], one or more source systems [502] of one or more vendors, wherein each source system corresponds to a separate vendor;
a database unit [304] connected at least to the configuration unit [302], the database unit [304] configured to store, by the ingestion layer [504], a set of meta data associated with the one or more source systems [502], wherein the set of meta data is related to the one or more vendors;

a transceiver unit [306] connected at least to the database unit [304], the transceiver unit [306] configured to fetch via the ingestion layer [504], a set of data from the one or more source systems [502] based on the set of metadata associated with the one or more source systems [502]; and
a processing unit [308] connected at least to the transceiver unit [306], the processing unit [308] configured to process, via the ingestion layer [504] the set of data to store the data; wherein the transceiver unit [306] is further configured to provide via the ingestion layer [504] the stored data to a normalisation layer [100b].
10. The system [300] as claimed in claim 9, wherein the processing unit [308] via the ingestion
layer [504], for processing the set of data to store the data, is configured to:
calculate a set of changes in the set of data using a trained model; and
store a set of delta files based on the calculation of the set of changes in the set of data.
11. The system [300] as claimed in claim 10, wherein the trained model is trained using one of an Artificial Intelligence (AI) technique and a Machine Learning (ML) technique.
12. The system [300] as claimed in claim 9, wherein the ingestion layer [504] comprises one or more of a fault management ingestion microservice, a performance management ingestion microservice, a configuration management ingestion microservice, a charging data records ingestion microservice, an infra metric broker, a log metric microservice, and an inventory ingestion microservice.
13. The system [300] as claimed in claim 9, wherein the set of data fetched by the transceiver unit [306] via the ingestion layer [504] comprises one or more performance counters, wherein the one or more performance counters comprises one or more of success hits and failure hits for one or more request messages and one or more response messages corresponding to data ingestion.

14. The system [300] as claimed in claim 9, wherein the set of metadata associated with the one or more source systems [502] comprises one or more of a format of data, a pull frequency, a protocol information, and a source data location information.
15. The system [300] as claimed in claim 9, wherein the set of data is fetched from the one or more source systems [502] at one of a predefined periodic interval, and an adaptive periodic interval.
16. The system [300] as claimed in claim 9, wherein the stored data is provided to the normalisation layer [100b] for subsequent processing of the stored data.
17. A user equipment (UE) comprising a processor, configured to:
configure via a configuration unit, one or more source systems [502] of one or more vendors, wherein each source system [502] corresponds to a separate vendor;
store via a database unit [304], a set of meta data associated with the one or more source systems [502], wherein the set of meta data is related to the one or more vendors;
fetch, via a transceiver unit [306], a set of data from the one or more source systems [502] based on the set of metadata associated with the one or more source systems [502] and
process, via a processing unit [308], the set of data to store the data; and
provide, via the transceiver unit [306], the stored data to a normalisation layer [100b].

Documents

Application Documents

# Name Date
1 202321047800-STATEMENT OF UNDERTAKING (FORM 3) [15-07-2023(online)].pdf 2023-07-15
2 202321047800-PROVISIONAL SPECIFICATION [15-07-2023(online)].pdf 2023-07-15
3 202321047800-FORM 1 [15-07-2023(online)].pdf 2023-07-15
4 202321047800-FIGURE OF ABSTRACT [15-07-2023(online)].pdf 2023-07-15
5 202321047800-DRAWINGS [15-07-2023(online)].pdf 2023-07-15
6 202321047800-FORM-26 [18-09-2023(online)].pdf 2023-09-18
7 202321047800-Proof of Right [23-10-2023(online)].pdf 2023-10-23
8 202321047800-ORIGINAL UR 6(1A) FORM 1 & 26)-301123.pdf 2023-12-08
9 202321047800-ENDORSEMENT BY INVENTORS [31-05-2024(online)].pdf 2024-05-31
10 202321047800-DRAWING [31-05-2024(online)].pdf 2024-05-31
11 202321047800-CORRESPONDENCE-OTHERS [31-05-2024(online)].pdf 2024-05-31
12 202321047800-COMPLETE SPECIFICATION [31-05-2024(online)].pdf 2024-05-31
13 Abstract1.jpg 2024-06-28
14 202321047800-FORM 3 [01-08-2024(online)].pdf 2024-08-01
15 202321047800-Request Letter-Correspondence [09-08-2024(online)].pdf 2024-08-09
16 202321047800-Power of Attorney [09-08-2024(online)].pdf 2024-08-09
17 202321047800-Form 1 (Submitted on date of filing) [09-08-2024(online)].pdf 2024-08-09
18 202321047800-Covering Letter [09-08-2024(online)].pdf 2024-08-09
19 202321047800-CERTIFIED COPIES TRANSMISSION TO IB [09-08-2024(online)].pdf 2024-08-09