
Method And System For Real Time Network Data Monitoring

Abstract: The present disclosure relates to a system (120) and a method (500) for real-time network data monitoring. The method (500) includes the steps of: creating a dashboard for on-demand dynamic monitoring of Key Performance Indicators (KPIs) based on a request received from a User Equipment (UE) (110); computing the KPIs for the network aggregation based on attributes selected or defined in the request; obtaining data or results for the attributes associated with the KPIs; notifying the user of the data or results and displaying the data or results; and providing access to the data or results associated with the KPIs by drilling down or rolling up. Ref. Fig. 2


Patent Information

Filing Date: 17 July 2023
Publication Number: 04/2025
Publication Type: INA
Invention Field: COMMUNICATION

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad 380006, Gujarat, India

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
2. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
3. Jugal Kishore Kolariya
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
4. Gaurav Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
5. Kishan Sahu
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
6. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
7. Sunil Meena
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
8. Gourav Gurbani
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
9. Sanjana Chaudhary
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
10. Chandra Kumar Ganveer
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
11. Supriya De
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
12. Kumar Debashish
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
13. Tilala Mehul
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
14. Kalikivayi Srinath
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
15. Vitap Pandey
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
16. Yogesh Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
17. Kunal Telgote
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
18. Dharmendra Kumar Vishwakarma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR REAL-TIME NETWORK DATA MONITORING
2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention generally relates to wireless communication networks, and more particularly relates to a method and system for real-time network data monitoring.
BACKGROUND OF THE INVENTION
[0002] Networks may be monitored using various factors and variables that form Key Performance Indicators (KPIs); these metrics provide valuable insights into various aspects of network behavior. Collected network data may be classified on the basis of network architecture, mainly Hierarchical Network Architecture (HNA), Static Network Architecture (SNA), and Converged Network Architecture (CNA). The data is typically presented in the form of dashboards to enable monitoring of the KPIs; when creating the dashboards, one of these classifications is selected on the basis of the grouped KPIs.
[0003] The issue is that if one classification must be switched to another, if classifications are to be merged or further segmented, or if deeper analysis of fields within the created dashboard is required for flexible data analysis, the entire dashboard must be modified as per the revised classification preference, and the actions must be executed all over again in order to view the revised KPI values under the newly selected classifications. Current technology therefore cannot transition network data from one network architectural classification to another in real time.
[0004] Hence there is a need for efficient methods and systems for real-time network data monitoring.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a method and a system for real-time network data monitoring.
[0006] In one aspect of the present invention, a method for real-time network data monitoring is disclosed. The method includes the step of creating, by one or more processors, a dashboard for on-demand monitoring of Key Performance Indicators (KPIs) based on a request received from a User Equipment (UE). The method includes the step of computing, by the one or more processors, the KPIs for the network aggregation based on attributes selected or defined in the request. The method includes the step of obtaining, by the one or more processors, data or results for the attributes associated with the KPIs. The method includes the step of notifying, by the one or more processors, the user of the data or results and displaying the data or results. The method includes the step of providing, by the one or more processors, access to the data or results associated with the KPIs by drilling down or rolling up.
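The claimed method steps can be illustrated with a minimal sketch. This is purely for exposition; the function names, data shapes, and the use of a simple mean as the KPI computation are assumptions of this sketch, not part of the disclosure.

```python
# Illustrative sketch of the method steps; all names are hypothetical.

def create_dashboard(request):
    """Step 1: create a dashboard for on-demand KPI monitoring from a UE request."""
    return {"kpis": request["kpis"], "attributes": request["attributes"], "panels": []}

def compute_kpis(dashboard, samples):
    """Step 2: compute each requested KPI over the collected samples
    (a plain mean stands in for the real computation)."""
    results = {}
    for kpi in dashboard["kpis"]:
        values = [s[kpi] for s in samples if kpi in s]
        results[kpi] = sum(values) / len(values) if values else None
    return results

def notify_and_display(dashboard, results):
    """Steps 3-4: attach results to the dashboard panels and emit a notification."""
    dashboard["panels"] = [{"kpi": k, "value": v} for k, v in results.items()]
    return f"{len(results)} KPI result(s) ready"

request = {"kpis": ["throughput"], "attributes": ["region"]}
samples = [{"throughput": 40.0}, {"throughput": 60.0}]
dash = create_dashboard(request)
results = compute_kpis(dash, samples)
message = notify_and_display(dash, results)
```

The drill-down/roll-up step is elaborated in the detailed description below and is omitted from this sketch.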
[0007] In one embodiment, the method includes the step of determining, by the one or more processors, if a time frame is greater than a retention period in the request.
[0008] In another embodiment, the step of determining if the time frame is greater than the retention period in the request includes defining and selecting, by the one or more processors, the time frame. If the selected time frame is greater than the retention period, the request is forwarded to a computation layer.
[0009] In yet another embodiment, the step of obtaining and notifying includes processing, by the one or more processors, the request.
[0010] In yet another embodiment, the method includes the step of populating, by the one or more processors, dynamic data associated with a network hierarchy during drilling down or rolling up.
[0011] In yet another embodiment, the method includes the step of selecting, by the one or more processors, a dynamic drill down or dynamic roll up. The method includes the step of enabling uploading of mapped dataset or defining an action for the attributes associated with the network hierarchy.
[0012] In yet another embodiment, defining a network hierarchy down to the lowest level allows concatenation, mapping, and a combination of both to be performed at any level, and allows navigation from the lowest level to the highest level.
[0013] In another aspect of the present invention, a system for real-time network data monitoring is disclosed. The system includes a user interface configured to capture a dashboard request created by a user for monitoring Key Performance Indicators (KPIs). The system includes an integrated performance management configured to receive the request from the user interface. The system includes a computation layer configured to further receive the request and queries from the integrated performance management. The user interface is further configured to enable selection of the KPIs for a network aggregation and the attributes defined in the request. The computation layer is further configured to compute the KPIs to obtain results/data for the network aggregation received from the user interface.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for real-time network data monitoring, according to one or more embodiments of the present disclosure;
[0017] FIG. 2 is an exemplary block diagram of a system for real-time network data monitoring, according to one or more embodiments of the present disclosure;
[0018] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to one or more embodiments of the present disclosure;
[0019] FIG. 4 is a signal flow diagram illustrating the system for real-time network data monitoring, according to one or more embodiments of the present disclosure; and
[0020] FIG. 5 is a flow diagram illustrating a method for real-time network data monitoring, according to one or more embodiments of the present disclosure.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] The system and method of the present invention enable real-time transition of network data from one network architectural classification to another, enabling analysis of the Key Performance Indicators (KPIs). In an aspect, access to data or results associated with the KPIs is provided by drilling down or rolling up, which resolves the issues efficiently and saves time. The "drilling down" and "rolling up" features enable analysis and comparative viewing in real time, making data monitoring much more efficient.
[0026] FIG. 1 illustrates an exemplary block diagram of an environment 100 for real-time network data monitoring, according to one or more embodiments of the present disclosure. The environment 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120 to transmit a request to generate a dashboard to dynamically monitor Key Performance Indicators (KPIs). The KPIs are used to evaluate performance, track progress over time, and make informed decisions based on quantifiable data.
[0027] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the UE 110 from the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105.
[0028] In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0029] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0030] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0031] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0032] The environment 100 further includes the system 120 communicably coupled to the server 115 and each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is adapted to be embedded within the server 115 or is embedded as the individual entity.
[0033] The system 120 is further configured to employ a Transmission Control Protocol (TCP) connection to identify any connection loss in the network 105, thereby improving overall efficiency. The TCP connection is a communication standard enabling applications and the system 120 to exchange information over the network 105.
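As a generic illustration of how an application-level component might surface connection loss over TCP, the operating system's keepalive mechanism can be enabled on a socket so that dead peers are detected on otherwise idle connections. This sketch shows only the standard socket option, not the specific mechanism claimed by the disclosure.

```python
import socket

def make_monitored_socket() -> socket.socket:
    """Create a TCP socket with keepalive enabled, so the OS periodically
    probes idle connections and reports peers that have gone away
    (illustrative only; not the patented mechanism)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    return sock

sock = make_monitored_socket()
keepalive_enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
sock.close()
```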
[0034] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0035] FIG. 2 illustrates an exemplary block diagram of the system 120 for real-time network data monitoring, according to one or more embodiments of the present disclosure. The system 120 includes one or more processors 205, a memory 210, a display unit 215, a user interface 220, a load balancer 225, a distributed data lake 240, and a distributed file system 245. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 120 includes one processor 205. However, it is to be noted that the system 120 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0036] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0037] The information related to the request to generate the dashboard to dynamically monitor the Key Performance Indicators (KPIs) is provided or stored in the memory 210. Among other capabilities, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROMs, FLASH memory, unalterable memory, and the like.
[0038] The distributed data lake 240 is a data repository providing storage and computing for structured and unstructured data, such as for machine learning, streaming, or data science. The distributed data lake 240 allows users and/or organizations to ingest and manage large volumes of data in an aggregated storage solution for business intelligence or data products. The distributed data lake 240 enables flexible data ingestion, storage, and analysis, making them valuable for big data analytics, data science, and machine learning applications. Effective data governance practices are essential for ensuring data quality, security, and compliance within a distributed data lake environment.
[0039] The user interface 220 is configured to capture the dashboard request for on-demand monitoring of the KPIs. The on-demand monitoring of the KPIs refers to the capability of the system 120 to dynamically create and display the dashboard for monitoring the KPIs based on the request from the UE 110. The user interface 220 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 220 facilitates communication of the system 120. In one embodiment, the user interface 220 provides a communication pathway for one or more components of the system 120. Examples of the one or more components include, but are not limited to, the UE 110 and the distributed data lake 240. The user interface 220 is rendered on the display unit 215, implemented using LCD display technology, OLED display technology, and/or other types of conventional display technology.
[0040] Upon creation of the dashboard for monitoring the KPIs based on the request received from the user, the request is transmitted to the load balancer 225. The load balancer 225 is positioned between the user interface 220 and the integrated performance management 230. The load balancer 225 distributes incoming network traffic across multiple servers or resources. The primary purpose of the load balancer 225 is to improve the performance, reliability, and availability of applications and services by evenly distributing requests among the server 115. The even distribution of requests helps prevent any single server from becoming overwhelmed, thereby optimizing resource usage and ensuring that users experience faster response times and higher availability of the services they are accessing. The load balancer 225 is configured to control the flow of requests between the user interface 220 and multiple instances of the integrated performance management 230. Further, the load balancer 225 is configured to transmit the request to the integrated performance management 230.
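A round-robin policy is one common way to achieve the even distribution described above. The following sketch is illustrative only; the disclosure does not specify the balancing algorithm, and the class and instance names are hypothetical.

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests evenly across backend instances by
    cycling through them in order (an assumed policy for illustration)."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        """Assign the request to the next instance in the cycle."""
        return (next(self._cycle), request)

# Hypothetical integrated-performance-management instances.
lb = RoundRobinBalancer(["ipm-1", "ipm-2"])
assignments = [lb.route(f"req-{i}")[0] for i in range(4)]
```

With two instances, four consecutive requests alternate between them, so no single instance is overwhelmed.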
[0041] In order for the system 120 to monitor real-time network data, the processor 205 includes the integrated performance management 230 and the computation layer 235, communicably coupled to each other for real-time network data monitoring. The operations and functionalities of the integrated performance management 230 and the computation layer 235 can be used in combination or interchangeably. Further, the integrated performance management 230 includes processing network performance data by using a performance management engine and managing the KPIs by using a KPI engine.
[0042] The integrated performance management 230 and the computation layer 235, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0043] The integrated performance management 230 is configured to receive the request from the user interface 220. In an embodiment, the request includes, but is not limited to, a data computation request. The integrated performance management 230 is configured to determine if a time frame is greater than a retention period based on the received request. In an embodiment, the time frame pertains to the waiting period of the request. The integrated performance management 230 is configured to determine whether the request can be fulfilled within the specified time frame or whether the data should be retained or discarded based on the retention period. The integrated performance management 230 of the one or more processors 205 is configured to define and select the time frame. In an exemplary embodiment, real-time data is fetched directly from the distributed file system 245 on an hourly or daily basis. If the time frame is more than a week, the request is transmitted to the computation layer 235, which provides already-computed data in ranges on a weekly, monthly, or yearly basis. Further, the integrated performance management 230 is configured to transmit the request to the computation layer 235 if the selected time frame is greater than the retention period.
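The routing decision in the exemplary embodiment can be sketched as a single comparison. A one-week retention period is an assumption taken from the hourly/daily vs. weekly/monthly/yearly example above; the constant and function names are hypothetical.

```python
from datetime import timedelta

# Assumed retention window for raw data, per the exemplary embodiment.
RETENTION_PERIOD = timedelta(weeks=1)

def route_request(time_frame: timedelta) -> str:
    """Route a KPI request: time frames within the retention period are
    served with raw data fetched directly from the distributed file system;
    longer ranges go to the computation layer, which holds pre-computed
    weekly/monthly/yearly aggregates."""
    if time_frame > RETENTION_PERIOD:
        return "computation_layer"
    return "distributed_file_system"

hourly = route_request(timedelta(hours=1))
monthly = route_request(timedelta(days=30))
```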
[0044] The computation layer 235 is configured to further receive the request and queries from the integrated performance management 230. In an embodiment, the queries relate to cases where the selected time frame is greater than the retention period. The queries originate in the integrated performance management 230 and are then transmitted to the computation layer 235 for further processing. On receipt of the request, the user interface 220 is further configured to enable selection of the KPIs for the network aggregation and the attributes defined in the request. The integrated performance management 230 is configured to gather and process the network performance data from different data sources and, based on the network aggregation required, store the network performance data in the distributed data lake 240 by using the performance management engine. The KPI engine is responsible for managing all the KPIs of all the network elements. Counters collected and processed by the performance management engine through different data sources are used by the KPI engine to calculate the KPIs, segregate them based on the aggregation required, and store the KPIs in the distributed data lake 240. The KPI engine is responsible for all the reporting and visualization of the KPIs. The integrated performance management 230 can store the KPIs and the aggregated counter output data in the distributed data lake 240 for further processing.
[0045] Further, the KPIs are computed for the network aggregation and attributes mentioned in the request. The network aggregation for the KPIs refers to the process of combining and summarizing performance metrics from multiple network elements or components into higher-level metrics that provide a holistic view of network performance. The network aggregation is essential for network management and monitoring purposes, allowing operators to assess the overall health, efficiency, and quality of the network infrastructure. The attributes in the KPIs refer to specific characteristics or parameters that are measured to assess the performance of the system 120, process, or activity. These attributes provide detailed information about various aspects of performance and are essential for defining, monitoring, and analyzing the KPIs effectively.
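Network aggregation as described above can be illustrated by rolling per-element samples up to a higher-level metric. In this sketch, averaging is an assumed aggregation function and the attribute names (`cell`, `region`, `value`) are hypothetical.

```python
from collections import defaultdict

def aggregate_kpi(samples, level):
    """Roll per-element KPI samples up to a higher hierarchy level by
    grouping on that level's attribute and averaging within each group
    (averaging stands in for whatever aggregation the operator requires)."""
    groups = defaultdict(list)
    for sample in samples:
        groups[sample[level]].append(sample["value"])
    return {key: sum(vals) / len(vals) for key, vals in groups.items()}

samples = [
    {"cell": "c1", "region": "west", "value": 10.0},
    {"cell": "c2", "region": "west", "value": 30.0},
    {"cell": "c3", "region": "east", "value": 50.0},
]
by_region = aggregate_kpi(samples, "region")
```

Grouping on a different attribute (for example, `cell`) yields the lower-level view, which is the sense in which the same data supports both summary and detailed monitoring.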
[0046] On selection of the KPIs for the network aggregation and the attributes defined in the request, the computation layer 235 is further configured to compute the KPIs to obtain results/data for the network aggregation received from the user interface 220. The data obtained by the computation layer 235 is further sent to the integrated performance management 230. The integrated performance management 230 is configured to transmit the KPI data to the load balancer 225. The load balancer 225 is configured to transmit the received KPI data to the user interface 220.
[0047] The distributed file system 245 is configured to store the computed KPI data. The distributed file system 245 spans multiple file servers or multiple locations, such as file servers situated in different physical places. The files are accessible just as if they were stored locally, from any device and from anywhere on the network 105. The distributed file system 245 facilitates the sharing of information and files among the users on the network 105 in a controlled and authorized way.
[0048] Upon receipt of the computed KPI data from the load balancer 225, the user interface 220 is configured to generate a notification for the user and enable rendering of the data via the display unit 215 for the user to access. The user interface 220 is further configured to enable the user to access the attributes in the data and to dynamically drill down or dynamically roll up a network hierarchy. For example, the time range for dynamic drill-down and dynamic roll-up to access the attributes in the data can be set to 15 minutes, 1 hour, 6 hours, or a daily, weekly, monthly, or yearly basis. The network hierarchy refers to the organizational structure or arrangement of interconnected devices and systems within the network 105. The network hierarchy outlines the relationships and levels of authority or functionality among different network components. In an embodiment, the network hierarchy includes, but is not limited to, Converged Network Architecture (CNA), Static Network Architecture (SNA), and Hierarchical Network Architecture (HNA).
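Navigating the network hierarchy by drilling down or rolling up amounts to stepping through an ordered list of levels. The level names below are purely illustrative; the disclosure does not fix a particular set of levels.

```python
# Hypothetical hierarchy levels, ordered highest to lowest.
HIERARCHY = ["network", "region", "cluster", "cell"]

def drill_down(level: str) -> str:
    """Move one level deeper in the network hierarchy (clamped at the lowest)."""
    i = HIERARCHY.index(level)
    return HIERARCHY[min(i + 1, len(HIERARCHY) - 1)]

def roll_up(level: str) -> str:
    """Move one level higher in the network hierarchy (clamped at the highest)."""
    i = HIERARCHY.index(level)
    return HIERARCHY[max(i - 1, 0)]

down = drill_down("region")
up = roll_up("cluster")
```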
[0049] The CNA refers to creating a new attribute, possibly derived from an existing raw attribute or a combination of attributes, under specific conditions. These conditions involve operations like splitting, concatenating multiple attributes, or extracting sub attributes from the existing attribute of the same network element. The SNA involves creating a new static attribute that corresponds directly to the existing raw attribute within the same network element. A mapping between the new static attribute and the existing raw attribute is straightforward. The HNA involves creating a new attribute that is formed from two or more raw attributes, or potentially from a combination of SNA and CNA attributes across one or more network elements. In this regard, the attributes are aggregated or structured in layers of a hierarchical structure based on their relationships or dependencies. The network hierarchy is configured to define from lowest level which allows to perform concatenation, mapping and combination of both at any layer and allows navigation from lowest level to highest level on the fly without any pre-defined rules or configurations.
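The three attribute-creation schemes can be contrasted in a small sketch. The specific operators chosen here (identity mapping for SNA, a `-` concatenation for CNA, a `/` join for HNA) and all names are assumptions for illustration only.

```python
def sna_attribute(raw: str) -> str:
    """SNA: a new static attribute mapped one-to-one from an existing raw
    attribute of the same network element."""
    return raw

def cna_attribute(raw_a: str, raw_b: str) -> str:
    """CNA: a new attribute derived from existing attributes of the same
    network element, here by concatenation (splitting or sub-attribute
    extraction would be analogous operations)."""
    return f"{raw_a}-{raw_b}"

def hna_attribute(*attributes: str) -> str:
    """HNA: a new attribute combining two or more attributes, possibly
    across network elements or from SNA/CNA outputs, layered hierarchically."""
    return "/".join(attributes)

site = sna_attribute("site42")
band = cna_attribute("LTE", "B3")
path = hna_attribute(site, band)
```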
[0050] In an embodiment, the user interface 220 is further configured to enable uploading of a mapped dataset or defining an action for the attributes associated with the network hierarchy during dashboard computation. The user interface 220 is configured to allow the user to choose either the drill-down or the roll-up of the network hierarchy, and the resultant values are produced in real time after dashboard computation. For example, consider that the mapping is provided in a spreadsheet format. The new attribute is created by using the existing attributes, and following the mapping, enrichment is done. In another embodiment, the action or operation is selected for mapping. For example, a number of attributes, such as 100 attributes, may undergo the field operation during drilling down and rolling up. Consider an example of concatenating multiple attributes to construct a new attribute = attribute 1 + attribute 2 + ... + attribute N. The concatenation operation is performed at pair level, and the resulting outputs are merged with another pair.
[0051] As per one embodiment, the attribute 1 and the attribute 2 are concatenated to create a cluster. The cluster typically refers to a group of interconnected elements or attributes that work together to perform tasks, manage workloads, and provide redundancy. Consider, for example, that the attribute 1 is M and the attribute 2 is 101. The new attribute "M101" is created and is required to be inserted as well. Then, the cluster "MUM101" is created and displayed in the spreadsheet. While merging the values of the attributes, the data computation request is created along with it. The created data computation request is transmitted to the computation layer 235.
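The pair-level concatenation described above can be sketched as follows. This is a minimal illustration under the assumption that pair results are merged with the next pair until a single value remains; the function name and inputs are hypothetical.

```python
def concat_pairwise(attrs):
    """Concatenate N attributes (attribute 1 + attribute 2 + ... + attribute N)
    by repeatedly merging at pair level until one value remains."""
    values = [str(a) for a in attrs]
    while len(values) > 1:
        merged = []
        # Concatenate adjacent pairs; a trailing odd element carries over.
        for i in range(0, len(values) - 1, 2):
            merged.append(values[i] + values[i + 1])
        if len(values) % 2 == 1:
            merged.append(values[-1])
        values = merged
    return values[0] if values else ""

print(concat_pairwise(["M", 101]))            # "M101"
print(concat_pairwise(["a", "b", "c", "d"]))  # "abcd"
```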
[0052] Upon receiving the data computation request, the user interface 220 is further configured to enable selecting the KPIs for the network aggregation and the attributes defined in the request. The attributes in the KPIs refer to specific characteristics or parameters that are measured to assess the performance of the system 120, a process, or an activity. These attributes provide detailed information about various aspects of performance and are essential for defining, monitoring, and analyzing the KPIs effectively. The computation layer 235 is configured to compute the KPIs to obtain results/data for the network aggregation received from the user interface 220. The computed KPI data is transmitted to the load balancer 225, and the KPI data along with a notification is transmitted to the user interface 220.
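One way to picture the KPI computation step is as a grouped aggregation over measurement rows keyed by a hierarchy attribute. The sketch below is an assumption-laden illustration, not the claimed implementation: the KPI name, the aggregation functions, and the sample rows are all hypothetical.

```python
from collections import defaultdict

def compute_kpi(measurements, group_attr, value_attr, agg="avg"):
    """Compute a KPI per network-aggregation group.
    measurements: list of dict rows; group_attr: hierarchy attribute to
    aggregate on; value_attr: the measured numeric field."""
    groups = defaultdict(list)
    for row in measurements:
        groups[row[group_attr]].append(row[value_attr])
    if agg == "sum":
        return {k: sum(v) for k, v in groups.items()}
    # Default: average per aggregation level.
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Hypothetical per-cluster throughput measurements.
rows = [
    {"cluster": "MUM101", "throughput": 40.0},
    {"cluster": "MUM101", "throughput": 60.0},
    {"cluster": "DEL202", "throughput": 30.0},
]
print(compute_kpi(rows, "cluster", "throughput"))
# {'MUM101': 50.0, 'DEL202': 30.0}
```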
[0053] As per one embodiment, while uploading the mapped dataset or defining the operation for the attributes associated with the network hierarchy, the user interface 220 is configured to allow the user to choose either the drill down or the roll up of the network hierarchy. The user interface 220 is configured to generate an option to upload the mapped dataset or define the operation to be applied to the attributes associated with the network hierarchy. The operation details and the mapped dataset are provided to the dashboard on the user interface 220, and then sent again through the same flow described above. After computation, the result is shown to the user via the user interface 220. By doing so, the system 120 enables transitioning of the network data using a single/same dashboard, and analyzing and comparing views in real time, which makes data monitoring much more efficient. Further, the system 120 facilitates the user to change the network hierarchy of the execution and dynamically drill down or roll up the hierarchy level, which is provided on the single/same dashboard.
[0054] FIG. 3 is a schematic representation of the system 120 in which the operations of various entities are explained, according to one or more embodiments of the present disclosure. FIG. 3 describes the system 120 for real-time network data monitoring. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a for the purpose of description and illustration, and should not be construed as limiting the scope of the present disclosure.
[0055] As mentioned earlier in FIG. 1, in an embodiment, the first UE 110a may encompass electronic apparatuses. These devices are illustrative of, but not restricted to, personal computers, laptops, tablets, smartphones, or other devices enabled for web connectivity. The scope of the first UE 110a explicitly extends to a broad spectrum of electronic devices capable of executing computing operations and accessing networked resources, thereby providing users with a versatile range of functionalities for both personal and professional applications. This embodiment acknowledges the evolving nature of electronic devices and their integral role in facilitating access to digital services and platforms. In an embodiment, the first UE 110a can be associated with multiple users. Each user equipment 110 is communicatively coupled with the processor 205 via the network 105.
[0056] The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120. The one or more primary processors 305 are coupled with a memory unit 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to transmit the request to generate the dashboard to dynamically monitor the KPIs.
[0057] Furthermore, the one or more primary processors 305 within the UE 110 are uniquely configured to execute a series of steps as described herein. This configuration underscores the capability of the processor 205 to perform real-time network data monitoring. The operational synergy between the one or more primary processors 305 and the additional processors, guided by the executable instructions stored in the memory unit 310, facilitates seamless real-time network data monitoring.
[0058] As mentioned earlier in FIG.2, the system 120 includes the one or more processors 205, the memory 210, the display unit 215, the user interface 220, the load balancer 225, the distributed data lake 240, and the distributed file system 245. The operations and functions of the one or more processors 205, the memory 210, the display unit 215, the user interface 220, the distributed data lake 240, and the distributed file system 245 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0059] Further, the processor 205 includes the integrated performance management 230 and the computation layer 235. The operations and functions of the integrated performance management 230 and the computation layer 235 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3 should be read with the description provided for the system 120 in FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0060] FIG. 4 is a signal flow diagram illustrating the system 120 for real-time network data monitoring, according to one or more embodiments of the present disclosure.
[0061] At step 402, the request pertaining to real-time monitoring of the KPIs is transmitted by the user via the user interface 220. In an embodiment, the request includes, but is not limited to, a data computation request. The user includes at least one of, but is not limited to, a network operator. The user selects the KPIs for which real-time network data monitoring is required to be performed. The received request is transmitted to the load balancer 225.
[0062] At step 404, the load balancer 225 is positioned between the user interface 220 and the integrated performance management 230 based on the transmitted request from the user. The load balancer 225 is configured to distribute incoming network traffic across multiple servers or resources. The load balancer 225 is configured to evenly distribute the requests, which helps to prevent the integrated performance management 230 from becoming overwhelmed with the requests. The load balancer 225 is configured to control the flow of requests between the user interface 220 and the integrated performance management 230. Further, the load balancer 225 is configured to transmit the request to the integrated performance management 230.
[0063] At step 406, in one or more alternate embodiments, the load balancer 225 is configured to transmit the request to the integrated performance management 230. Once the request is received at the integrated performance management 230, the integrated performance management 230 is configured to determine whether the time frame in the request is greater than the retention period. In an embodiment, the time frame pertains to the waiting period of the request. The integrated performance management 230 of the one or more processors 205 is configured to define and select the time frame. Further, the request is forwarded from the integrated performance management 230 to the computation layer 235 if the selected time frame is greater than the retention period. Once the request is received by the computation layer 235, the computation layer 235 is configured to transmit an acknowledgement of the received request to the integrated performance management 230. Further, the KPI data is computed in the computation layer 235. Upon computing, the computed KPI data is transmitted to the integrated performance management 230.
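The routing decision at step 406 can be summarized as a simple comparison. The sketch below is a hedged illustration: the specification only states the forward-to-computation-layer branch, so the alternate branch (serving via the distributed data lake, as in step 408) is an assumption, and the component names are illustrative strings.

```python
def route_request(time_frame, retention_period):
    """Decide which component serves the request (illustrative only).
    If the selected time frame exceeds the retention period, the request
    is forwarded to the computation layer; otherwise the data may be
    fetched via the distributed data lake (assumed alternate path)."""
    if time_frame > retention_period:
        return "computation_layer"
    return "distributed_data_lake"

print(route_request(72, 24))  # "computation_layer"
print(route_request(12, 24))  # "distributed_data_lake"
```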
[0064] At step 408, in one or more alternate embodiments, the computation layer 235 is configured to receive the request, and queries are forwarded from the integrated performance management 230 to the distributed data lake 240. The distributed data lake 240 is configured to fetch the required KPI data and provide it to the integrated performance management 230.
[0065] At step 410, the KPI data is transmitted to the load balancer 225 from the integrated performance management 230 along with the notification. Further, the load balancer 225 is configured to transmit the received KPI data to the user interface 220.
[0066] The display unit 215 is configured to render the data for the user to access via the user interface 220 on receipt of the computed KPI data from the load balancer 225. The user interface 220 is configured to allow the user to choose either the drill down or the roll up of the network hierarchy, and the resultant values will be produced in real time. The user interface 220 is further configured to enable uploading of the mapped dataset or defining the operation for the attributes associated with the network hierarchy. The user interface 220 is configured to generate an option to upload the mapped dataset or define the action to be applied to the attributes associated with the network hierarchy. The action details and the mapped dataset are provided to the dashboard on the user interface 220, and then sent again through the same flow described above. After computation, the result is shown to the user via the user interface 220.
[0067] FIG. 5 is a flow diagram illustrating a method 500 for real-time network data monitoring, according to one or more embodiments of the present disclosure.
[0068] At step 505, the method 500 includes the step of creating the dashboard for on-demand monitoring of the KPIs based on the request received from the UE 110 via the user interface 220. In an embodiment, the request includes, but is not limited to, a data computation request. The user includes at least one of, but is not limited to, the network operator. The user selects the KPIs for which real-time network data monitoring is required to be performed. The received request is transmitted to the load balancer 225.
[0069] At step 510, the method 500 includes the step of computing, by the computation layer 235, the KPIs for the network aggregation based on the attributes selected or defined in the request. The user interface 220 is further configured to enable the user to select the KPIs for the network aggregation and the attributes defined in the request. The attributes in the KPIs refer to specific characteristics or parameters that are measured to assess the performance of the system 120, a process, or an activity.
[0070] At step 515, the method 500 includes the step of obtaining results/data for the network aggregation received from the user interface 220 based on selection of the KPIs for the network aggregation and the attributes defined in the request. The data obtained by the computation layer 235 is further sent to the integrated performance management 230.
[0071] At step 520, the method 500 includes the step of notifying the data or results to the user and enabling rendering of the data or results for the user to access via the display unit 215 on receipt of the computed KPI data from the load balancer 225. The user interface 220 is further configured to enable the user to access the attributes in the data, and dynamically drill down or dynamically roll up the network hierarchy.
[0072] At step 525, the method 500 includes the step of providing access to the data or results associated with the KPIs by drilling down or rolling up. The user is able to choose either the drill down or the roll up of the network hierarchy, and the resultant values will accordingly be produced in real time. The user interface 220 is further configured to enable uploading of the mapped dataset or defining an action for the attributes associated with the network hierarchy. The user interface 220 is configured to generate an option to upload the mapped dataset or define the action to be applied to the attributes associated with the network hierarchy. The action details and the mapped dataset are provided to the dashboard on the user interface 220, and then sent again through the same flow described above. After computation, the result is shown to the user via the user interface 220.
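Drilling down and rolling up over a network hierarchy can be sketched as re-aggregating the same rows at a chosen hierarchy depth. The sketch below is illustrative only: the hierarchy levels (region, site, cell), field names, and sample values are hypothetical, and the point is simply that one dataset supports any level on demand without pre-defined rules.

```python
from collections import defaultdict

HIERARCHY = ["region", "site", "cell"]  # highest level first, lowest level last

def aggregate_at(rows, level, value_key="kpi"):
    """Sum a KPI at any hierarchy level: rolling up uses a shallow level,
    drilling down uses a deeper one."""
    depth = HIERARCHY.index(level) + 1
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[lvl] for lvl in HIERARCHY[:depth])
        totals[key] += row[value_key]
    return dict(totals)

rows = [
    {"region": "WEST", "site": "MUM", "cell": "MUM101", "kpi": 10.0},
    {"region": "WEST", "site": "MUM", "cell": "MUM102", "kpi": 20.0},
    {"region": "WEST", "site": "PUN", "cell": "PUN201", "kpi": 5.0},
]

print(aggregate_at(rows, "region"))  # rolled up:   {('WEST',): 35.0}
print(aggregate_at(rows, "site"))    # drilled down to sites
```

Because the aggregation level is a parameter, the same dashboard can switch views in real time rather than issuing separate pre-computed queries per level.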
[0073] The present invention discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 205. The processor 205 is configured to create a dashboard for on-demand dynamic monitoring of Key Performance Indicators (KPIs) based on a request received from a User Equipment (UE) 110. The processor 205 is configured to compute the KPIs for the network aggregation based on attributes selected or defined by the user in the request. The processor 205 is configured to obtain data or results for the attributes associated with the KPIs. Further, the processor 205 is configured to notify the data or results to the user and display the data or results. Further, the processor 205 is configured to provide access to the data or results associated with the KPIs by drilling down or rolling up.
[0074] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0075] The present disclosure incorporates the technical advancement of classifying the network data into the network architecture, thereby being beneficial to visualize the KPIs aggregated on the network architectures. The present invention enables transitioning of the network data using the single/same dashboard, and analyzing and comparing views in real time, which makes data monitoring much more efficient. Further, the present invention facilitates changing the network hierarchy of the execution and dynamically drilling down or rolling up the hierarchy level, which is provided on the single/same dashboard, and also prevents receiving of multiple data from the distributed data lake, thereby improving the processing speed of the processor 205 and reducing the requirement of memory space.
[0076] The present invention offers multiple advantages over the prior art, and the above listed are a few examples emphasizing some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0077] Environment – 100
[0078] Network – 105
[0079] User Equipment – 110
[0080] Server – 115
[0081] System – 120
[0082] Processor – 205
[0083] Memory – 210
[0084] Display unit – 215
[0085] User Interface – 220
[0086] Load balancer – 225
[0087] Integrated performance management – 230
[0088] Computation layer – 235
[0089] Distributed data lake – 240
[0090] One or more primary processors – 305
[0091] Memory – 310


CLAIMS
We Claim:
1. A method (500) for real-time network data monitoring, the method (500) comprising the steps of:
creating (505), by one or more processors (205), a dashboard for on-demand monitoring of Key Performance Indicators (KPIs) based on a request received from a User Equipment (UE) (110);
computing (510), by the one or more processors (205), the KPIs for network aggregation based on attributes selected or defined in the request;
obtaining (515), by the one or more processors (205), data or results for the attributes associated with the KPIs;
notifying (520), by the one or more processors (205), the data or results to the UE and displaying the data or results; and
providing (525), by the one or more processors (205), access to the data or results associated with the KPIs by drilling down or rolling up.

2. The method (500) as claimed in claim 1, comprises determining, by the one or more processors (205), if a time frame is greater than a retention period in the request.

3. The method (500) as claimed in claim 2, wherein determining comprises defining and selecting, by the one or more processors (205), the time frame, wherein if the selected time frame is greater than the retention period, the request is forwarded to a computation layer (235).

4. The method (500) as claimed in claim 1, wherein obtaining and notifying comprises processing, by the one or more processors (205), the request.
5. The method (500) as claimed in claim 1, comprises populating, by the one or more processors (205), dynamic data associated with the network hierarchy during drilling down or rolling up.

6. The method (500) as claimed in claim 1, comprises selecting, by the one or more processors (205), a dynamic drill down or a dynamic roll up, wherein the selecting further enables uploading of a mapped dataset, or defining an action for the attributes associated with the network hierarchy.

7. The method (500) as claimed in claim 1, wherein drilling down or rolling up comprises defining a network hierarchy to a lowest level, which allows performing concatenation, mapping and a combination of both at any level, and allowing navigation from the lowest level to a highest level.

8. A system (120) for real-time network data monitoring, the system (120) comprises:
a user interface (220), configured to capture dashboard demand created by a user for monitoring Key Performance Indicators (KPIs);
an integrated performance management (230), configured to receive the request from the user interface (220); and
a computation layer (235), configured to further receive the request and queries from the integrated performance management (230);
wherein, the user interface (220), is further configured to enable selecting the KPIs for network aggregation and attributes defined in the request;
wherein, the computation layer (235) is further configured to compute the KPIs to obtain results/data for the network aggregation received from the user interface (220).

9. The system (120) as claimed in claim 8, comprises a load balancer (225) positioned between the user interface (220) and the integrated performance management (230), and further configured to control flow of request between the user interface (220) and the integrated performance management (230).

10. The system (120) as claimed in claim 8, wherein the data obtained by the computation layer (235) is further sent to the integrated performance management (230).

11. The system (120) as claimed in claim 8, wherein the user interface (220) is configured to generate a notification for the user and enable rendering of the data for the user to access.

12. The system (120) as claimed in claim 11, wherein the user interface (220) is further configured to enable the user to access the attribute in the data, and dynamically drill down or dynamically roll up the network hierarchy.

13. The system (120) as claimed in claim 12, wherein the user interface (220) is further configured to enable uploading of mapped dataset, or defining an action for the attributes associated with the network hierarchy.

14. The system (120) as claimed in claim 8, wherein a network hierarchy is defined to a lowest level, which allows performing concatenation, mapping and a combination of both at any level, and allows navigation from the lowest level to a highest level.

15. A User Equipment (UE) (110) comprising:
one or more primary processors (305) communicatively coupled to the one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the UE (110) to:
transmit a request to generate a dashboard to dynamically monitor Key Performance Indicators (KPIs), wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321048154-STATEMENT OF UNDERTAKING (FORM 3) [17-07-2023(online)].pdf 2023-07-17
2 202321048154-PROVISIONAL SPECIFICATION [17-07-2023(online)].pdf 2023-07-17
3 202321048154-FORM 1 [17-07-2023(online)].pdf 2023-07-17
4 202321048154-FIGURE OF ABSTRACT [17-07-2023(online)].pdf 2023-07-17
5 202321048154-DRAWINGS [17-07-2023(online)].pdf 2023-07-17
6 202321048154-DECLARATION OF INVENTORSHIP (FORM 5) [17-07-2023(online)].pdf 2023-07-17
7 202321048154-FORM-26 [03-10-2023(online)].pdf 2023-10-03
8 202321048154-Proof of Right [08-01-2024(online)].pdf 2024-01-08
9 202321048154-DRAWING [16-07-2024(online)].pdf 2024-07-16
10 202321048154-COMPLETE SPECIFICATION [16-07-2024(online)].pdf 2024-07-16
11 Abstract-1.jpg 2024-09-04
12 202321048154-Power of Attorney [05-11-2024(online)].pdf 2024-11-05
13 202321048154-Form 1 (Submitted on date of filing) [05-11-2024(online)].pdf 2024-11-05
14 202321048154-Covering Letter [05-11-2024(online)].pdf 2024-11-05
15 202321048154-CERTIFIED COPIES TRANSMISSION TO IB [05-11-2024(online)].pdf 2024-11-05
16 202321048154-FORM 3 [03-12-2024(online)].pdf 2024-12-03
17 202321048154-FORM 18 [20-03-2025(online)].pdf 2025-03-20