
Method And System For Live Monitoring Of One Or More Key Performance Indicators

Abstract: The present disclosure relates to a system (120) and a method (500) for live monitoring of one or more Key Performance Indicators (KPIs). The method (500) includes the step of receiving one or more requests pertaining to live monitoring of the one or more KPIs from one or more users via a User Interface (UI) (215). The method (500) includes the step of executing the one or more KPIs within a specified time range extracted from the one or more requests. The method (500) includes the step of performing data computation pertaining to the one or more executed KPIs. The method (500) includes the step of compiling the computed data pertaining to the one or more executed KPIs. Ref. Fig. 6


Patent Information

Filing Date
19 July 2023
Publication Number
04/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Inventors

1. Jugal Kishore Kolariya
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
2. Gaurav Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Sunil Meena
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
6. Supriya De
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
7. Gourav Gurbani
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
8. Kishan Sahu
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
9. Sanjana Chaudhary
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
10. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
11. Chandra Kumar Ganveer
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
12. Kumar Debashish
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
13. Tilala Mehul
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
14. Yogesh Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
15. Kunal Telgote
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
16. Niharika Patnam
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
17. Avinash Kushwaha
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
18. Dharmendra Kumar Vishwakarma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
19. Kalikivayi Srinath
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
20. Vitap Pandey
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR LIVE MONITORING OF ONE OR MORE KEY PERFORMANCE INDICATORS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention generally relates to wireless communication networks, and more particularly relates to a method and system for live monitoring of one or more Key Performance Indicators (KPIs) of the networks.
BACKGROUND OF THE INVENTION
[0002] Executing dashboards is conventionally a one-time activity performed by a user whenever needed. The user must click the execute button every time the results need to be updated. Furthermore, there is a provision for checking compiled reports at a later point in time whenever the user wants. However, continuous monitoring sometimes becomes essential to understand trends in real-time.
[0003] In some cases, computation of trends is required over time spans extending to past months or years. Conventional techniques require a large number of computational steps, and performing computations over such long durations is challenging. In the above cases, it therefore becomes necessary to optimize computation efficiency while maintaining real-time data delivery. However, computational requirements and data delivery efficiency pose significant obstacles when dealing with extended monitoring periods, and current solutions do not offer the required optimization.
[0004] Therefore, there is a need for an optimization solution that substantially reduces effort and time consumed during network monitoring. In particular, there is a need to provide solutions that require performing only one computation when the same requests are received from multiple users. In other words, there is a need for a solution to avoid extra computations for similar reports.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a method and a system for live monitoring of one or more Key Performance Indicators (KPIs).
[0006] In one aspect of the present invention, the method for live monitoring of the one or more KPIs is disclosed. The method includes the step of receiving, by one or more processors, one or more requests pertaining to live monitoring of the one or more KPIs from one or more users via a User Interface (UI). The method includes the step of executing, by the one or more processors, the one or more KPIs within a specified time range extracted from the one or more requests. The method includes the step of performing, by the one or more processors, data computation pertaining to the one or more executed KPIs. The method includes the step of compiling, by the one or more processors, the computed data pertaining to the one or more executed KPIs.
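As a purely illustrative sketch of the four steps above (all names, and the averaging aggregation, are assumptions and not part of the claimed system), the flow might look like:

```python
from dataclasses import dataclass

@dataclass
class MonitoringRequest:
    """Hypothetical request payload; field names are illustrative only."""
    kpis: list   # e.g. ["cpu_usage", "call_success"]
    start: int   # start of the specified time range (epoch seconds)
    end: int     # end of the specified time range (epoch seconds)

def live_monitor(request, fetch_samples):
    """Receive a request, execute each KPI over the specified time range,
    perform the data computation (here: a simple average), and compile
    the results into a single report."""
    computed = {}
    for kpi in request.kpis:
        samples = fetch_samples(kpi, request.start, request.end)
        computed[kpi] = sum(samples) / len(samples) if samples else None
    return {"range": (request.start, request.end), "kpis": computed}

# usage with a stubbed data source standing in for the network counters
report = live_monitor(
    MonitoringRequest(kpis=["cpu_usage"], start=0, end=900),
    fetch_samples=lambda kpi, s, e: [40.0, 60.0],
)
```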
[0007] In one embodiment, the one or more processors perform data computation pertaining to the one or more executed KPIs at a computation layer by transmitting one or more data computation requests to the computation layer.
[0008] In another embodiment, performing the data computation pertaining to the one or more executed KPIs includes the steps of: identifying, by the one or more processors, utilizing the computation layer, similarities between the one or more requests received from the one or more users; initiating, by the one or more processors, the data computation process based on the identified similarities between the one or more requests; determining, by the one or more processors, an execution time range pertaining to the data computation process; and determining, by the one or more processors, utilizing the computation layer, based on the execution time range of the data computation, whether to utilize a caching layer to fetch pre-computed data associated with the identified one or more similar requests from the database or to instruct a trained model to generate one or more new KPI values.
[0009] In yet another embodiment, determining whether to utilize the caching layer or to instruct the trained model includes the steps of: comparing, by the one or more processors, a current execution time range of the data computation with a predefined execution time range for the data computation; and, in response to the comparison, determining, by the one or more processors, whether to utilize the caching layer to fetch the pre-computed data associated with the identified one or more similar requests from the database or to instruct the trained model to generate the one or more new KPI values.
[0010] In yet another embodiment, instructing the trained model to generate the one or more new KPI values includes the steps of: predicting, by the one or more processors, utilizing the trained model, the one or more new KPI values based on trends stored in the trained model; and generating, by the one or more processors, utilizing the trained model, the one or more new KPI values based on the prediction performed.
[0011] In yet another embodiment, the one or more processors identify one or more similar requests based on a unique identifier associated with each dashboard of the user interface, thereby avoiding redundant computations.
[0012] In yet another embodiment, the one or more processors, by utilizing the caching layer, allow multiple users to request the same dashboard utilizing the precomputed data stored in the database.
[0013] In yet another embodiment, the compiled computed data pertaining to the one or more executed KPIs are displayed on the dashboard of the user interface of a User Equipment (UE) as a report.
[0014] In another aspect of the present invention, the system for live monitoring of the one or more Key Performance Indicators (KPIs) is disclosed. The system includes a transceiver configured to receive one or more requests pertaining to live monitoring of one or more KPIs from one or more users via a User Interface (UI). The system further includes an execution unit configured to execute the one or more KPIs within a specified time range extracted from the one or more requests. The system further includes a computation unit configured to perform data computation pertaining to the one or more executed KPIs. The system further includes a compilation unit configured to compile the computed data pertaining to the one or more executed KPIs.
[0015] In another aspect of the present invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors of the system. The one or more primary processors are coupled with a memory. The memory stores instructions which, when executed by the one or more primary processors, cause the UE to transmit one or more requests pertaining to live monitoring of selected one or more Key Performance Indicators (KPIs) via a User Interface (UI).
[0016] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0018] FIG. 1 is an exemplary block diagram of an environment for live monitoring of one or more Key Performance Indicators (KPIs), according to one or more embodiments of the present disclosure;
[0019] FIG. 2 is an exemplary block diagram of a system for live monitoring of the one or more KPIs, according to one or more embodiments of the present disclosure;
[0020] FIG. 3 is a schematic representation of a workflow of the system of FIG. 2, according to one or more embodiments of the present disclosure;
[0021] FIG. 4 is an architecture that can be implemented in the system of FIG. 2, according to one or more embodiments of the present disclosure;
[0022] FIG. 5 is a signal flow diagram illustrating the system for live monitoring of the one or more KPIs, according to one or more embodiments of the present disclosure; and
[0023] FIG. 6 is a flow diagram illustrating a method for live monitoring of the one or more KPIs, according to one or more embodiments of the present disclosure.
[0024] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0025] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0026] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0027] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0028] The present disclosure provides a method and a system for the optimization of live monitoring processes and failure caches. When multiple users perform live monitoring on the same dashboard, it is important to ensure smooth operation, minimize failures, and reduce computation requirements. Traditionally, each user's monitoring would require separate computations, resulting in redundant efforts and increased network connections. To address this, the optimization aims to reduce computation efforts and introduce dynamic correlating features in reports. By correlating previous data with live data during monitoring, the system provides computed outputs that users can observe.
[0029] FIG. 1 illustrates an exemplary block diagram of an environment 100 for live monitoring of one or more Key Performance Indicators (KPIs), according to one or more embodiments of the present disclosure. The environment 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120 to transmit one or more requests pertaining to live monitoring of selected Key Performance Indicators (KPIs) via a User Interface (UI) 215 (as shown in FIG. 2). The live monitoring of KPIs refers to the real-time tracking, analysis, and reporting of data related to the performance of the network 105, and ensures that network operators and service providers can continuously observe and respond to the current state of the network, addressing issues as they arise and maintaining the performance of the network 105. The one or more requests include one or more performance counters, an aggregation type selected for live monitoring of the KPI values, and a date range or time range.
[0030] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the UE 110 from the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105.
[0031] In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0032] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0033] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0034] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, an entity operating the server 115 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defence facility, or any other facility that provides a service.
[0035] The environment 100 further includes the system 120 communicably coupled to the server 115 and each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is adapted to be embedded within the server 115 or is embedded as the individual entity. However, for the purpose of description, the system 120 is illustrated as remotely coupled with the server 115, without deviating from the scope of the present disclosure.
[0036] The system 120 is further configured to employ Transmission Control Protocol (TCP) connection to identify any connection loss in the network 105 and thereby improving overall efficiency. The TCP connection is a communication standard enabling applications and the system 120 to exchange information over the network 105.
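Connection-loss detection of this kind is commonly implemented with standard TCP keepalive options. The sketch below is a minimal, hedged example: `SO_KEEPALIVE` is portable, the `TCP_KEEP*` tuning constants are platform-specific (hence the guards), and the timing values are assumptions rather than values taken from the disclosure.

```python
import socket

def enable_keepalive(sock, idle=30, interval=10, probes=3):
    """Enable TCP keepalive probes on a socket so that a lost peer is
    detected at the transport layer. The idle/interval/probes timings
    are illustrative assumptions."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):    # Linux-only: idle time before probing
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):   # Linux-only: seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):     # Linux-only: failed probes before drop
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return sock

# usage: apply to a client socket before connect()
s = enable_keepalive(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
```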
[0037] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0038] FIG. 2 illustrates an exemplary block diagram of the system 120 for live monitoring of the one or more KPIs, according to one or more embodiments of the present disclosure. The system 120 includes one or more processors 205, a memory 210, the user interface 215, and a database 240. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 120 includes one processor 205. However, it is to be noted that the system 120 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0039] The information related to the one or more requests pertaining to live monitoring of the one or more KPIs is provided or stored in the memory 210. Among other capabilities, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROMs, FLASH memory, unalterable memory, and the like.
[0040] The information related to the one or more requests pertaining to live monitoring of the one or more KPIs is rendered on the user interface 215. The user interface 215 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of the one or more components include, but are not limited to, the UE 110 and the database 240.
[0041] The database 240 is configured to store the one or more requests pertaining to live monitoring of the one or more KPIs. Further, the database 240 provides structured storage, support for complex queries, and enables efficient data retrieval and analysis. The database 240 is one of, but is not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive (e.g., a database can be both commercial and cloud-based, or both relational and open-source). The terms "database" and "distributed file system" are used interchangeably herein without limiting the scope of the disclosure.
[0042] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor 205 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0043] In order for the system 120 to live monitor the one or more KPIs, the processor 205 includes a transceiver 220, an execution unit 225, a computation unit 230, and a compilation unit 235 communicably coupled to each other for live monitoring of the one or more KPIs.
[0044] The transceiver 220, the execution unit 225, the computation unit 230, and the compilation unit 235, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0045] The transceiver 220 is configured to receive the one or more requests pertaining to live monitoring of the one or more KPIs from the one or more users via the UI 215. The one or more users include at least one of, but not limited to, a network operator. In an embodiment, the one or more requests include one of, but not limited to, data computation requests. The one or more KPIs are quantifiable metrics used to evaluate the success or performance of an organization or individual in achieving specific objectives or strategic goals, providing a way to measure progress, assess performance against targets or benchmarks, and make data-driven decisions. The one or more KPIs are typically selected based on their relevance to the organization's objectives, measurability, achievability, and ability to provide actionable insights. The one or more users select, on the UI 215, the one or more KPIs on which live monitoring is to be performed. In an exemplary embodiment, the one or more KPIs include at least one of server utilization, Central Processing Unit (CPU) usage, memory usage, disk I/O rates, response times, call success, error rates, and KPIs related to performance counters. Further, in one embodiment, if the one or more KPIs exceed predefined thresholds, one or more alerts are transmitted to the UE 110 and/or displayed on the UI 215 to notify at least one of the one or more users and an administrator.
[0046] Upon receiving the one or more requests pertaining to live monitoring of the one or more KPIs from the one or more users, the execution unit 225 is configured to execute the one or more KPIs within a specified time range. In this regard, the execution unit 225 is configured to execute the one or more KPIs from the one or more requests utilizing a cache layer 415. In an exemplary embodiment, the specified time range is at least 15 minutes. If the one or more KPIs are not present in the cache layer 415, the execution unit 225 is configured to transmit the one or more requests to the computation unit 230. The computation unit 230 is configured to perform the data computation process to execute the one or more KPIs from the one or more requests. In the process of data computation, the computation unit 230 is configured to identify similarities between the one or more requests received from the one or more users.
[0047] The computation unit 230 identifies the one or more similar requests based on a unique identifier associated with each dashboard of the UI 215, thereby avoiding redundant computations. In an embodiment, the unique identifier refers to a hash code. Hashing is a process that takes input data (such as a file, a text, or any data) and produces a fixed-size string of characters, which is the hash code or hash value. The unique identifier is associated with each dashboard of the UI 215. The dashboard refers to a graphical user interface (GUI) or a web-based interface that provides an overview and real-time status of various network components, systems, and services. The dashboard displays the KPIs, metrics, and statistics related to network traffic, bandwidth utilization, latency, packet loss, device uptime, security events, and the like.
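The hash-code identifier described above can be sketched as follows; the parameter set and the use of SHA-256 are assumptions, since any stable hash over the dashboard parameters would serve:

```python
import hashlib
import json

def dashboard_key(kpis, time_range, aggregation):
    """Derive a unique identifier (hash code) for a dashboard request.
    Identical requests hash to the same key, so their computation can be
    shared. Field names and SHA-256 are illustrative assumptions."""
    payload = json.dumps(
        {"kpis": sorted(kpis), "range": list(time_range), "agg": aggregation},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

# two users requesting the same dashboard produce the same identifier,
# so only one computation is needed
k1 = dashboard_key(["cpu_usage", "latency"], (0, 900), "avg")
k2 = dashboard_key(["latency", "cpu_usage"], (0, 900), "avg")
```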
[0048] On identification of the one or more similar requests based on the unique identifier, the computation unit 230 is configured to initiate the data computation (data aggregation) process based on the identified similarities between the one or more requests. Upon initiating the data computation process, the computation unit 230 is configured to transmit and execute the one or more KPIs in the database 240. In one embodiment, if the one or more KPI values are not present in the database 240 for a current interval, the computation unit 230 fetches the one or more KPI values from a Machine Learning (ML) model 505 (shown in FIG. 5). Owing to this, monitoring of the one or more KPI values advantageously continues even in the case of data loss due to hardware issues at the counter collectors. Further, the computation unit 230 is configured to create a report in the ML model 505. Further, the computation unit 230 is configured to determine an execution time range pertaining to the data computation process. The execution time range refers to the time range for creation of the required report in the ML model 505. The execution time range is determined by checking the cache layer 415 and the computation unit 230 for execution of the one or more KPIs.
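The database-with-ML-fallback behaviour can be sketched as below; all names are hypothetical stand-ins for the database 240 and the ML model 505:

```python
def kpi_for_interval(kpi, interval, database, model):
    """Return the KPI value for the interval from the database; if it is
    missing (e.g. data loss at the counter collectors), fall back to the
    trained model so that live monitoring continues uninterrupted."""
    value = database.get((kpi, interval))
    if value is not None:
        return value, "database"
    return model(kpi, interval), "model"

# stubbed database and model
db = {("cpu_usage", 1): 42.0}
trend_model = lambda kpi, interval: 45.0  # stand-in for the trained ML model
```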
[0049] On determination of the execution time range pertaining to the data computation process, the computation unit 230 is configured to compare a current execution time range of the data computation with a predefined execution time range for the data computation. The current execution time range refers to the actual period of time taken to complete the data computation task or process during its execution. In an example, if the data computation task started at 9:00 AM and is completed at 10:30 AM, the current execution time range for that data computation task would be from 9:00 AM to 10:30 AM. Monitoring the current execution time range, advantageously, helps in assessing the performance and efficiency of the data computation processes in real-time. The predefined execution time range refers to a time period within which the data computation task or process is expected to be completed. In an exemplary embodiment, the data computation task has a predefined execution time range of 60 to 90 minutes. As such, the data computation task is required to be completed within the predefined execution time range under normal operating conditions.
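The comparison of the current execution time range against the predefined execution time range can be sketched as follows (an illustrative helper only; the function name and the 60-90 minute bounds are taken from the exemplary embodiment above, not from any disclosed code):

```python
from datetime import datetime, timedelta

def within_predefined_range(started_at, finished_at,
                            min_duration=timedelta(minutes=60),
                            max_duration=timedelta(minutes=90)):
    """Compare the current execution time range of a data computation
    task with a predefined execution time range (here 60-90 minutes)."""
    current = finished_at - started_at
    return min_duration <= current <= max_duration

# The 9:00 AM - 10:30 AM example from the description: 90 minutes, in range.
start = datetime(2024, 1, 1, 9, 0)
end = datetime(2024, 1, 1, 10, 30)
assert within_predefined_range(start, end)
```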
[0050] Based on the comparison, in one embodiment, the integrated performance management 410 is configured to determine whether to utilize a caching layer to fetch a precomputed data associated with the identified one or more similar requests from the database 240. The precomputed data refers to information or results that have been calculated or generated in advance and stored for future use. The precomputed data includes the generated one or more KPI values within the specified time range. The computation unit 230 allows the one or more users to request the same dashboard without repeating computations utilizing the precomputed data stored in the database 240.
[0051] Based on the comparison, in another embodiment, the computation unit 230 is configured to instruct a trained model to generate one or more new KPI values based on the execution time range of the data computation. The model is trained by using the historical data that includes the stored one or more KPI values along with other relevant features to generate a new value of the one or more KPIs. The trained model includes at least the ML model 505. The computation unit 230 is configured to instruct the trained model to generate the one or more new KPI values.
[0052] On instruction of the trained model to generate the one or more new KPI values, the computation unit 230 is configured to predict the one or more new KPI values based on trends stored in the trained model. When the one or more new requests are received by the computation unit 230, the computation unit 230 passes the one or more new requests to the trained model. The trained model, thereafter, utilizes the stored trends and patterns to predict the one or more new KPI values based on the one or more new requests. The trend refers to the general direction or tendency in which data points are moving over time and describes the overall movement or pattern of change in a dataset.
[0053] The created report is trained and stored in the ML model 505 on a daily basis. The trends are created based on an understanding of historical data. In an exemplary embodiment, with respect to server utilization, the one or more KPI values are indicated as 99.8 for the present day. After two days, due to an increase in network traffic, the one or more KPI values corresponding to the server utilization also increase. As such, the one or more KPI values are trained on the ML model 505 on a daily basis and, based on the trends identified in the fluctuation of the one or more KPI values with respect to the server utilization, the one or more new KPI values with respect to the server utilization are predicted. In one embodiment, if there is an increase in the one or more new KPI values as per the prediction, then the resources corresponding to the server utilization are also increased. In this regard, the computation unit 230 is configured to generate the one or more new KPI values.
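The trend-based prediction of the server-utilization example above can be illustrated with a minimal stand-in for the trained model. The ML model 505 itself is not specified in the disclosure; the simple day-over-day extrapolation below is a hypothetical placeholder showing only how a stored trend yields a new KPI value:

```python
def predict_next_kpi(history):
    """Predict the next KPI value from a daily history by extrapolating
    the average day-over-day change (a stand-in for the trained model's
    stored trend)."""
    if len(history) < 2:
        return history[-1]  # no trend yet; repeat the last observation
    deltas = [b - a for a, b in zip(history, history[1:])]
    trend = sum(deltas) / len(deltas)
    return history[-1] + trend

# Server-utilization KPI rising with network traffic across three days:
utilization = [99.8, 100.4, 101.0]
predicted = predict_next_kpi(utilization)
assert predicted > utilization[-1]  # upward trend -> scale up resources
```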
[0054] On generation of the one or more new KPI values, the compilation unit 235 is configured to compile the computed data pertaining to the one or more executed KPIs. In one embodiment, the compiled computed data pertaining to the one or more executed KPIs are displayed on the dashboard of the UI 215 of the UE 110 as a report. The report refers to a structured presentation or display of the compiled computed data related to the one or more KPIs. The report displays both actual KPI values, representing historical data, and the predicted values based on the trends stored in the trained model. The report utilizes visual elements like charts, graphs, tables, and diagrams to visually represent the compiled data.
[0055] By doing so, the system 120 provides optimal resource utilization, reduced execution time, and improved troubleshooting capabilities through real-time fault detection and notifications. Notifications and alarms are generated to aid in identifying the root cause quickly during the live monitoring of the one or more KPIs. Further, the system 120 provides enhanced service quality by identifying peak hours and potential bottlenecks, and the ability to predict performance indicators using trained-model forecasting, particularly in cases of network delays.
[0056] FIG. 3 is a schematic representation of the system 120, in which the operations of various entities are explained, according to one or more embodiments of the present disclosure. FIG. 3 describes the system 120 for live monitoring of the one or more KPIs. It is to be noted that the embodiment with respect to FIG. 3 is explained with respect to the first UE 110a for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0057] As mentioned earlier in FIG. 1, in an embodiment, the first UE 110a may encompass electronic apparatuses. These devices are illustrative of, but not restricted to, personal computers, laptops, tablets, smartphones, or other devices enabled for web connectivity. The scope of the first UE 110a explicitly extends to a broad spectrum of electronic devices capable of executing computing operations and accessing networked resources, thereby providing users with a versatile range of functionalities for both personal and professional applications. This embodiment acknowledges the evolving nature of electronic devices and their integral role in facilitating access to digital services and platforms. In an embodiment, the first UE 110a can be associated with multiple users. Each user equipment 110 is communicatively coupled with the processor 205 via the network 105.
[0058] The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120. The one or more primary processors 305 are coupled with a memory unit 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to transmit the one or more requests pertaining to live monitoring on selected KPIs via the UI 215.
[0059] Furthermore, the one or more primary processors 305 within the UE 110 are uniquely configured to execute a series of steps as described herein. This configuration underscores the capability of the one or more processors 205 to perform live monitoring of the one or more KPIs. The operational synergy between the one or more primary processors 305 and the additional processors, guided by the executable instructions stored in the memory unit 310, facilitates seamless live monitoring of the one or more KPIs.
[0060] As mentioned earlier in FIG. 2, the one or more processors 205 of the system 120 are configured to receive the one or more requests pertaining to live monitoring of the one or more KPIs from the one or more users via the UI 215. The one or more processors 205 of the system 120 are configured to execute the one or more KPIs within the specified time range extracted from the one or more requests. The one or more processors 205 of the system 120 are configured to perform the data computation pertaining to the one or more executed KPIs, and compile the computed data pertaining to the one or more executed KPIs.
[0061] As per the illustrated embodiment, the system 120 includes the one or more processors 205, the memory 210, the user interface 215, and the database 240. The operations and functions of the one or more processors 205, the memory 210, the user interface 215, and the database 240 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0062] Further, the processor 205 includes the transceiver 220, the execution unit 225, the computation unit 230, and the compilation unit 235. The operations and functions of the transceiver 220, the execution unit 225, the computation unit 230, and the compilation unit 235 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3, should be read with the description provided for the system 120 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0063] FIG.4 is an architecture 400 that can be implemented in the system 120 of FIG.2, according to one or more embodiments of the present disclosure. The architecture 400 of the system 120 includes the user interface 215, a load balancer 405, an integrated performance management 410, the computation unit 230, a cache layer 415, and the distributed file system 240.
[0064] The user interface 215 is configured to transmit the one or more requests pertaining to live monitoring of the one or more KPIs from the one or more users by using the UE 110.
[0065] The load balancer 405 is configured to distribute network or application traffic across multiple servers. This process ensures that no single server becomes overwhelmed, thereby optimizing resource use, maximizing throughput, minimizing response time, and avoiding overload. The load balancer 405 thus maintains the performance and reliability of the networks and services.
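The distribution performed by the load balancer 405 can be sketched with a simple round-robin policy. The disclosure does not name a particular balancing algorithm, so round-robin and the server names below are illustrative assumptions:

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming monitoring requests across multiple server
    instances so that no single server becomes overwhelmed."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)  # endless rotation of targets

    def route(self, request):
        """Return (server, request): the next server in rotation."""
        return next(self._cycle), request

# Requests are spread evenly across three hypothetical IPM instances:
balancer = RoundRobinBalancer(["ipm-1", "ipm-2", "ipm-3"])
targets = [balancer.route(f"req-{i}")[0] for i in range(6)]
assert targets == ["ipm-1", "ipm-2", "ipm-3", "ipm-1", "ipm-2", "ipm-3"]
```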
[0066] The integrated performance management 410 is configured to execute the one or more KPIs from the one or more requests within the specified time range. Further, the integrated performance management 410 checks for the one or more KPI values in the cache layer 415. If the one or more KPI values are not present in the cache layer 415, then the integrated performance management 410 is configured to transmit the one or more requests to the computation unit 230 for performing the data computation. In one embodiment, the computation unit 230 is configured to check if the one or more KPI values are present in the database 240 for the current interval. If the one or more KPI values are not present in the database 240, then the computation unit 230 is configured to fetch the precomputed data from the ML model 505.
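The tiered lookup described above (cache layer, then database, then ML model as the final fallback) can be sketched as follows. This is a minimal illustration under assumed dict-based stores and a callable stand-in for the ML model 505; none of these names come from the disclosure:

```python
def resolve_kpi(kpi, interval, cache, database, ml_model):
    """Tiered lookup: serve from the cache layer when possible, fall back
    to the database for the current interval, and finally to the ML model
    when counter data is missing (e.g. a collector hardware fault)."""
    key = (kpi, interval)
    if key in cache:
        return cache[key], "cache"
    if key in database:
        cache[key] = database[key]       # warm the cache for similar requests
        return database[key], "database"
    value = ml_model(kpi, interval)      # predicted replacement value
    cache[key] = value
    return value, "ml_model"

cache, db = {}, {("latency", "09:00"): 42.0}
model = lambda kpi, interval: 40.0       # stand-in trained model
assert resolve_kpi("latency", "09:00", cache, db, model) == (42.0, "database")
assert resolve_kpi("latency", "09:00", cache, db, model) == (42.0, "cache")
assert resolve_kpi("jitter", "09:00", cache, db, model) == (40.0, "ml_model")
```

Warming the cache on a database or model hit is what lets multiple users of the same dashboard share one computation.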
[0067] Upon fetching the precomputed data, the computation unit 230 is configured to perform the data computation pertaining to the one or more executed KPIs. In the process of data computation, the computation unit 230 is configured to identify similarities between the one or more requests received from the one or more users. The computation unit 230 is configured to compare the current execution time range of the data computation with the predefined execution time range for the data computation. Based on the comparison, the computation unit 230 is configured to generate the one or more new KPI values.
[0068] On generation of the one or more new KPI values, the cache layer 415 is configured to compile the computed data pertaining to the one or more executed KPIs. In one embodiment, the compiled computed data pertaining to the one or more executed KPIs are displayed on the dashboard of the UI 215 of the UE 110 as a report. The report displays both actual KPI values, representing historical data, and the predicted values based on the trends stored in the trained model. The report utilizes visual elements like charts, graphs, tables, and diagrams to visually represent the compiled data.
[0069] FIG. 5 is a signal flow diagram illustrating a system 120 for live monitoring of the one or more KPIs, according to one or more embodiments of the present disclosure.
[0070] At step 402, the one or more users transmit the one or more requests pertaining to live monitoring of the one or more KPIs via the UI 215, which are received by the transceiver 220. The one or more users include at least one of, but not limited to, the network operator. The one or more users select, on the UI 215, the one or more KPIs on which live monitoring is required to be performed. The received one or more requests are transmitted to the load balancer 405 for distributing network traffic across multiple servers.
[0071] At step 404, the load balancer 405 is configured to transmit the requests to an available instance of the integrated performance management 410. At step 406, the integrated performance management 410 is configured to determine whether to utilize the cache layer 415 to fetch the precomputed data associated with the identified one or more similar requests from the distributed file system 240. The precomputed data includes the generated one or more KPI values within the specified time range. The computation unit 230 allows the one or more users to request the same dashboard without repeating computations utilizing the precomputed data stored in the distributed file system 240.
[0072] At step 406, if the one or more KPI values are not present in the cache layer 415, then the one or more requests are transmitted to the computation unit 230. At step 408, in one embodiment, the computation unit 230 is configured to determine whether the one or more KPI values are present in the database 240 for the current interval. If the one or more KPI values are not present in the database 240, then the computation unit 230 is configured to fetch the precomputed data from the ML model 505.
[0073] The computation unit 230 is configured to instruct the ML model 505 to generate the one or more new KPI values based on the execution time range of the data computation. The model is trained by using the historical data that includes the stored one or more KPI values along with other relevant features to generate the new value of the one or more KPIs. Further, the computation unit 230 is configured to create the report in the ML model 505. The created report is trained and stored in the ML model 505 on a daily basis. On instruction of the ML model 505 to generate the one or more new KPI values, the computation unit 230 is configured to predict the one or more new KPIs values based on trends stored in the ML model 505. Based on the prediction, the computation unit 230 is configured to generate the one or more new KPI values.
[0074] At step 410, upon generation of the one or more new KPI values, the ML model 505 is configured to compile the generated one or more new KPI values. When the generated one or more new KPI values are compiled in the ML model 505, the ML model 505 is configured to transmit an acknowledgement of the compilation of the one or more new KPI values to the integrated performance management 410.
[0075] At step 412, the integrated performance management 410 is configured to transmit the compiled computed data to the UI 215. The compiled computed data pertaining to the one or more executed KPIs are displayed on the dashboard of the UI 215 of the UE 110 as the report. The report displays both actual KPI values, representing historical data, and the predicted values based on the trends stored in the trained model. The report utilizes visual elements like charts, graphs, tables, and diagrams to visually represent the compiled data. The UI 215 of the UE 110 displays acknowledgment of the report to the one or more users.
[0076] FIG. 6 is a flow diagram illustrating a method 600 for live monitoring of the one or more KPIs, according to one or more embodiments of the present disclosure.
[0077] At step 605, the method 600 includes the step of receiving the one or more requests pertaining to live monitoring of the one or more KPIs from the one or more users via the UI 215 by the transceiver 220. The one or more users include at least one of, but not limited to, the network operator. The one or more users select, on the UI 215, the one or more KPIs on which real-time monitoring is required to be performed.
[0078] At step 610, the method 600 includes the step of executing the one or more KPIs within the specified time range extracted from the one or more requests by the execution unit 225. The execution unit 225 further requests the computation unit 230 for data computation based on the executed one or more KPI values. In an exemplary embodiment, the specified time range is at least 15 minutes.
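Extracting and validating the specified time range from a request can be sketched as below. The request field names and ISO-format timestamps are illustrative assumptions; only the 15-minute minimum comes from the exemplary embodiment above:

```python
from datetime import datetime

MIN_RANGE_MINUTES = 15  # exemplary minimum from the description

def extract_time_range(request):
    """Extract and validate the specified time range from a monitoring
    request before the KPIs are executed over it."""
    start = datetime.fromisoformat(request["from"])
    end = datetime.fromisoformat(request["to"])
    minutes = (end - start).total_seconds() / 60
    if minutes < MIN_RANGE_MINUTES:
        raise ValueError(
            f"time range must be at least {MIN_RANGE_MINUTES} minutes")
    return start, end

start, end = extract_time_range(
    {"from": "2024-01-01T09:00", "to": "2024-01-01T09:30"})
assert (end - start).total_seconds() == 1800  # 30-minute range accepted
```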
[0079] At step 615, the method 600 includes the step of performing data computation pertaining to the one or more executed KPIs at the computation layer by the computation unit 230. The one or more requests are transmitted to the computation layer for data computation. In the process of data computation, the computation unit 230 is configured to identify similarities between the one or more requests received from the one or more users.
[0080] Further, the computation unit 230 is configured to identify the one or more similar requests based on the unique identifier associated with each dashboard of the UI 215, thereby avoiding redundant computations. Further, the computation unit 230 is configured to compare the current execution time range of the data computation with the predefined execution time range for the data computation.
[0081] Based on the comparison, in one embodiment, the computation unit 230 is configured to determine whether to utilize the caching layer to fetch the precomputed data associated with the identified one or more similar requests from the database 240. The precomputed data refers to information or results that have been calculated or generated in advance and stored for future use. The computation unit 230 allows the one or more users to request the same dashboard without repeating computations utilizing the precomputed data stored in the database 240.
[0082] Based on the comparison, in another embodiment, the computation unit 230 is configured to instruct the trained model to generate one or more new KPI values based on the execution time range of the data computation by utilizing the computation layer. The trained model includes at least the Machine Learning (ML) model. The computation unit 230 is configured to instruct the trained model to generate the one or more new KPI values.
[0083] Further, the computation unit 230 is configured to predict the one or more new KPI values based on trends stored in the trained model. When one or more new requests are received by the computation unit 230, it passes the one or more new requests to the trained model. The trained model utilizes the stored trends and patterns to predict the one or more new KPI values based on the one or more new requests. Based on the prediction, the computation unit 230 is configured to generate the one or more new KPI values.
[0084] At step 620, the method 600 includes the step of compiling the computed data pertaining to the one or more executed KPIs by the compilation unit 235. In one embodiment, the compiled computed data pertaining to the one or more executed KPIs are displayed on the dashboard of the UI 215 of the UE 110 as the report. The report displays both actual KPI values, representing historical data, and the predicted values based on the trends stored in the trained model. The report utilizes visual elements like charts, graphs, tables, and diagrams to visually represent the compiled data.
[0085] The present invention discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 205. The processor 205 is configured to receive one or more requests pertaining to live monitoring of one or more KPIs from one or more users via a User Interface (UI). The processor 205 is configured to execute the one or more KPIs within a specified time range extracted from the one or more requests. Further, the processor 205 is configured to perform data computation pertaining to the one or more executed KPIs. Further, the processor 205 is configured to compile the computed data pertaining to the one or more executed KPIs.
[0086] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0087] The present disclosure incorporates the technical advancement of optimizing live monitoring processes and failure caches. When one or more users perform live monitoring on the same dashboard, it is important to ensure smooth operation, minimize failures, and reduce computation requirements. Each user's monitoring otherwise requires separate computations, resulting in redundant efforts and increased network connections. By correlating previous data with live data during monitoring, the system provides pre-computed outputs that the one or more users can observe.
[0088] The present invention provides various advantages, including optimal resource utilization and reduced execution time. When live-monitoring of the dashboard is done by multiple users, less execution time is needed as the computation is done only once for similar dashboards executed by multiple users. Furthermore, the present invention provides improved troubleshooting capabilities through real-time fault detection and notifications. The method and system further provide enhanced service quality by identifying peak hours and potential bottlenecks, and the ability to predict performance indicators using ML model forecasting, particularly in cases of network delays. In a nutshell, the proposed invention provides an optimized approach to live monitoring, focusing on reducing computations, leveraging machine learning, and utilizing caching techniques.
[0089] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0090] Environment – 100
[0091] Network – 105
[0092] User Equipment – 110
[0093] Server – 115
[0094] System – 120
[0095] Processor -205
[0096] Memory – 210
[0097] User Interface– 215
[0098] Transceiver – 220
[0099] Execution unit – 225
[00100] Computation unit – 230
[00101] Compilation unit – 235
[00102] Database- 240
[00103] One or more primary processors – 305
[00104] Memory– 310
[00105] Load balancer- 405
[00106] Integrated Performance Management- 410
[00107] Cache layer-415
[00108] ML model-505


CLAIMS
We Claim:
1. A method (500) for live monitoring of one or more Key Performance Indicators (KPIs), the method (500) comprising the steps of:
receiving, by one or more processors (205), one or more requests pertaining to live monitoring of the one or more KPIs from one or more users via a User Interface (UI) (215);
executing, by the one or more processors (205), the one or more KPIs within a specified time range extracted from the one or more requests;
performing, by the one or more processors (205), data computation pertaining to the one or more executed KPIs; and
compiling, by the one or more processors (205), the computed data pertaining to the one or more executed KPIs.

2. The method (500) as claimed in claim 1, wherein the one or more processors (205), performs data computation pertaining to the one or more executed KPIs at a computation layer by transmitting one or more data computation requests to the computation layer for data computation.

3. The method (500) as claimed in claim 1, wherein the step of, performing, data computation pertaining to the one or more executed KPIs, includes the steps of:
identifying, by the one or more processors (205), utilizing the computation layer, similarities between the one or more requests received from the one or more users;
initiating, by the one or more processors (205), the data computation process based on the identified similarities between the one or more requests;
determining, by the one or more processors (205), execution time range pertaining to data computation process; and
determining, by the one or more processors (205), utilizing the computation layer, based on the execution time range of the data computation, whether to utilize a caching layer to fetch the pre-computed data associated with the identified one or more similar requests from the database or instruct a trained model to generate one or more new KPI values.

4. The method (500) as claimed in claim 3, wherein the step of, determining, utilizing the computation layer, based on the execution time range of the data computation, whether to utilize the caching layer to fetch the pre-computed data associated with the identified one or more similar requests from the database or instruct the trained model to generate one or more new KPI values, include the steps of:
comparing, by the one or more processors (205), a current execution time range of the data computation with a predefined execution time range for the data computation; and
in response to comparison, determining, by the one or more processors (205), whether to utilize the caching layer to fetch the pre-computed data associated with the identified one or more similar requests from the database or instruct the trained model to generate the one or more new KPI values.

5. The method (500) as claimed in claim 3, wherein the step of, instructing, the trained model to generate one or more new KPI values, includes the steps of:
predicting, by the one or more processors (205), utilizing the trained model, the one or more new KPI values based on trends stored in the trained model; and
generating, by the one or more processors (205), utilizing the trained model, the one or more new KPI values based on the prediction performed.

6. The method (500) as claimed in claim 3, wherein the one or more processors (205) identifies one or more similar requests based on a unique identifier associated with each dashboard of the user interface (215), thereby avoiding redundant computations.

7. The method (500) as claimed in claim 3, wherein the one or more processors (205), by utilizing the caching layer allows multiple users to request the same dashboard utilizing the precomputed stored data in the database (240).

8. The method (500) as claimed in claim 1, wherein the compiled computed data pertaining to the one or more executed KPIs are displayed on the dashboard of the user interface (215) of a User Equipment (UE) (110) as a report.

9. A system (120) for live monitoring of one or more Key Performance Indicators (KPIs), the system (120) comprising:
a transceiver (220), configured to, receive, one or more requests pertaining to live monitoring of the one or more KPIs from one or more users via a User Interface (UI) (215);
an execution unit (225), configured to, execute, the one or more KPIs within a specified time range extracted from the one or more requests;
a computation unit (230), configured to, perform, data computation pertaining to the one or more executed KPIs; and
a compilation unit (235), configured to, compile, the computed data pertaining to the one or more executed KPIs.

10. The system (120) as claimed in claim 9, wherein the computation unit (230), performs data computation pertaining to the one or more executed KPIs at a computation layer by transmitting one or more data computation requests to the computation layer for data computation.

11. The system (120) as claimed in claim 10, wherein the computation unit (230), performs, data computation pertaining to the one or more executed KPIs, by:
identifying, utilizing the computation layer, similarities between the one or more requests received from the one or more users;
initiating, the data computation process based on the identified similarities between the one or more requests;
determining, execution time range pertaining to data computation process; and
determining, utilizing the computation layer, based on the execution time range of the data computation, whether to utilize a caching layer to fetch the pre-computed data associated with the identified one or more similar requests from the database or instruct a trained model to generate one or more new KPI values.

12. The system (120) as claimed in claim 11, wherein the computation unit (230), determines, utilizing the computation layer, based on the execution time range of the data computation, whether to utilize the caching layer to fetch the pre-computed data associated with the identified one or more similar requests from the database or instruct a trained model to generate one or more new KPI values, by:
comparing, a current execution time range of the data computation with a predefined execution time range for the data computation; and
in response to comparison, determining, whether to utilize the caching layer to fetch the pre-computed data associated with the identified one or more similar requests from the database or instruct the trained model to generate the one or more new KPI values.

13. The system (120) as claimed in claim 12, wherein the computation unit (230) instructs, the trained model to generate the one or more new KPI values, by:
predicting, utilizing the trained model, the one or more new KPI values based on trends stored in the trained model; and
generating, utilizing the trained model, the one or more new KPI values based on the prediction performed.

14. The system (120) as claimed in claim 12, wherein the computation unit (230) identifies one or more similar requests based on a unique identifier associated with each dashboard of the user interface (215), thereby avoiding redundant computations.

15. The system (120) as claimed in claim 12, wherein the computation unit (230), by utilizing the caching layer allows multiple users to request the same dashboard utilizing the precomputed stored data in the database.

16. The system (120) as claimed in claim 10, wherein the compiled computed data pertaining to the one or more executed KPIs are displayed on a dashboard of a user interface (215) of a User Equipment (UE) (110) as a report.

17. A User Equipment (UE) (110), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the UE (110) to:
transmit, one or more requests pertaining to live monitoring on selected one or more Key Performance Indicators (KPIs) via a User Interface (UI) (215); and
wherein the one or more processors is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321048728-STATEMENT OF UNDERTAKING (FORM 3) [19-07-2023(online)].pdf 2023-07-19
2 202321048728-PROVISIONAL SPECIFICATION [19-07-2023(online)].pdf 2023-07-19
3 202321048728-FORM 1 [19-07-2023(online)].pdf 2023-07-19
4 202321048728-FIGURE OF ABSTRACT [19-07-2023(online)].pdf 2023-07-19
5 202321048728-DRAWINGS [19-07-2023(online)].pdf 2023-07-19
6 202321048728-DECLARATION OF INVENTORSHIP (FORM 5) [19-07-2023(online)].pdf 2023-07-19
7 202321048728-FORM-26 [03-10-2023(online)].pdf 2023-10-03
8 202321048728-Proof of Right [08-01-2024(online)].pdf 2024-01-08
9 202321048728-DRAWING [17-07-2024(online)].pdf 2024-07-17
10 202321048728-COMPLETE SPECIFICATION [17-07-2024(online)].pdf 2024-07-17
11 Abstract-1.jpg 2024-09-05
12 202321048728-Power of Attorney [05-11-2024(online)].pdf 2024-11-05
13 202321048728-Form 1 (Submitted on date of filing) [05-11-2024(online)].pdf 2024-11-05
14 202321048728-Covering Letter [05-11-2024(online)].pdf 2024-11-05
15 202321048728-CERTIFIED COPIES TRANSMISSION TO IB [05-11-2024(online)].pdf 2024-11-05
16 202321048728-FORM 3 [03-12-2024(online)].pdf 2024-12-03
17 202321048728-FORM 18 [20-03-2025(online)].pdf 2025-03-20