
Method And System For Processing Data In A Network

Abstract: The present disclosure relates to a system (108) and a method (600) for processing data in a network (106). The system (108) includes a transceiver unit (210) configured to receive a plurality of dashboards from a user equipment (UE) (102). The system (108) further includes an analyzer unit (212) configured to analyze a usage pattern of the plurality of dashboards for a predefined time period. The system (108) further includes a determination unit (214) configured to determine at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards. The system (108) further includes a pre-computing unit (218) configured to pre-compute utilizing the at least one or more dashboards which are determined to be functional, and thereby processing data in the network (106). Ref. Fig. 2


Patent Information

Application #
Filing Date
19 July 2023
Publication Number
04/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Inventors

1. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
2. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Jugal Kishore Kolariya
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Gaurav Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Kishan Sahu
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
6. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
7. Sunil Meena
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
8. Gourav Gurbani
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
9. Sanjana Chaudhary
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
10. Chandra Kumar Ganveer
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
11. Supriya De
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
12. Kumar Debashish
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
13. Tilala Mehul
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
14. Yogesh Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
15. Kunal Telgote
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
16. Niharika Patnam
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
17. Avinash Kushwaha
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
18. Dharmendra Kumar Vishwakarma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR PROCESSING DATA IN A NETWORK
2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to data processing and, more particularly, to a system and a method for processing data in a network.
BACKGROUND OF THE INVENTION
[0002] In computing systems, a large number of tasks such as dashboards and reports across different domains are pre-computed on a daily basis, consuming substantial resources and time. Of all these dashboards and reports, only a few are actually used. Many of the dashboards or reports are not useful but are still pre-computed daily. This increases the processing time of the computing system and also increases the load on the computing system.
[0003] Therefore, there is a need for a load efficient computing system which reduces the processing time for creation of dashboards and reports, and utilizes the computing resources efficiently, thereby reducing load on the computing system.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and system for processing data in a network.
[0005] In one aspect of the present invention, the system for processing data in the network is disclosed. The system includes a transceiver unit configured to receive a plurality of dashboards from a user equipment. The system further includes an analyzer unit configured to analyze a usage pattern of the plurality of dashboards for a predefined time period. The system further includes a determination unit configured to determine at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards. The system further includes a pre-computing unit configured to pre-compute utilizing the at least one or more dashboards which are determined to be functional, and thereby processing data in the network.
[0006] In an embodiment, the system includes a prediction unit configured to predict, via an Artificial Intelligence/Machine Learning (AI/ML) unit, whether the at least one or more dashboards from the plurality of dashboards are functional upon analysis of the usage pattern.
[0007] In an embodiment, the plurality of dashboards includes data patterns corresponding to operations of multiple components in the network.
[0008] In an embodiment, the transceiver unit is further configured to store the plurality of dashboards received from the user equipment in a data lake.
[0009] In an embodiment, the predefined time period is defined by at least a service provider.
[0010] In an embodiment, the system includes a storage unit configured to store the at least one or more functional dashboards and results achieved utilizing the at least one or more functional dashboards therein.
[0011] In another aspect of the present invention, the method of processing data in the network is disclosed. The method includes the step of receiving a plurality of dashboards from a user equipment. The method further includes the step of analyzing a usage pattern of the plurality of dashboards for a predefined time period. The method further includes the step of determining at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards. The method further includes the step of pre-computing utilizing at least one or more dashboards which are determined to be functional, and thereby processing data in the network.
[0012] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive a plurality of dashboards from a user equipment. The processor is configured to analyze a usage pattern of the plurality of dashboards for a predefined time period. The processor is configured to determine at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards. The processor is configured to pre-compute utilizing the at least one or more dashboards which are determined to be functional, and thereby processing data in the network.
[0013] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors coupled with a memory. The one or more primary processors cause the UE to transmit a plurality of dashboards to the one or more processors.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for processing data in a network, according to one or more embodiments of the present invention;
[0017] FIG. 2 is an exemplary block diagram of a system for processing data in the network, according to one or more embodiments of the present invention;
[0018] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to the one or more embodiments of the present invention;
[0019] FIG. 4 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0020] FIG. 5 is a signal flow diagram for processing data in the network, according to one or more embodiments of the present invention; and
[0021] FIG. 6 is a schematic representation of a method of processing data in the network, according to one or more embodiments of the present invention.
[0022] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0024] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0025] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0026] FIG. 1 illustrates an exemplary block diagram of an environment 100 for processing data in a network, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 102, a server 104, a network 106 and a system 108 communicably coupled to each other for processing data in the network 106.
[0027] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0028] In an embodiment, the UE 102 is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0029] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 104 may be associated with an entity, which may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides service.
[0030] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0031] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 106 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0032] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured for processing data in the network 106. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0033] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0034] FIG. 2 is an exemplary block diagram of the system 108 for processing data in the network 106, according to one or more embodiments of the present invention.
[0035] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0036] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0037] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0038] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 208 types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0039] To enable the system 108 to process data in the network 106, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a transceiver unit 210, an analyzer unit 212, a determination unit 214, a prediction unit 216, a pre-computing unit 218, and a storage unit 220 communicably coupled to each other for processing data in the network 106.
[0040] In one embodiment, the one or more modules, including but not limited to the transceiver unit 210, the analyzer unit 212, the determination unit 214, the prediction unit 216, the pre-computing unit 218, and the storage unit 220, can be used in combination or interchangeably for processing data in the network 106.
[0041] The transceiver unit 210, the analyzer unit 212, the determination unit 214, the prediction unit 216, the pre-computing unit 218, and the storage unit 220 in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0042] In one embodiment, the transceiver unit 210 is configured to receive a plurality of dashboards from the UE 102. The plurality of dashboards includes data patterns corresponding to operations of multiple components in the network 106. The plurality of dashboards refers to multiple user interfaces or visual displays that present data and information related to the operations and performance of various components within the network 106. The plurality of dashboards includes, but is not limited to, charts, graphs, tables, and other visual tools that help users monitor, analyze, and understand data patterns. The data patterns refer to the various trends, behaviors, and anomalies observed in the data collected from different parts of the network 106. The data patterns include, but are not limited to, network traffic patterns, performance metrics, component-specific data, security patterns, user activity, and Quality of Service (QoS).
[0043] Upon receiving the plurality of dashboards, the transceiver unit 210 is further configured to store the plurality of dashboards received from the UE 102 in a data lake. The data lake is a centralized repository designed to store, process, and secure large amounts of structured, semi-structured, and unstructured data. In particular, the data lake stores all the details of the plurality of dashboards.
[0044] Subsequently, the analyzer unit 212 is configured to analyze a usage pattern of the plurality of dashboards for a predefined time period. The predefined time period is defined by at least a service provider. The usage pattern of the plurality of dashboards includes, but is not limited to, access frequency such as login counts, session durations, etc.; user interactions such as click rates, navigation paths, etc.; feature utilization such as popular features, underutilized features, etc.; data queries such as common queries, query frequency, etc.; temporal patterns such as peak usage times, idle periods, etc.; and customization and preferences such as user settings, saved reports, etc. The predefined time period refers to a specific duration of time that is determined and set in advance by the service provider for the purpose of monitoring, analyzing, or reporting data. The predefined time period could be at least one of hours, days, weeks, months, or any other relevant interval.
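By way of illustration only, and not as part of the claimed specification, the access-frequency portion of such a usage-pattern analysis can be sketched as follows; the access-log format and the 30-day window are assumptions chosen for the example:

```python
from collections import Counter
from datetime import datetime, timedelta

def usage_counts(access_log, window_days=30, now=None):
    """Count how often each dashboard was accessed within the
    predefined time period (here, an assumed 30-day window).

    access_log: iterable of (dashboard_id, access_time) tuples.
    Returns a Counter mapping dashboard_id -> accesses in the window.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=window_days)
    return Counter(dash_id for dash_id, ts in access_log if ts >= cutoff)
```

The same aggregation extends naturally to the other usage signals (session durations, query frequency, peak times) by keying the counter on the relevant event type.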
[0045] Upon analyzing the usage pattern of the plurality of dashboards, the determination unit 214 is configured to determine the at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards. In an embodiment, the prediction unit 216 is configured to predict, via an Artificial Intelligence/Machine Learning (AI/ML) unit, whether the at least one or more dashboards from the plurality of dashboards are functional upon analysis of the usage pattern.
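For illustration, the determination step can be approximated by a simple recency-weighted scoring rule; this is a hypothetical stand-in for the AI/ML prediction described in the specification, not the claimed model:

```python
def predict_functional(daily_counts, threshold=1.0):
    """Score each dashboard by a recency-weighted average of its daily
    access counts (most recent day weighted highest) and classify it
    as functional when the score meets the threshold.

    daily_counts: dict mapping dashboard_id -> list of daily access
    counts, ordered oldest to newest.
    """
    functional = set()
    for dash_id, counts in daily_counts.items():
        weights = range(1, len(counts) + 1)  # newer days weigh more
        score = sum(w * c for w, c in zip(weights, counts)) / sum(weights)
        if score >= threshold:
            functional.add(dash_id)
    return functional
```

A deployed AI/ML unit could replace this heuristic with any classifier trained on the same per-dashboard usage features.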
[0046] Upon predicting whether the at least one or more dashboards from the plurality of dashboards are functional or not, the pre-computing unit 218 is configured to pre-compute utilizing the at least one or more dashboards which are determined to be functional.
[0047] In an embodiment, the pre-computing unit 218 uses the data and the usage patterns of the functional dashboard for calculating or processing tasks ahead of time. In particular, the pre-computing unit 218 processes data before it is actually needed in real-time operations. For example, if a user wants the data of a dashboard by 2:00 PM, the pre-computing unit 218 pre-computes the data of the dashboard by utilizing the data of the dashboard from the previous day. The pre-computing unit 218 pre-computes based on the historical data and the usage patterns of the dashboards.
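The pre-computation itself amounts to running each functional dashboard's build job ahead of the time it is needed and caching the result; a minimal sketch, with the build function and cache assumed for illustration:

```python
def precompute(functional_ids, build_dashboard, cache):
    """Run the build job only for dashboards determined to be
    functional, storing each result so it is ready before it is
    requested. Non-functional dashboards are skipped entirely,
    which is the source of the resource savings."""
    for dash_id in functional_ids:
        cache[dash_id] = build_dashboard(dash_id)
    return cache
```

In practice this loop would be scheduled from historical usage so that results are ready before their expected request times (e.g., before the 2:00 PM request in the example above).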
[0048] Upon pre-computing the at least one or more dashboards which are determined to be functional, the storage unit 220 is configured to store the at least one or more functional dashboards and results achieved utilizing the at least one or more functional dashboards therein.
[0049] Therefore, the system 108 saves time and resources as there is a decrease in load on the system 108. Further, by leveraging the forecasting capabilities of the AI/ML unit, the system 108 forecasts non-functional dashboards/reports and proactively prevents the execution of those jobs that are either unused or predicted to be unused in the near future.
[0050] FIG. 3 describes a preferred embodiment of the system 108 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 102a and the system 108 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0051] As mentioned earlier in FIG. 1, each of the first UE 102a, the second UE 102b, and the third UE 102c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 102a without deviating from or limiting the scope of the present disclosure. The first UE 102a includes one or more primary processors 302 communicably coupled to the one or more processors 202 of the system 108.
[0052] The one or more primary processors 302 are coupled with a memory 304 storing instructions which are executed by the one or more primary processors 302. Execution of the stored instructions by the one or more primary processors 302 enables the first UE 102a to transmit the plurality of dashboards to the one or more processors 202.
[0053] As mentioned earlier in FIG. 2, the one or more processors 202 of the system 108 is configured for processing data in the network 106. As per the illustrated embodiment, the system 108 includes the one or more processors 202, the memory 204, the user interface 206, and the database 208. The operations and functions of the one or more processors 202, the memory 204, the user interface 206, and the database 208 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0054] Further, the processor 202 includes the transceiver unit 210, the analyzer unit 212, the determination unit 214, the prediction unit 216, the pre-computing unit 218, and the storage unit 220. The operations and functions of the transceiver unit 210, the analyzer unit 212, the determination unit 214, the prediction unit 216, the pre-computing unit 218, and the storage unit 220 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description as provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0055] FIG. 4 is an exemplary block diagram of an architecture 400 of the system 108 for processing data in the network 106, according to one or more embodiments of the present invention.
[0056] The architecture 400 includes a Graphical User Interface (GUI) 402 operated by a user, a distributed data lake 404, dashboards 406, a computation engine 408, which further includes an AI/ML predictor 410, a pre-computation engine 414, a distributed file system 416, and a distributed computation cluster 418, which further includes a compute master 420 and worker 1, worker 2, ..., worker n.
[0057] The plurality of dashboards is received from the user via the GUI 402. Upon receiving the plurality of dashboards, the received plurality of dashboards is stored in the distributed data lake 404.
[0058] Further, all the dashboards 406 which are stored in the distributed data lake 404 are transmitted to the computation engine 408. The computation engine 408 includes the AI/ML predictor 410 for prediction of dashboards by analyzing the previous usage pattern for the predefined time period (e.g., the usage pattern for the past 15-30 days is analyzed). Further, the AI/ML predictor 410 predicts which dashboards are functional or will be used in the near future.
[0059] Subsequently, the predicted functional dashboards are transmitted to the pre-computation engine 414 for pre-computing the dashboards. Further, for pre-computing the dashboards, the pre-computation engine 414 transmits the functional dashboards to the distributed computation cluster 418. The distributed computation cluster 418 includes the compute master 420, which computes the functional dashboards with the help of the plurality of workers such as worker 1, worker 2, ..., worker n. The compute master 420 orchestrates the work among the plurality of workers, assigning tasks to each worker and ensuring that the computational load is balanced and tasks are completed efficiently. Each worker in the distributed computation cluster performs specific tasks assigned by the compute master 420. The specific tasks include, but are not limited to, processing data, running AI/ML models, generating dashboards, and performing pre-computation tasks. Upon pre-computing the functional dashboards, the results are stored in the distributed file system 416 for future access. In an embodiment, on a user request via the GUI 402, the results stored in the distributed file system 416 are accessed directly.
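The master/worker fan-out described above can be sketched, for illustration, with a thread pool standing in for the distributed cluster; the function names and worker count are assumptions, not part of the specification:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_dashboards(functional_ids, worker_fn, n_workers=4):
    """Fan pre-computation tasks out across a pool of workers and
    collect the results, mirroring how the compute master assigns
    tasks to worker 1..worker n and balances the load."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # pool.map preserves input order, so results pair up with ids
        results = list(pool.map(worker_fn, functional_ids))
    return dict(zip(functional_ids, results))
```

In a real cluster the pool would be replaced by remote workers, but the master's responsibilities (task assignment, result collection) are the same.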
[0060] FIG. 5 is a signal flow diagram for processing data in the network 106, according to one or more embodiments of the present invention.
[0061] At step 502, the plurality of dashboards is received from the user. The plurality of dashboards includes data patterns corresponding to operations of multiple components in the network 106.
[0062] At step 504, upon receiving the plurality of dashboards from the user, the received plurality of dashboards is stored at the data lake.
[0063] At step 506, upon storing the plurality of dashboards, the AI/ML predictor predicts the at least one or more dashboards from the plurality of dashboards by analyzing the usage pattern of the plurality of dashboards for the predefined time period. Further, upon analysis of the usage pattern of the plurality of dashboards, the AI/ML predictor predicts if the at least one or more dashboards from the plurality of dashboards are functional or not.
[0064] At step 508, upon predicting the one or more functional dashboards from the plurality of dashboards, the computation engine 408 pre-computes the one or more functional dashboards from the plurality of dashboards.
[0065] At step 510, subsequently, the results achieved by the pre-computing of the one or more functional dashboards from the plurality of dashboards are stored in the distributed file system.
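The signal flow of steps 502-510 can be summarized in a short end-to-end sketch. The predictor and compute functions are injected as parameters, so only the data flow of the figure is shown; every name below is an assumption for illustration, not part of the disclosed implementation.

```python
# Hedged sketch of the FIG. 5 signal flow: receive dashboards (502), store
# them in the data lake (504), predict the functional subset (506),
# pre-compute only that subset (508), and cache results for later access (510).

def process_dashboards(dashboards, predict_fn, compute_fn):
    data_lake = dict(dashboards)            # step 504: store received dashboards
    functional = predict_fn(data_lake)      # step 506: AI/ML prediction
    distributed_fs = {}                     # step 510: result store
    for dash_id in functional:              # step 508: pre-compute functional only
        distributed_fs[dash_id] = compute_fn(data_lake[dash_id])
    return distributed_fs
```

Because non-functional dashboards never reach the compute step, subsequent user requests for a functional dashboard can be served directly from the cached results.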
[0066] FIG. 6 is a flow diagram of a method 600 for processing data in the network 106, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0067] At step 602, the method 600 includes the step of receiving the plurality of dashboards from the UE 102 by the transceiver unit 210. The plurality of dashboards includes data patterns corresponding to operations of multiple components in the network 106.
[0068] At step 604, the method 600 includes the step of analyzing the usage pattern of the plurality of dashboards for the pre-defined time period by the analyzer unit 212.
[0069] At step 606, the method 600 includes the step of determining at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards by the determination unit 214. In an embodiment, the prediction unit 216 via the AI/ML unit predicts if the at least one or more dashboards from the plurality of dashboards are functional or not.
[0070] At step 608, the method 600 includes the step of pre-computing, by the pre-computing unit 218, utilizing the at least one or more dashboards which are determined to be functional. Further, the at least one or more functional dashboards and the results achieved by utilizing the at least one or more dashboards are stored in the storage unit 220.
[0071] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive the plurality of dashboards from the user equipment. The processor 202 is further configured to analyze the usage pattern of the plurality of dashboards for the predefined time period. The processor 202 is further configured to determine at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards. The processor 202 is further configured to pre-compute utilizing the at least one or more dashboards which are determined to be functional, and thereby processing data in the network 106.
[0072] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in the description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0073] The present disclosure incorporates the technical advancement of saving time and resources and decreasing the load on the cluster. Further, by leveraging the forecasting capabilities of AI/ML, the non-functional dashboards are predicted, and the execution of tasks that are either unused or predicted to be unused in the near future is prevented. Thus, the present disclosure reduces processing time, reduces the load on the cluster, and improves resource utilization.
[0074] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.


REFERENCE NUMERALS

[0075] Environment- 100
[0076] User Equipment (UE)- 102
[0077] Server- 104
[0078] Network- 106
[0079] System -108
[0080] Processor- 202
[0081] Memory- 204
[0082] User Interface- 206
[0083] Database- 208
[0084] Transceiver Unit- 210
[0085] Analyzer Unit- 212
[0086] Determination unit- 214
[0087] Prediction Unit- 216
[0088] Pre-computing Unit- 218
[0089] Storage Unit- 220
[0090] GUI- 402
[0091] Distributed data lake- 404
[0092] Dashboards- 406
[0093] Computation Engine- 408
[0094] AI/ML predictor- 410
[0095] Pre-computation engine- 414
[0096] Distributed file system- 416
[0097] Distributed computation cluster- 418
[0098] Compute master- 420
CLAIMS
We Claim:
1. A method (600) of processing data in a network (106), the method (600) comprising the steps of:
receiving, by one or more processors (202), a plurality of dashboards from a user equipment;
analysing, by the one or more processors (202), a usage pattern of the plurality of dashboards for a predefined time period;
determining, by the one or more processors (202), at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards; and
pre-computing, by the one or more processors (202), utilizing the at least one or more dashboards which are determined to be functional, and thereby processing data in the network.

2. The method (600) as claimed in claim 1, wherein upon analysis, the method comprises the step of predicting, by the one or more processors (202), via an Artificial Intelligence/ Machine Learning (AI/ML) unit if the at least one or more dashboards from the plurality of dashboards are functional or not.

3. The method (600) as claimed in claim 1, wherein the plurality of dashboards includes data patterns corresponding to operations of multiple components in the network.

4. The method (600) as claimed in claim 1, wherein the method (600) further comprises the step of storing, by the one or more processors (202), the plurality of dashboards received from the user equipment (UE) (102) in a data lake.
5. The method (600) as claimed in claim 1, wherein the predefined time period is defined by at least a service provider.

6. The method (600) as claimed in claim 1, further comprising the step of storing, by the one or more processors (202), the at least one or more functional dashboards and results achieved utilizing the at least one or more functional dashboards in a storage unit (220).

7. A system (108) for processing data in a network (106), the system (108) comprising:
a transceiver unit (210) configured to receive a plurality of dashboards from a user equipment (UE)(102);
an analyser unit (212) configured to analyse a usage pattern of the plurality of dashboards for a predefined time period;
a determination unit (214) configured to determine at least one or more dashboards from the plurality of dashboards which are functional based on the analysis of the usage pattern of the plurality of dashboards; and
a pre-computing unit (218) configured to pre-compute utilizing the at least one or more dashboards which are determined to be functional, and thereby processing data in the network (106).

8. The system (108) as claimed in claim 7, comprising a prediction unit (216) configured to predict, via an Artificial Intelligence/ Machine Learning (AI/ML) unit if the at least one or more dashboards from the plurality of dashboards are functional or not upon analysis of the usage pattern.

9. The system (108) as claimed in claim 7, wherein the plurality of dashboards includes data patterns corresponding to operations of multiple components in the network (106).

10. The system (108) as claimed in claim 7, wherein the transceiver unit (210) is further configured to store the plurality of dashboards received from the user equipment in a data lake.

11. The system (108) as claimed in claim 7, wherein the predefined time period is defined by at least a service provider.

12. The system (108) as claimed in claim 7, further comprising a storage unit (220) configured to store the at least one or more functional dashboards and results achieved utilizing the at least one or more functional dashboards therein.

13. A User Equipment (UE) (102), comprising:
one or more primary processors (302) communicatively coupled to one or more processors (202), the one or more primary processors (302) coupled with a memory (304), wherein said memory (304) stores instructions which when executed by the one or more primary processors (302) causes the UE (102) to:
transmit a plurality of dashboards to the one or more processors (202);
wherein the one or more processors (202) are configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321048715-STATEMENT OF UNDERTAKING (FORM 3) [19-07-2023(online)].pdf 2023-07-19
2 202321048715-PROVISIONAL SPECIFICATION [19-07-2023(online)].pdf 2023-07-19
3 202321048715-FORM 1 [19-07-2023(online)].pdf 2023-07-19
4 202321048715-FIGURE OF ABSTRACT [19-07-2023(online)].pdf 2023-07-19
5 202321048715-DRAWINGS [19-07-2023(online)].pdf 2023-07-19
6 202321048715-DECLARATION OF INVENTORSHIP (FORM 5) [19-07-2023(online)].pdf 2023-07-19
7 202321048715-FORM-26 [03-10-2023(online)].pdf 2023-10-03
8 202321048715-Proof of Right [08-01-2024(online)].pdf 2024-01-08
9 202321048715-DRAWING [17-07-2024(online)].pdf 2024-07-17
10 202321048715-COMPLETE SPECIFICATION [17-07-2024(online)].pdf 2024-07-17
11 Abstract-1.jpg 2024-09-05
12 202321048715-Power of Attorney [05-11-2024(online)].pdf 2024-11-05
13 202321048715-Form 1 (Submitted on date of filing) [05-11-2024(online)].pdf 2024-11-05
14 202321048715-Covering Letter [05-11-2024(online)].pdf 2024-11-05
15 202321048715-CERTIFIED COPIES TRANSMISSION TO IB [05-11-2024(online)].pdf 2024-11-05
16 202321048715-FORM 3 [03-12-2024(online)].pdf 2024-12-03
17 202321048715-FORM 18 [20-03-2025(online)].pdf 2025-03-20