Abstract: METHOD AND SYSTEM FOR PROACTIVE PRECOMPUTATION OF NETWORK PERFORMANCE DATA. The present disclosure relates to a method (500) and system (108) for proactive precomputation of network performance data. The method (500) includes analysing, using a machine learning algorithm, a first user query and usage patterns corresponding to results of the first user query to identify interests of a user and related network performance data for the first user query. Further, the method (500) includes determining data among the related network performance data to be precomputed using a proactive pre-computation algorithm. Further, the method (500) includes executing the proactive pre-computation algorithm to pre-compute the determined data among the network performance data. Further, the method (500) includes storing the pre-computed data in a storage unit (222). Ref. FIG. 5
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR PROACTIVE PRECOMPUTATION OF NETWORK PERFORMANCE DATA
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present disclosure relates generally to data processing systems and more specifically to data computation.
BACKGROUND OF THE INVENTION
[0002] In computing systems, time and resources must be managed efficiently. Most conventional computing systems face the problem of resource unavailability. If a request involves a huge amount of data, more resources are required to execute it; however, if another request is already executing at the same time, adequate resources may not be available to process the new request.
[0003] Further, the conventional computing systems also face the problem of time inefficient user request computation. In a conventional computing system, users would need to wait for the system to process their request and retrieve the data from underlying sources. This is called delayed data retrieval. This delay can be significant, especially when dealing with large datasets, complex computations or resource unavailability.
[0004] Therefore, there is a need for an efficient computing system which reduces delay in processing and retrieving data.
SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a system and a method for proactive precomputation of network performance data.
[0006] In one aspect of the present invention, a method for proactive precomputation of network performance data is disclosed. The method includes analysing, by one or more processors, using a machine learning algorithm, a first user query and usage patterns corresponding to results of the first user query to identify interests of a user and related network performance data for the first user query. Further, the method includes determining, by the one or more processors, data among the related network performance data to be precomputed using a proactive pre-computation algorithm. Further, the method includes executing, by the one or more processors, the proactive pre-computation algorithm to pre-compute the determined data among the related network performance data. Further, the method includes storing, by the one or more processors, the pre-computed data in a storage unit.
[0007] In an embodiment, the method further includes receiving, by the one or more processors, a second user query from the user through a user interface, wherein the second user query comprises a request for the pre-computed data associated with the network performance.
[0008] In an embodiment, the method further includes enabling, by the one or more processors, the user to select a pre-defined template, a dynamically populated dashboard, or dynamically selected attributes associated with the network performance data.
[0009] In an embodiment, the method further includes providing, by the one or more processors, real-time access to information from the storage unit upon receiving the second user query related to the precomputed data.
[0010] In another aspect of the present invention, a system for proactive precomputation of network performance data is disclosed. The system includes an analysing unit, a determining unit, an executing unit and a storage unit. The analysing unit is configured to analyse, using a machine learning algorithm, a first user query and usage patterns corresponding to results of the first user query to identify interests of a user and related network performance data for the first user query. The determining unit is configured to determine data among the related network performance data to be precomputed using a proactive pre-computation algorithm. The executing unit is configured to execute the proactive pre-computation algorithm to pre-compute the determined data among the related network performance data. The storage unit is configured to store the pre-computed data.
[0011] In another aspect of the present invention, a non-transitory computer-readable medium is disclosed, having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to: analyse, using a machine learning algorithm, a first user query and usage patterns corresponding to results of the first user query to identify interests of a user and related network performance data for the first user query; determine data among the related network performance data to be precomputed using a proactive pre-computation algorithm; execute the proactive pre-computation algorithm to pre-compute the determined data among the related network performance data; and store the pre-computed data in a storage unit.
[0012] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0014] FIG. 1 is an exemplary block diagram of an environment for proactive precomputation of network performance data, according to various embodiments of the present disclosure;
[0015] FIG. 2 is a block diagram of a system of FIG. 1, according to various embodiments of the present disclosure;
[0016] FIG. 3 is an example schematic representation of the system of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present system;
[0017] FIG. 4 illustrates an example schematic block diagram of a computing system, according to various embodiments of the present system;
[0018] FIG. 5 shows a sequence flow diagram illustrating a method for proactive precomputation of the network performance data, according to various embodiments of the present disclosure; and
[0019] FIG. 6 is an exemplary signal flow diagram for the proactive precomputation of the network performance data, according to various embodiments of the present disclosure.
[0020] Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] FIG. 1 illustrates an exemplary block diagram of an environment (100) for proactive precomputation of network performance data, according to various embodiments of the present disclosure. The environment (100) comprises a plurality of user equipment (UEs) 102-1, 102-2, …, 102-n. At least one UE (102-n) from the plurality of UEs (102-1, 102-2, …, 102-n) is configured to connect to a system (108) via a communication network (106). Hereafter, the plurality of UEs, or the one or more UEs, are collectively labelled (102).
[0026] In accordance with yet another aspect of the exemplary embodiment, the plurality of UEs (102), hereinafter referred to as the UE (102), may be a wireless device or a communication device that may be a part of the system (108). The UE (102) may be at least one of, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication or VoIP capabilities. In an embodiment, the UE is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0027] In one embodiment, the UE (102) may access the system (108) via the communication network (106).
[0028] The communication network (106) includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The communication network (106) may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0029] The communication network (106) may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, or process one or more messages, packets, signals, waves, or voltage or current levels, or some combination thereof. The communication network (106) may further carry Voice over Internet Protocol (VoIP) traffic.
[0030] The environment (100) includes the server (104) accessible via the communication network (106).
[0031] The server (104) may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system (108), a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server (104) may be operated by an entity including, but not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides services.
[0032] The environment (100) further includes the system (108) communicably coupled to the server (104) and the UE (102) via the communication network (106). The system (108) may be embedded within the server (104) or deployed as an individual entity. However, for the purpose of description, the system (108) is illustrated as remotely coupled with the server (104), without deviating from the scope of the present disclosure.
[0033] Operational and construction features of the system (108) will be explained in detail with respect to the following figures.
[0034] Referring to FIG. 2, FIG. 2 illustrates an exemplary block diagram of the system (108) for the proactive precomputation of the network performance data, according to one or more embodiments of the present invention.
[0035] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), a user interface (206), a display (208), an input unit (210), and a centralized database (or database) (214). The one or more processors (202), hereinafter referred to as the processor (202), may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system (108) includes one processor. However, it is to be noted that the system (108) may include multiple processors as per the requirement, without deviating from the scope of the present disclosure.
[0036] The information related to the request may be provided or stored in the memory (204) of the system (108). Among other capabilities, the processor (202) is configured to fetch and execute computer-readable instructions stored in the memory (204). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0037] In an embodiment, the user interface (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as input/output (I/O) devices, storage devices, and the like. The user interface (206) may facilitate communication for the system (108). The user interface (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing unit/engine(s) and a database. The processing unit/engine(s) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s).
[0038] The information related to the requests may further be configured to render on the user interface (206). The user interface (206) may include functionality similar to at least a portion of functionality implemented by one or more computer system interfaces such as those described herein and/or generally known to one having ordinary skill in the art. The user interface (206) may be rendered on the display (208), implemented using Liquid Crystal Display (LCD) display technology, Organic Light-Emitting Diode (OLED) display technology, and/or other types of conventional display technology. The display (208) may be integrated within the system (108) or connected externally. Further, the input unit (210) may include, but is not limited to, a keyboard, buttons, scroll wheels, cursors, touchscreen sensors, audio command interfaces, a magnetic strip reader, an optical scanner, etc.
[0039] The centralized database (214) is communicably connected to the processor (202) and the memory (204). The centralized database (214) is configured to store and retrieve requests pertaining to features, services, or data retrieval of the system (108), as well as access rights, attributes, an approved list, and authentication data provided by an administrator. Further, the server (104) may allow the system (108) to update/create/delete one or more parameters of the information related to the request, which provides flexibility to roll out multiple variants of the request as per business needs. In another embodiment, the centralized database (214) may be outside the system (108) and communicate through a wired or wireless medium.
[0040] Further, the processor (202), in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor (202). In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by an electronic circuitry.
[0041] In order for the system (108) to handle the proactive precomputation of the network performance data, the processor (202) includes an analysing unit (216), a determining unit (218), an executing unit (220), a storage unit (222), a receiving unit (224) and a selecting unit (226). The analysing unit (216), the determining unit (218), the executing unit (220), the storage unit (222), the receiving unit (224) and the selecting unit (226) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor (202). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor (202) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory (204) may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system (108) may comprise the memory (204) storing the instructions and the processing resource to execute the instructions, or the memory (204) may be separate but accessible to the system (108) and the processing resource. In other examples, the processor (202) may be implemented by the electronic circuitry.
[0042] Initially, the user is adapted to provide a first user query via at least one of, but not limited to, the UE (102), the user interface (206), and the input unit (210). In one embodiment, the user is one of, but not limited to, a network operator and a subscriber. The first user query corresponds to the network performance data of at least one of the user and a plurality of subscribers. The first user query pertains to one or more metrics including, but not limited to, latency, bandwidth, packet loss, or other network performance indicators. The usage patterns include, but are not limited to, the time of day at which the user makes queries, the specific metrics the user repeatedly checks, and any patterns in the user's data requests. In one embodiment, the selecting unit (226) enables the user to select a pre-defined template, a dynamically populated dashboard, or dynamically selected attributes associated with the network performance data.
[0043] Upon receipt of the first user query, the analysing unit (216) is configured to perform analysis of the first user query. In this regard, the analysing unit (216) utilizes one or more machine learning algorithms to identify usage patterns. The usage patterns correspond to results of the first user query. In one embodiment, the analysing unit (216) is configured to identify interests of the user and related network performance data based on the first user query.
[0044] In an embodiment, the analysing unit (216) analyses the first user query and the subsequent usage patterns using a machine learning algorithm to identify which aspects of network performance the user is interested in. For example, if the user frequently queries network latency during peak hours, the analysing unit (216) will identify latency as a user interest. In this regard, the machine learning algorithm advantageously learns continuously from the user interactions. Examples of machine learning algorithms include, but are not limited to, supervised learning (such as linear regression, logistic regression, decision trees, Support Vector Machines (SVM), etc.) and unsupervised learning (such as K-Means clustering, hierarchical clustering, Principal Component Analysis (PCA), association rules, etc.).
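Purely for illustration, the interest-identification step described above may be sketched as follows; the disclosure does not fix a particular algorithm, so the query-log structure, the metric names, and the frequency threshold here are assumptions of this sketch, not part of the specification.

```python
from collections import Counter

def identify_interests(query_log, min_share=0.2):
    """Rank network-performance metrics by how often the user queries them.

    query_log: list of (metric, hour_of_day) tuples extracted from past
    user queries (the hour field stands in for time-of-day usage patterns).
    Returns metrics whose share of all queries meets min_share, most
    frequent first -- a stand-in for the ML-based interest identification.
    """
    counts = Counter(metric for metric, _ in query_log)
    total = sum(counts.values())
    return [m for m, c in counts.most_common() if c / total >= min_share]

# A user who repeatedly checks latency during evening peak hours:
log = [("latency", 18), ("latency", 19), ("packet_loss", 10),
       ("latency", 20), ("bandwidth", 9)]
print(identify_interests(log))  # latency dominates the log
```

In practice the frequency counter would be replaced by one of the supervised or unsupervised algorithms named in the paragraph above; the sketch only shows where interest identification sits in the flow.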
[0045] Further, in one embodiment, the user interest includes, but is not limited to, latency, bandwidth utilization, throughput, packet loss, jitter, error rates, network availability, Quality of Service (QoS), traffic patterns, and so on. The user interests are used in at least one of proactive precomputation, optimized network management, enhanced user interface and experience, advanced analytics and predictive modeling, Service Level Agreement (SLA) monitoring, network planning and capacity management, and security monitoring and response.
[0046] The related network performance data refers to a set of metrics and information connected to a specific aspect of network performance the user is interested in. The related network performance data includes, but is not limited to, latency-related data, bandwidth and throughput-related data, packet loss-related data, jitter-related data, network availability data, Quality of Service (QoS) data, and security-related data.
[0047] Subsequent to the analysing of the first user query, the determining unit (218) is configured to receive information pertaining to the identified user interests and the corresponding identified related network performance data from the analysing unit (216). Based on the identified user interests and the identified related network performance data, the determining unit (218) is configured to determine which data among the related network performance data is to be precomputed. More specifically, the data is determined based on the usage pattern of the user and/or based on the interests of the user.
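The determining step above can be sketched as a simple ranking over candidate datasets; the scoring weights, field names, and candidate format below are illustrative assumptions only, since the disclosure leaves the determination logic open.

```python
def select_for_precompute(candidates, interests, top_k=2):
    """Pick the related network-performance datasets worth precomputing.

    candidates: list of dicts with 'metric' and 'query_count' keys, one per
    related dataset. A candidate scores higher when its metric matches an
    identified user interest and when the user has queried it more often,
    mirroring determination by interests and/or usage patterns.
    """
    def score(c):
        interest_boost = 2.0 if c["metric"] in interests else 1.0
        return interest_boost * c["query_count"]
    return sorted(candidates, key=score, reverse=True)[:top_k]

cands = [{"metric": "latency", "query_count": 12},
         {"metric": "jitter", "query_count": 9},
         {"metric": "bandwidth", "query_count": 3}]
print(select_for_precompute(cands, interests={"latency"}))
```

The top-k cutoff models the resource-availability concern from the background section: only the most valuable datasets consume precomputation resources.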
[0048] On determination of the data among the related network performance data, the executing unit (220) receives the determined data from the determining unit (218). On receipt of the determined data, the executing unit (220) is configured to execute a proactive pre-computation algorithm to pre-compute the determined data among the related network performance data. The proactive pre-computation algorithm is a type of algorithm designed to anticipate the future needs of users by analyzing past behavior, patterns, and trends to pre-compute and store relevant data in advance. The key characteristics of the proactive pre-computation algorithm include, but are not limited to, predictive analysis, data pre-processing, efficiency optimization, and adaptive learning.
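A minimal sketch of the execution step, under stated assumptions: compute_fn stands in for the (unspecified) heavy aggregation over raw network data, and an in-memory dict stands in for the storage unit (222); neither name comes from the disclosure.

```python
def precompute(selected, compute_fn, store):
    """Run the expensive computation ahead of demand and cache the result.

    selected: the datasets chosen by the determining step, as dicts with a
    'metric' key. Already-cached metrics are skipped, reflecting the
    algorithm's efficiency-optimization and adaptive characteristics.
    """
    for item in selected:
        key = item["metric"]
        if key not in store:               # skip work that is already cached
            store[key] = compute_fn(item)  # heavy aggregation done up front
    return store

store = {}
precompute([{"metric": "latency"}], lambda item: {"p95_ms": 42}, store)
print(store["latency"])
```

Calling precompute again with the same selection performs no recomputation, which is the property that frees resources for concurrently executing requests.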
[0049] Upon executing the proactive pre-computation algorithm on the determined data, the executing unit (220) transmits the precomputed data to the storage unit (222). Consequently, the storage unit (222) stores the pre-computed data associated with the network performance. The storage unit (222) may be implemented as a distributed file system (408). The distributed file system (408) is a storage system that makes the precomputed data accessible across multiple servers, allowing for high availability and scalability so that the data can be quickly retrieved when needed.
[0050] In one embodiment, the user is adapted to provide a second user query via at least one of the user interface (206), the UE (102), and the input unit (210). Accordingly, the receiving unit (224) is configured to receive the second user query. The second user query includes a request for the pre-computed data associated with the network performance. Upon receiving the second user query related to the precomputed data, the determining unit (218) provides real-time access to the information associated with the precomputed data requested in the second user query from the storage unit (222). In one embodiment, the determining unit (218) is configured to retrieve the precomputed data requested in the second user query from the storage unit (222).
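The retrieval path above can be sketched as a cache lookup with an on-demand fallback; compute_fn and the store layout are illustrative assumptions, and the fallback branch represents the delayed-data-retrieval path that the precomputation is designed to avoid.

```python
def answer_query(metric, store, compute_fn):
    """Serve a second user query from the precomputed store when possible.

    Returns (result, served_from_cache). A cache hit gives (near) real-time
    access; a miss falls back to computing on demand, i.e. the slow path
    described in the background section.
    """
    hit = metric in store
    if not hit:
        store[metric] = compute_fn(metric)  # delayed retrieval fallback
    return store[metric], hit

store = {"latency": {"p95_ms": 42}}       # populated by prior precomputation
result, served_from_cache = answer_query("latency", store, lambda m: {})
print(served_from_cache)  # True: no recomputation needed
```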
[0051] FIG. 3 is an example schematic representation of the system (300) of FIG. 1 in which the operations of various entities are explained, according to various embodiments of the present system. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE (102-1) and the system (108) for the purpose of description and illustration, and should not be construed as limiting the scope of the present disclosure.
[0052] As mentioned earlier, the first UE (102-1) includes one or more primary processors (305) communicably coupled to the one or more processors (202) of the system (108). The one or more primary processors (305) are coupled with a memory (310) storing instructions which are executed by the one or more primary processors (305). Execution of the stored instructions by the one or more primary processors (305) enables the UE (102-1) to execute the requests in the communication network (106).
[0053] As mentioned earlier, the one or more processors (202) are configured to transmit a response content related to the request to the UE (102-1). More specifically, the one or more processors (202) of the system (108) are configured to transmit the response content to at least the UE (102-1). A kernel (315) is a core component serving as the primary interface between hardware components of the UE (102-1) and the system (108). The kernel (315) is configured to provide the plurality of response contents hosted on the system (108) to access resources available in the communication network (106). The resources include at least one of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0054] As per the illustrated embodiment, the system (108) includes the one or more processors (202), the memory (204), the user interface (206), the display (208), and the input unit (210). The operations and functions of these components are already explained with reference to FIG. 2 and, for the sake of brevity, are not repeated here. Further, the processor (202) includes the analysing unit (216), the determining unit (218), the executing unit (220), the storage unit (222), the receiving unit (224) and the selecting unit (226), whose operations and functions are likewise already explained with reference to FIG. 2 and are not repeated here.
[0055] FIG. 4 illustrates a schematic block diagram of a computing system (400), according to various embodiments of the present system. In the computing system (400), the user provides data by way of the user interface (e.g., a GUI or the like) (206). The computing system (400) includes a distributed data lake (406), a distributed computation master cluster (404), a distributed file system (408), and a template/dashboard/report unit (410). The distributed computation master cluster (404) includes a Machine Learning (ML) prediction unit, a dashboard, and a pre-computation unit.
[0056] The user submits a first user query for network performance data through the user interface (206), for example the GUI. Upon receiving the request related to the first user query, the user may select the predefined template, the dynamically populated dashboard, or the dynamically selected attribute associated with the network performance from the template/dashboard/report unit (410).
[0057] Upon selection of the predefined template, the dynamically populated dashboard, or the dynamically selected attribute associated with the network performance, the distributed computation master cluster (404) analyzes, determines and executes the pre-computation of the network performance data with the help of the distributed data processing orchestrator (402). The distributed data processing orchestrator (402) fetches the data related to the first user query from the distributed data lake (406). The distributed data lake (406) is a centralized repository that stores raw and processed data.
[0058] Further, the distributed data processing orchestrator (402) includes the ML prediction unit for analyzing the first user query and the usage patterns corresponding to results of the first user query, to identify the interests of the user and the related network performance data for the first user query. Based on this analysis, specific dashboards are identified for pre-computation, and the identified dashboards are then pre-computed using the proactive pre-computation algorithm.
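The dashboard-identification step described above can be illustrated with a minimal sketch. The disclosure does not specify the ML prediction unit's internals; here a simple term-overlap and access-frequency heuristic stands in for it, and the names `score_dashboards` and `usage_log` are illustrative assumptions, not terms from the specification.

```python
from collections import Counter

def score_dashboards(usage_log, first_query_terms):
    """Rank dashboards for pre-computation.

    usage_log: historical (query_terms, dashboard_id) pairs.
    Dashboards whose past queries share terms with the current
    first user query are scored by the amount of overlap, a
    stand-in heuristic for the ML prediction unit.
    """
    scores = Counter()
    for query_terms, dashboard_id in usage_log:
        overlap = len(set(query_terms) & set(first_query_terms))
        if overlap:
            scores[dashboard_id] += overlap
    # Highest-scoring dashboards are pre-computed first.
    return [dashboard for dashboard, _ in scores.most_common()]

# Hypothetical usage history and query for illustration only.
usage_log = [
    (["latency", "5g"], "dash_latency"),
    (["latency", "jitter"], "dash_latency"),
    (["throughput"], "dash_throughput"),
]
ranked = score_dashboards(usage_log, ["latency"])
```

In this sketch, a first user query mentioning "latency" ranks the latency dashboard for pre-computation while unrelated dashboards are skipped.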
[0059] Upon pre-computation, the pre-computed dashboards are stored in the distributed file system (408), ensuring they are readily accessible across multiple servers. The distributed file system (408) is a storage system that ensures the precomputed data is accessible across multiple servers, allowing for high availability and scalability so that the data can be quickly retrieved when needed.
[0060] Further, a second user query is received from the user through the user interface (206). The second user query includes the request for the pre-computed data associated with the network performance.
[0061] Upon receiving the second user query related to the precomputed data associated with the network performance, the distributed computation master cluster (404) retrieves the information from the distributed file system (408) and provides the user with instant, real-time access to it.
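The store-then-serve behaviour described above can be sketched as follows. An in-memory dictionary stands in for the distributed file system (408); the class name `PrecomputedStore` and its methods are illustrative assumptions, not components named in the specification.

```python
class PrecomputedStore:
    """In-memory stand-in for the distributed file system (408):
    pre-computed dashboard results are written once and then
    served by a plain lookup, with no recomputation on read."""

    def __init__(self):
        self._store = {}

    def put(self, dashboard_id, result):
        # Called at pre-computation time (steps of FIG. 4).
        self._store[dashboard_id] = result

    def get(self, dashboard_id):
        # Real-time path for the second user query: lookup only.
        return self._store.get(dashboard_id)

store = PrecomputedStore()
store.put("dash_latency", {"avg_latency_ms": 42})
result = store.get("dash_latency")
```

The design point this sketch makes is that the second user query never touches the raw data lake; it only reads what the pre-computation step already wrote.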
[0062] FIG. 5 is a flow chart illustrating a method (500) for proactive precomputation of network performance data, according to various embodiments of the present system. For the purpose of description, the method (500) is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0063] At step 502, the method includes the step of analyzing, using the machine learning algorithm, a first user query and usage patterns corresponding to results of the first user query, to identify interests of a user and related network performance data for the first user query. Upon receiving the first user query, the user is enabled to select the pre-defined template, the dynamically populated dashboard, or the dynamically selected attributes associated with the network performance data.
[0064] At step 504, the method includes the step of determining the data among the related network performance data to be precomputed using a proactive pre-computation algorithm.
[0065] At step 506, the method includes the step of executing the proactive pre-computation algorithm to pre-compute the determined data among the related network performance data.
[0066] At step 508, the method includes the step of storing the pre-computed data in the storage unit (222). In an embodiment, the method includes the step of receiving the second user query from the user through the user interface (206). The second user query includes the request for the pre-computed data associated with the network performance. Upon receiving the second user query related to the precomputed data, the user is provided real-time access to the information from the storage unit (222).
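Steps 502 to 508 above can be sketched end to end. This is a minimal illustration under stated assumptions: the "analysis" is reduced to matching the query against past usage, and the "pre-computation" to averaging metric values; the names `method_500`, `usage_patterns` and `raw_data` are hypothetical, and the real disclosure leaves the ML algorithm and the pre-computed artefacts unspecified.

```python
def method_500(first_query, usage_patterns, raw_data, storage):
    # Step 502: analyse the query and usage patterns (ML stand-in:
    # metrics that co-occurred with this query in past usage).
    interests = [metric for query, metric in usage_patterns
                 if query == first_query]
    # Step 504: determine which related data to pre-compute.
    to_precompute = [m for m in interests if m in raw_data]
    # Step 506: execute the pre-computation (here: a simple mean).
    for metric in to_precompute:
        values = raw_data[metric]
        # Step 508: store the pre-computed result.
        storage[metric] = sum(values) / len(values)
    return storage

# Hypothetical inputs for illustration only.
storage = {}
usage = [("cell health", "latency_ms"), ("cell health", "drop_rate")]
raw = {"latency_ms": [10, 20, 30], "drop_rate": [0.1, 0.3]}
method_500("cell health", usage, raw, storage)
```

A later (second) user query for "latency_ms" would then be answered directly from `storage` without recomputation, as in step 508's embodiment.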
[0067] FIG. 6 is an exemplary signal flow diagram for the proactive precomputation of the network performance data, according to various embodiments of the present disclosure. For the purpose of description, the signal flow is described with the embodiments as illustrated in FIG. 4 and should nowhere be construed as limiting the scope of the present disclosure.
[0068] At step 602, the first user query is received from the user via user interface (206).
[0069] At step 604, the user is subsequently enabled to select the pre-defined template, the dynamically populated dashboard, or the dynamically selected attributes associated with the network performance data.
[0070] At step 606, the first user query, usage patterns corresponding to results of the first user query are analyzed to identify interests of the user and related network performance data for the first user query.
[0071] At step 608, upon analyzing the first user query, the data among the related network performance data to be precomputed using the proactive pre-computation algorithm is determined.
[0072] At step 610, upon determining the related network performance data to be precomputed, the determined data is pre-computed using the proactive pre-computation algorithm.
[0073] At step 612, subsequently the pre-computed data is stored in the storage unit (222).
[0074] At step 614, the second user query is received from the user via the user interface (206). The second user query includes the request for the pre-computed data associated with the network performance.
[0075] At step 616, upon receiving the second user query related to the precomputed data, the user is provided real-time access to information from the storage unit (222).
[0076] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions which are executed by the processor (202). The processor (202) is configured to analyze, using the machine learning algorithm, the first user query and usage patterns corresponding to results of the first user query to identify interests of the user and related network performance data for the first user query. The processor (202) is further configured to determine data among the related network performance data to be precomputed using the proactive pre-computation algorithm. The processor (202) is further configured to execute the proactive pre-computation algorithm to pre-compute the determined data among the related network performance data. The processor (202) is further configured to store the pre-computed data in the storage unit (222). The computing system of the present disclosure uses a proactive approach to computing and storing data, which allows for instant access and improved efficiency. By leveraging proactive pre-computation techniques, Machine Learning (ML) techniques and analysis of usage patterns, the system mitigates delayed data retrieval, enhancing the overall user experience and satisfaction.
[0077] With instant access to precomputed data, users can make real-time decisions and perform analysis without the need for processing delays. This enables faster and more efficient decision-making processes.
[0078] Instant access to data: By precomputing and storing relevant data in advance, the user experience is improved by providing real-time access to the desired data and eliminating the waiting time associated with on-demand processing and data retrieval.
[0079] Improved efficiency: The proactive approach reduces the computational load on the system by performing complex computations and data retrieval ahead of time. This leads to improved system efficiency as resources are utilized more effectively, allowing for faster response times and better overall performance.
[0080] Predictive capabilities: By leveraging proactive pre-computation techniques, ML techniques and analysis of usage patterns, the system can anticipate users' potential interests based on their current query. This predictive capability allows the system to proactively compute and store additional relevant data, making it available for users who might be interested in exploring related information. This personalized and proactive approach enhances the user experience by providing tailored and valuable insights.
[0081] Enhanced user experience: Users can benefit from a seamless and efficient experience as the elimination of delays in data retrieval improves user satisfaction and enables faster decision-making and analysis.
[0082] Scalability: Precomputing and storing relevant data in advance allows for better scalability. As the system handles increasing user demands, the precomputed data can be readily accessed, reducing the need for additional computational resources during peak times. This scalability ensures that the system can handle a larger user base and maintain a consistent level of performance.
[0083] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0084] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0085] Environment – 100
[0086] UEs – 102, 102-1 to 102-n
[0087] Server – 104
[0088] Communication network – 106
[0089] System – 108
[0090] Processor – 202
[0091] Memory – 204
[0092] User Interface – 206
[0093] Display – 208
[0094] Input unit – 210
[0095] Centralized Database – 214
[0096] Analysing unit – 216
[0097] Determining unit – 218
[0098] Executing unit – 220
[0099] Storage unit – 222
[00100] Receiving unit – 224
[00101] Selecting unit – 226
[00102] System – 300
[00103] Primary processors – 305
[00104] Memory – 310
[00105] Kernel – 315
[00106] Computing system – 400
[00107] Distributed data processing orchestrator – 402
[00108] Distributed computation master cluster – 404
[00109] Distributed data lake – 406
[00110] Distributed file system – 408
[00111] Template/dashboard/report unit – 410
CLAIMS
We Claim:
1. A method (500) for proactive precomputation of network performance data, the method (500) comprising the steps of:
analysing, by one or more processors (202), using a machine learning algorithm, a first user query and usage patterns corresponding to results of the first user query to identify interests of a user and related network performance data for the first user query;
determining, by the one or more processors (202), data among the related network performance data to be precomputed using a proactive pre-computation algorithm;
executing, by the one or more processors (202), the proactive pre-computation algorithm to pre-compute the determined data among the related network performance data; and
storing, by the one or more processors (202), the pre-computed data in a storage unit (222).
2. The method (500) as claimed in claim 1, comprises receiving, by the one or more processors (202), a second user query from the user, through a user interface (206), wherein the second user query comprises a request for the pre-computed data associated with the network performance.
3. The method (500) as claimed in claim 2, comprises enabling, by the one or more processors (202), the user to select a pre-defined template, a dynamically populated dashboard, or dynamically selected attributes associated with the network performance data.
4. The method (500) as claimed in claim 2, comprises providing real-time access, by the one or more processors (202), to information from the storage unit (222), upon receiving the second user query related to the precomputed data.
5. A system (108) for proactive precomputation of network performance data, the system (108) comprising:
an analysing unit (216), configured to analyse, using a machine learning algorithm, a first user query and usage patterns corresponding to results of the first user query to identify interests of a user and related network performance data for the first user query;
a determining unit (218) configured to determine data among the related network performance data to be precomputed using a proactive pre-computation algorithm;
an executing unit (220), configured to execute the proactive pre-computation algorithm to pre-compute the determined data among the related network performance data; and
a storage unit (222), configured to store the pre-computed data.
6. The system (108) as claimed in claim 5, comprises a receiving unit (224), configured to receive a second user query from the user, through a user interface (206), wherein the second user query comprises a request for the pre-computed data associated with the network performance.
7. The system (108) as claimed in claim 6, comprises a selecting unit (226), configured to enable the user to select a pre-defined template, a dynamically populated dashboard, or dynamically selected attributes associated with the network performance data.
8. The system (108) as claimed in claim 6, wherein the determining unit (218) provides real-time access to information from the storage unit (222), upon receiving the second user query related to the precomputed data.