ABSTRACT
METHOD AND SYSTEM FOR GENERATING ONE OR MORE PREDICTIONS
The present disclosure relates to a system (120) and a method (500) for generating one or more predictions. The method (500) includes the step of receiving data pertaining to operation of a network (105) from one or more sources. The method (500) includes the step of training one or more training models utilizing the received data. The method (500) includes the step of analysing the received data utilizing the one or more training models. The method (500) includes the step of generating one or more predictions based on the processing of the received data.
Ref. Fig. 5
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR GENERATING ONE OR MORE PREDICTIONS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates generally to wireless communication systems, and more particularly to a system and a method for generating one or more predictions.
BACKGROUND OF THE INVENTION
[0002] Machine Learning (ML) models have revolutionized how data processing takes place in various technology sectors, including telecommunication (telecom). The traditional systems face challenges of reduced accuracy of predictions using old, trained models. The accuracy of predictions from a well-trained model for novel and unseen scenarios may diminish over time if there are alterations in the underlying data patterns. Further, the traditional systems are prone to inefficient utilization of new data streams. The inability to seamlessly integrate and leverage emerging data streams hampers the ability to stay updated with the most relevant information for making informed decisions. Furthermore, the telecom systems face challenges of inadequate real-time decision-making. The current decision-making processes lack the speed and precision required for real-time scenarios, leading to suboptimal outcomes in critical situations. Moreover, the traditional systems lack scalability and adaptability, as these systems struggle to adapt to diverse data sources and changing decision-making requirements, limiting their effectiveness in evolving scenarios.
[0003] There is, therefore, a need for effective solutions for enhancing real-time decision-making through trained models and new data streams.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and system for generating one or more predictions.
[0005] In one aspect of the present invention, the system for generating the one or more predictions is disclosed. The system includes a receiving unit configured to receive data pertaining to operation of the network from one or more sources. The system includes a training unit configured to train one or more training models utilizing the received data. The system further includes an analysing unit configured to analyse the received data utilizing the one or more training models. The system further includes a generation unit configured to generate one or more predictions based on the processing of the received data.
[0006] In an embodiment, the system includes a conversion unit configured to convert a format of the received data to a standard format suitable for analysis of the received data, wherein the conversion unit is configured to perform normalization of the received data and extraction of features from the received data.
[0007] In an embodiment, the system includes a comparison unit configured to compare each of the one or more predictions with a predefined threshold to determine a level of certainty associated with each of the one or more predictions.
[0008] In an embodiment, the data relates to at least one of network functions data and microservices data, and wherein the one or more sources include at least one of network functions, microservices, a database, and file systems.
[0009] In an embodiment, the generation unit is configured to integrate real-time streaming data to update the one or more predictions and decisions, wherein the one or more predictions are obtained based on the data and the one or more trained models.
[0010] In an embodiment, the data is one of real time streaming data, non-real time streaming data, non-streaming data, and trained data.
[0011] In an embodiment, the one or more trained models are retrieved from a database.
[0012] In an embodiment, the one or more training models are retrieved from a database, and wherein the one or more training models are trained utilizing historical data and machine learning data-driven techniques.
[0013] In an embodiment, the one or more training models process the received data based on the nature of the received data and the one or more training models.
[0014] In an embodiment, the one or more generated predictions are rendered on a display of one of a User Equipment (UE) and a User Interface (UI).
[0015] In another aspect of the present invention, the method of generating the one or more predictions is disclosed. The method includes the step of receiving data pertaining to operation of the network from one or more sources. The method further includes the step of training one or more training models utilizing the received data. The method further includes the step of analysing the received data utilizing the one or more training models. The method further includes the step of generating one or more predictions based on the processing of the received data.
[0016] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive data pertaining to the operation of the network from one or more sources. The processor is configured to train one or more training models utilizing the received data. The processor is configured to analyse the received data utilizing the one or more training models. The processor is configured to generate one or more predictions based on the processing of the received data.
[0017] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0019] FIG. 1 is an exemplary block diagram of an environment for generating one or more predictions, according to one or more embodiments of the present invention;
[0020] FIG. 2 is an exemplary block diagram of a system for generating the one or more predictions, according to one or more embodiments of the present invention;
[0021] FIG. 3 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0022] FIG. 4 is a signal flow diagram for generating the one or more predictions, according to one or more embodiments of the present invention; and
[0023] FIG. 5 is a schematic representation of a method of generating the one or more predictions, according to one or more embodiments of the present invention.
[0024] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0025] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0026] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0027] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0028] FIG. 1 illustrates an exemplary block diagram of an environment 100 for generating the one or more predictions, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 110, a server 115, the network 105, and a system 120 communicably coupled to each other for generating the one or more predictions.
[0029] As per the illustrated embodiment and for the purpose of description and illustration, the UE 110 includes, but not limited to, a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 110 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 110a, the second UE 110b, and the third UE 110c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0030] In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0031] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0032] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0033] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0034] The environment 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is configured for generating the one or more predictions. As per one or more embodiments, the system 120 is adapted to be embedded within the server 115 or embedded as an individual entity.
[0035] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0036] FIG. 2 is an exemplary block diagram of the system 120 for generating the one or more predictions, according to one or more embodiments of the present invention.
[0037] As per the illustrated embodiment, the system 120 includes one or more processors 205, a memory 210, a user interface 215, and a database 220. For the purpose of description and explanation, the description will be explained with respect to one processor 205 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0038] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0039] In an embodiment, the user interface 215 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of such components include, but are not limited to, the UE 110 and the database 220.
[0040] The database 220 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0041] In order for the system 120 to generate the one or more predictions, the processor 205 includes one or more modules/units. In one embodiment, the one or more modules/units include, but are not limited to, a receiving unit 225, a conversion unit 230, a training unit 235, an analysing unit 240, a generation unit 245, and a comparison unit 250 communicably coupled to each other for generating the one or more predictions.
[0042] In one embodiment, the one or more modules/units may be used in combination or interchangeably for generating the one or more predictions.
[0043] The receiving unit 225, the conversion unit 230, the training unit 235, the analysing unit 240, the generation unit 245, and the comparison unit 250, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0044] In an embodiment, the receiving unit 225 is configured to receive data pertaining to operation of the network 105 from one or more sources. The data relates to at least one of network functions data and microservices data. The one or more sources refer to various systems from which data can be obtained for the purpose of, but not limited to, managing, monitoring, or analyzing operations of the network 105. The operations include, but are not limited to, network management, traffic management, performance monitoring, fault management, and resource management. The one or more sources include at least one of network functions, microservices, the database 220, and file systems.
[0045] The conversion unit 230 is configured to convert a format of the data received by the receiving unit 225 to a standard format suitable for analysis of the received data. The standard format ensures data compatibility with machine learning models by converting various data types into a uniform structure and addressing inconsistencies such as missing data. The standard format adheres to predefined schemas, such as JSON, CSV, or Parquet, making the data easily usable for tasks like feature engineering and model training. Data standardized from unstructured formats may seamlessly integrate real-time and historical data for network environments.
[0046] In an embodiment, raw data is not suitable for training the model, as the raw data contains various data in different formats. The conversion unit 230 is therefore configured to process the raw data and transform it into standardized data. The processing includes, but is not limited to, data cleaning, normalization of the received data, and extraction of features from the received data. For example, network latency may be measured in milliseconds in one source and in seconds in another source; the latency values are converted into a uniform measuring unit. The resulting standardized data, containing homogeneous data in a uniform format, is suitable for training the model.
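By way of a non-limiting illustration, the conversion step described above may be sketched in Python as follows. The function names, record structure, and values are hypothetical and are provided purely to illustrate unit normalization, data cleaning, and feature extraction:

```python
def standardize_latency(records):
    """Convert heterogeneous latency records to a uniform list in milliseconds.

    Each record is a (value, unit) pair, where unit is "ms" or "s".
    Records with a missing value are cleaned out.
    """
    cleaned = []
    for value, unit in records:
        if value is None:          # data cleaning: drop missing entries
            continue
        if unit == "s":            # normalization: seconds -> milliseconds
            value = value * 1000.0
        cleaned.append(value)
    return cleaned


def extract_features(latencies_ms):
    """Extract simple summary features from the standardized data."""
    return {
        "mean_ms": sum(latencies_ms) / len(latencies_ms),
        "max_ms": max(latencies_ms),
    }


# Raw data mixes milliseconds and seconds, and contains a missing value.
raw = [(120.0, "ms"), (0.5, "s"), (None, "ms"), (80.0, "ms")]
features = extract_features(standardize_latency(raw))
```

The resulting homogeneous feature set is what a training unit such as the one described below could consume; any real implementation would naturally handle many more data types and schemas.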
[0047] Upon conversion of the received data to the standard format by the conversion unit 230, the training unit 235 is configured to train one or more training models 305 (as shown in FIG. 3) utilizing the received data. The training model 305 refers to a machine learning model specifically designed to learn patterns from the training data. The training models 305 are trained to identify trends and patterns within the data, which can then be applied to predict outcomes when new or unseen data is encountered. The unseen data refers to data that was not part of the initial training set. Examples of the training model 305 include, but are not limited to, predictive models, classification models, anomaly detection models, and clustering models, which aim to recognize patterns and trends from both historical and real-time network data. Further, the training model 305 allows for accurate predictions and insight generation, enabling more effective network management by anticipating network behaviors and detecting anomalies before they cause issues.
[0048] In an embodiment, the one or more training models 305 are retrieved from the database 220. The one or more training models 305 are trained utilizing historical data and machine learning data-driven techniques. Various machine learning techniques may be utilized, including, but not limited to, predictive modelling, classification models, anomaly detection, and clustering techniques. The historical data stored in the database 220 includes, but is not limited to, past network performance metrics, user behavior logs, and other relevant information. Training involves applying machine learning algorithms to the received data to develop models that may predict future outcomes, classify events, or detect anomalies. Examples of the data used by the training models include parameters such as, but not limited to, bandwidth usage, latency, packet loss rates, network traffic volume, CPU usage, and call drop rates. The models utilize algorithms such as, but not limited to, time series forecasting, which may be used to predict future network congestion or performance based on past traffic patterns, and anomaly detection algorithms like isolation forest, which can detect unusual traffic patterns or signal anomalies indicative of potential network faults or security breaches. The machine learning data-driven techniques enable the models to continuously improve the adaptive prediction module 315 (as shown in FIG. 3) and adapt to changing conditions of the network 105.
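As a simplified, non-limiting stand-in for the training step above, the sketch below fits a moving-average forecaster on hypothetical historical bandwidth readings. A production system would employ richer techniques (e.g. full time-series models or isolation forests), but the pattern shown, namely fitting on historical data and then predicting on that basis, is the same; all names and values here are illustrative:

```python
class MovingAverageForecaster:
    """Toy time-series forecaster: predicts the mean of a trailing window."""

    def __init__(self, window=3):
        self.window = window
        self.history = []

    def fit(self, historical_values):
        """Train on historical network metrics (e.g. from a database)."""
        self.history = list(historical_values)
        return self

    def predict_next(self):
        """Forecast the next value as the mean of the trailing window."""
        recent = self.history[-self.window:]
        return sum(recent) / len(recent)


# Hypothetical historical bandwidth-usage samples.
model = MovingAverageForecaster(window=3).fit([40, 50, 60, 70, 80])
forecast = model.predict_next()
```

Because fitting and prediction are separated, such a model can be stored in and retrieved from a database between uses, mirroring the retrieval of the training models 305 from the database 220.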
[0049] Upon training of the one or more training models 305, the analysing unit 240 is configured to analyse the received data utilizing the one or more training models 305. The one or more training models 305 process and analyse the data based on the nature of the received data and the one or more training models. The nature of the received data and of the one or more training models 305 relates to the specific characteristics and requirements of the data being processed, as well as the type of machine learning models used for that processing. The nature of the received data depends on factors including, but not limited to, bandwidth data, latency-sensitive data, encrypted data, and location-based data. The nature of the received data determines the most appropriate one or more training models to use for processing, ensuring the system 120 operates efficiently and meets the required performance metrics. The one or more training models 305 include, but are not limited to, convolutional neural networks, recurrent neural networks, generative adversarial networks, and decision trees or random forests.
[0050] The generation unit 245 is further configured to integrate data to update the adaptive prediction module 315. The predictions of the adaptive prediction module 315 are obtained by applying the trained models to incoming real-time streaming data, continuously refining and updating the adaptive prediction module 315 as new data is received. The generation unit 245 continuously receives data streams in real time from various sources such as, but not limited to, network functions, microservices, or sensors, that reflect the current state of the network 105. The data is one of real-time streaming data, non-real-time streaming data, non-streaming data, and trained data. Further, in one embodiment, the one or more trained models are retrieved from the database 220.
[0051] Further, the generation unit 245 integrates real-time streaming data from sources such as network functions and microservices to update the adaptive prediction module 315 dynamically.
[0052] By processing the real-time streaming data alongside pre-trained models developed from historical data, the generation unit 245 generates accurate forecasts. The continuous integration performed by the generation unit 245 creates a feedback loop, allowing the adaptive prediction module 315 to evolve with changing conditions of the network 105 and improving overall network performance. To update the adaptive prediction module 315, the generation unit 245 applies pre-trained machine learning models that have learned from historical data. The trained models have been trained on previous network performance metrics and behaviors, allowing them to recognize patterns and trends that indicate potential issues. By continuously processing and analyzing the incoming data, the generation unit 245 maintains the feedback loop, ensuring that the adaptive prediction module 315 remains accurate and relevant. In one embodiment, the one or more predictions are rendered on a display of one of the UI 215 and the UE 110.
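The feedback loop described above, in which each real-time sample refines the running prediction, can be sketched with a simple exponentially weighted update. This is a minimal illustration under assumed names and weights, not the claimed implementation:

```python
class AdaptivePredictor:
    """Toy feedback loop: blends streaming observations into a prediction."""

    def __init__(self, initial_estimate, alpha=0.5):
        self.estimate = initial_estimate   # e.g. seeded by a pre-trained model
        self.alpha = alpha                 # weight given to new observations

    def update(self, observation):
        """Fold one real-time observation into the running prediction."""
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * observation
        return self.estimate


# The initial estimate stands in for a pre-trained model's forecast;
# the samples stand in for real-time streaming data from network functions.
predictor = AdaptivePredictor(initial_estimate=100.0, alpha=0.5)
for sample in [120.0, 80.0]:
    predictor.update(sample)
```

Each call to `update` plays the role of one pass around the feedback loop: the prediction moves toward current network conditions while retaining memory of the historical model.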
[0053] Upon receiving the generated predictions of the adaptive prediction module 315 from the generation unit 245, the comparison unit 250 is configured to compare each prediction of the adaptive prediction module 315 with a predefined threshold to determine a level of certainty associated with each prediction. Further, the comparison unit 250 quantifies the confidence in the predictions of the adaptive prediction module 315, based on historical data and current conditions, indicating how likely each prediction is to be accurate. A higher level of certainty suggests a more reliable prediction, while a lower level indicates uncertainty or potential inaccuracy. The level of certainty applies to, but is not limited to, network congestion prediction, signal quality assessment, anomaly detection, and resource utilization. The predefined thresholds are dynamically adjustable based on learning from historical data and real-time performance metrics. Learning occurs through machine learning algorithms that analyze past predictions and their accuracy, continuously refining the thresholds to enhance the reliability of the adaptive prediction module 315 and adapt to changing conditions.
[0054] In an embodiment, the predefined threshold is the specific value or set of values used by the comparison unit 250 to evaluate the certainty or reliability of each of the one or more predictions generated by the generation unit 245. The predefined threshold serves as the benchmark that determines whether the predictions of the adaptive prediction module 315 meet established criteria for action or response, such as, but not limited to, reallocating resources or triggering alerts.
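The threshold comparison described above may be sketched as follows. The threshold value and labels are illustrative assumptions, not values prescribed by the specification:

```python
# Predefined threshold (illustrative value only); in the described system
# such thresholds may be dynamically adjusted from historical accuracy.
CONFIDENCE_THRESHOLD = 0.8


def certainty_level(prediction_confidence, threshold=CONFIDENCE_THRESHOLD):
    """Label a prediction's certainty by comparison with the threshold.

    Returns "high" when the confidence meets the threshold, else "low";
    a "high" label could then trigger actions such as resource reallocation.
    """
    return "high" if prediction_confidence >= threshold else "low"


# Two hypothetical prediction confidence scores.
levels = [certainty_level(c) for c in (0.95, 0.6)]
```

Keeping the threshold as a parameter rather than a constant inside the function is what allows it to be retuned at runtime as the learning described above refines it.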
[0055] In an embodiment, the network performance metrics refer to key indicators used to assess and manage the quality and efficiency of network operations. The network performance metrics include, but are not limited to, a bandwidth utilization threshold, which measures the percentage of the total available bandwidth being used by network traffic, and a latency threshold, which measures the time data takes to travel from its source to its destination, typically measured in milliseconds. By defining and monitoring the network performance metrics, the network 105 can effectively manage resources, optimize performance, and ensure a high level of service quality for users.
[0056] In an embodiment, the level of certainty refers to the confidence or reliability associated with the predictions of the adaptive prediction module 315 made by the system 120. The level of certainty indicates how likely the predicted outcome is to occur. The level of certainty helps in making informed decisions, especially when the network needs to respond to potential issues or optimize performance. The level of certainty applies to, but is not limited to, predicting network congestion, identifying potential hardware failures, anomaly detection, and optimizing network slicing. The level of certainty associated with the adaptive prediction module 315 is crucial for maintaining high performance, reliability, and user satisfaction.
[0057] In an alternate embodiment, the training unit 235 receives a request for training the one or more training models from at least one of a microservice, a service and an application. In addition to providing the request, at least one of the microservice, the service and the application is further configured to provide the data from one of the trained models retrieved from the database 220, streaming data and non-streaming data. The analysing unit 240, the generation unit 245 and the comparison unit 250 perform the function as mentioned earlier and are not repeated for the sake of brevity, without limiting the scope of the present disclosure. The one or more predictions generated as such is rendered on the display of one of the UI 215 and the UE 110. In another embodiment, the one or more predictions as generated may be utilized for certain other objectives, such as allocating one or more resources and generating alerts.
[0058] FIG. 3 is an exemplary block diagram of an architecture 300 of the system 120 for generating the one or more predictions, according to one or more embodiments of the present invention. The one or more trained models 305 are hereinafter referred to as the trained model 305.
[0059] The architecture 300 includes the trained model 305, a retraining module 310, the database 220, an adaptive prediction module 315, and a workflow manager 320.
[0060] The user interface 215 serves as the interaction point for the user. In one embodiment, the point of interaction is the UE 110. The user interface 215 collects user inputs and displays outputs. The inputs from the user interface 215 are sent to the adaptive prediction module 315 for further processing.
[0061] The adaptive prediction module 315 is responsible for managing the prediction process, including the trained model 305 and the retraining module 310. The adaptive prediction module 315 interacts with multiple components to ensure that predictions are accurate and up to date. A component within the adaptive prediction module 315 handles the model training process and uses the data stored in the database 220 to train models and predict outcomes based on the trained model 305. The results of the predictions are sent to the workflow manager 320 and stored back in the database 220. The trained model 305 receives the data pertaining to operation of the network 105 from one or more sources. The one or more sources include at least one of network functions, microservices, the database 220, and file systems. The adaptive prediction module 315 immediately utilizes the trained model 305 to predict outcomes based on the data. The trained model 305, being a well-established model, generates accurate predictions based on the input data. The input data relates to at least one of network functions data and microservices data.
[0062] The retraining module 310 is responsible for updating the models when new data or feedback indicates that the existing model may not be performing optimally. The retraining module 310 ensures that the adaptive prediction module 315 is continuously improving over time. The retraining module 310 enables users to retrain the existing model to keep the adaptive prediction module 315 current and relevant. The retraining module 310 allows users to update and improve the existing model by retraining it on new data, ensuring that the model remains current and can adapt to changes in the data circumstances. The retraining module 310 enables faster retraining, reducing the time required compared to training the model from scratch.
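The warm retraining described above, which updates an existing model with new data rather than refitting on the full history, can be sketched with a toy incremental model. The class below is an illustrative assumption standing in for the trained model 305; it predicts a running mean and updates its parameters per sample.

```python
class RetrainableModel:
    """Toy stand-in for the trained model 305: predicts a running mean.

    `retrain` updates the existing parameters incrementally with new samples,
    which is cheaper than refitting on the entire history from scratch."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def retrain(self, new_samples):
        for x in new_samples:
            self.count += 1
            # Welford-style incremental mean update: no pass over old data needed.
            self.mean += (x - self.mean) / self.count

    def predict(self):
        return self.mean

model = RetrainableModel()
model.retrain([10.0, 20.0, 30.0])   # initial training data
model.retrain([40.0])               # new data arrives: warm update, no full refit
```

The same pattern generalizes to real incremental learners, where the retraining module 310 would feed freshly fetched data from the database 220 into an update step instead of re-running a full training job.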
[0063] All processed data, including the outputs from the retraining module 310 and the trained model 305, is stored in the database 220. The storage is essential for ensuring that all data is available for future analysis and decision-making processes. The adaptive prediction module 315 interacts with the database 220 to retrieve and store data.
[0064] Upon storing the processed data in the database 220, the workflow manager 320 oversees the overall process flow and ensures that each component works in coordination. The workflow manager 320 handles task scheduling and coordination between the user interface 215, the adaptive prediction module 315, and the database 220, and ensures that predictions and retraining are done efficiently.
[0065] FIG. 4 is a signal flow diagram for generating the one or more predictions, according to one or more embodiments of the present invention.
[0066] At step 405, the flow begins with receipt of the user input through the user interface 215. The user interface 215 serves as the interaction point where user inputs are collected and decisions on whether to proceed are made. The inputs are transmitted to the adaptive prediction module 315.
[0067] At step 410, upon receiving the inputs from the user interface 215, the adaptive prediction module 315 sends data to the trained model 305 for initial prediction. The adaptive prediction module 315 interacts with the trained model 305 and the retraining module 310 when model updates are needed. The adaptive prediction module 315 communicates with the database 220 to fetch and store the data and sends the results to the workflow manager 320. In one embodiment, the output of the adaptive prediction module 315 is rendered on the display of one of the UI 215 and the UE 110.
[0068] At step 415, upon receiving the data from the user interface 215 for initial prediction, the trained model 305 processes the prediction based on the existing data. The trained model 305 sends the prediction results back to the workflow manager 320.
[0069] At step 420, upon receiving the instructions from the trained model 305, the retraining module 310 fetches the new data from the database 220. The retraining module 310 sends the updated data back to the workflow manager 320.
[0070] At step 425, the database 220 receives and stores data from both the trained model 305 and the retraining module 310. The database 220 supplies data to both the adaptive prediction module 315 and the workflow manager 320 as needed.
[0071] At step 430, upon receiving the predicted data from the adaptive prediction module 315, the workflow manager 320 sends the final output back to the user interface 215 for display.
[0072] At step 435, upon receiving the final result from the workflow manager 320, the result is presented on the user interface 215 and a notification is provided to the user.
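The signal flow of steps 405 through 435 can be sketched as a small pipeline. The component names below follow the specification's reference numerals, but the internal logic (a placeholder model and a dictionary as the final output) is an illustrative assumption only.

```python
# Illustrative sketch of the FIG. 4 signal flow; not the actual implementation.
def adaptive_prediction_module(user_input, trained_model, database):
    """Step 410: forward user input to the trained model for an initial prediction."""
    prediction = trained_model(user_input)   # step 415: trained model 305 predicts
    database.append(prediction)              # step 425: database 220 stores the result
    return prediction

def workflow_manager(prediction):
    """Step 430: package the final output for display on the user interface 215."""
    return {"result": prediction, "notify_user": True}   # step 435: notify the user

database = []
trained_model = lambda x: x * 2              # placeholder for trained model 305
# Step 405: user input enters via the user interface and flows through the pipeline.
output = workflow_manager(adaptive_prediction_module(21, trained_model, database))
```

In a real deployment each function would be a separate service, with the workflow manager 320 scheduling the calls rather than a direct function-call chain.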
[0073] FIG. 5 is a flow diagram of a method 500 for generating the one or more predictions, according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should not be construed as limiting the scope of the present disclosure.
[0074] At step 505, the method 500 includes the step of receiving data pertaining to operation of the network 105 from one or more sources. The data relates to at least one of network functions data and microservices data. The one or more sources include at least one of network functions, microservices, the database 220, and file systems. Upon receiving the data, the method comprises the step of converting the format of the received data to a standard format suitable for analysis of the received data. The step of converting the format includes normalization of the received data and extraction of features from the received data.
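The conversion stage of step 505 can be sketched as follows. The specification does not name the concrete transforms, so min-max normalization and a mean/min/max feature vector are assumptions chosen purely for illustration.

```python
# Illustrative sketch of step 505's conversion: the exact normalization and
# feature-extraction methods are assumptions, not specified in the text.
def normalize(values):
    """Scale a list of numeric readings into the [0, 1] range (min-max)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def extract_features(values):
    """Extract a fixed feature vector (mean, min, max) from the raw readings."""
    return {"mean": sum(values) / len(values), "min": min(values), "max": max(values)}

raw = [10.0, 20.0, 30.0]                     # e.g. raw latency readings
standard_format = {"normalized": normalize(raw), "features": extract_features(raw)}
```

Converting heterogeneous inputs (network functions data, microservices data, file records) into one such standard structure is what allows a single set of training models to analyse all sources downstream.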
[0075] At step 510, the method 500 includes the step of training one or more training models utilizing the received data. The one or more training models are retrieved from the database 220. The one or more training models are trained utilizing historical data and machine learning data-driven techniques.
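A minimal training step on historical data might look like the following least-squares fit of a one-parameter linear model. The specification does not name a concrete learning algorithm, so this choice is an assumption made only to make step 510 concrete.

```python
# Illustrative training step for step 510: fit y ≈ w * x to historical pairs
# by least squares. The algorithm choice is an assumption for illustration.
def train(history):
    """Return the weight w minimising sum((y - w*x)^2) over (x, y) pairs."""
    num = sum(x * y for x, y in history)
    den = sum(x * x for x, _ in history)
    return num / den if den else 0.0

# Hypothetical historical data, e.g. (load, observed latency) samples.
historical_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(historical_data)   # learned weight, reusable for step 515 analysis
```

In the disclosed system the fitted parameters would be written back to the database 220 so that the trained model 305 can be retrieved later for prediction and for warm retraining.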
[0076] At step 515, the method 500 includes the step of analysing the received data utilizing the one or more training models. The one or more training models process the received data based on the nature of the received data and the one or more training models.
[0077] At step 520, the method 500 includes the step of generating the one or more predictions based on the processing of the received data. On generation of the one or more predictions, the method 500 comprises the step of comparing each of the one or more predictions with a predefined threshold to determine the level of certainty associated with each of the one or more predictions.
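The comparison in step 520 amounts to checking each prediction's confidence score against a predefined threshold. The threshold value and the "high"/"low" labels below are assumptions for illustration; the specification only requires that a comparison determines a level of certainty.

```python
# Sketch of step 520: compare each prediction score with a predefined
# threshold. Threshold value and labels are illustrative assumptions.
CERTAINTY_THRESHOLD = 0.75

def certainty_level(prediction_score):
    """Classify a prediction score as 'high' or 'low' certainty."""
    return "high" if prediction_score >= CERTAINTY_THRESHOLD else "low"

# Hypothetical confidence scores attached to three generated predictions.
levels = [certainty_level(s) for s in [0.92, 0.40, 0.75]]
```

Downstream consumers, such as a resource allocator or an alerting service, could then act only on predictions labelled with sufficient certainty.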
[0078] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to receive the data pertaining to operation of the network 105 from one or more sources. The processor 205 is further configured to train one or more training models utilizing the received data. The processor 205 is further configured to analyse the received data utilizing the one or more training models. The processor 205 is further configured to generate the one or more predictions based on the processing of the received data.
[0079] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0080] The present disclosure incorporates technical advancement by dynamically training and updating models with real-time data, ensuring continuous adaptation and high accuracy in predictions. The present disclosure automates data normalization and feature extraction, enhancing analysis efficiency and allowing seamless integration of diverse data sources. The ability to assess prediction certainty adds robustness to decision-making, while versatility in handling various data types ensures broad applicability across different network environments. Additionally, the design is scalable, flexible, and modular, allowing for efficient optimization and reuse across varying network sizes and configurations.
[0081] The present invention offers significant advantages by offering improved accuracy in predictions, enabling proactive adjustments to maintain optimal performance. It allows for real-time analysis of network data, leading to quicker decision-making and reducing the time needed for model retraining. By automating predictions, the invention ensures consistent high-quality service, and the integration of adaptive learning improves the invention's ability to respond to dynamic network conditions. Additionally, it facilitates efficient resource management, thereby optimizing network operations and boosting overall reliability.
[0082] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0083] Environment – 100
[0084] Network- 105
[0085] User Equipment (UE) - 110
[0086] Server - 115
[0087] System -120
[0088] Processor - 205
[0089] Memory - 210
[0090] User Interface - 215
[0091] Database - 220
[0092] Receiving unit - 225
[0093] Conversion unit - 230
[0094] Training unit - 235
[0095] Analysing unit - 240
[0096] Generation unit – 245
[0097] Comparison unit - 250
[0098] Trained model - 305
[0099] Retraining module – 310
[00100] Adaptive prediction module - 315
[00101] Workflow manager - 320
CLAIMS
We Claim:
1. A method (500) for generating one or more predictions with trained models, the method (500) comprising the steps of:
receiving, by the one or more processors (205), data pertaining to operation of the network (105) from one or more sources;
training, by the one or more processors (205), one or more training models utilizing the received data;
analysing, by the one or more processors (205), the received data utilizing the one or more training models; and
generating, by the one or more processors (205), one or more predictions based on the processing of the received data.
2. The method (500) as claimed in claim 1, wherein on receiving the data, the method comprises the step of converting, by the one or more processors (205), a format of the received data to a standard format suitable for analysis of the received data, wherein the step of converting the format includes normalization of the received data and extraction of features from the received data.
3. The method (500) as claimed in claim 1, wherein on generation of the one or more predictions, the method comprises the step of comparing, by the one or more processors (205), each of the one or more predictions with a predefined threshold to determine a level of certainty associated with each of the one or more predictions.
4. The method (500) as claimed in claim 1, wherein the data relates to at least one of, network functions data and microservices data, and wherein the one or more sources is at least one of network functions, microservices, database, and file systems.
5. The method (500) as claimed in claim 1, wherein the step of generating, by the one or more processors (205), one or more predictions based on the processing of the received data comprises the step of integrating data to update the one or more predictions and decisions, wherein the one or more predictions are received based on the data and the one or more trained models.
6. The method (500) as claimed in claim 5, wherein the data is one of real time streaming data, non-real time streaming data, non-streaming data, and trained data.
7. The method (500) as claimed in claim 5, wherein the one or more trained models is retrieved from a database (220).
8. The method (500) as claimed in claim 1, wherein the one or more training models are retrieved from a database, and wherein the one or more training models are trained utilizing historical data and machine learning data-driven techniques.
9. The method (500) as claimed in claim 1, wherein the one or more training models process the received data based on the nature of the received data and the one or more training models.
10. The method as claimed in claim 1, wherein the one or more generated predictions are rendered on a display of one of a User Equipment (UE) and a User Interface (UI).
11. A system (120) for generating one or more predictions with trained models, the system (120) comprising:
a receiving unit (225) configured to receive data pertaining to operation of the network (105) from one or more sources;
a training unit (235) configured to train one or more training models utilizing the received data;
an analysing unit (240) configured to analyse the received data utilizing the one or more training models; and
a generation unit (245) configured to generate one or more predictions based on the processing of the received data.
12. The system (120) as claimed in claim 11, comprising a conversion unit (230) configured to convert a format of the received data to a standard format suitable for analysis of the received data, wherein the conversion unit (230) is configured to perform normalization of the received data and extraction of features from the received data.
13. The system (120) as claimed in claim 11, comprising a comparison unit (250) configured to compare each of the one or more predictions with a predefined threshold to determine a level of certainty associated with each of the one or more predictions.
14. The system (120) as claimed in claim 11, wherein the data relates to at least one of, network functions data and microservices data, and wherein the one or more sources is at least one of network functions, microservices, database, and file systems.
15. The system (120) as claimed in claim 11, wherein the generation unit (245) is configured to integrate real-time streaming data to update the one or more predictions and decisions, wherein the one or more predictions are received based on the data and the one or more trained models.
16. The system (120) as claimed in claim 11, wherein the data is one of real time streaming data, non-real time streaming data, non-streaming data, and trained data.
17. The system (120) as claimed in claim 11, wherein the one or more trained models is retrieved from a database (220).
18. The system (120) as claimed in claim 11, wherein the one or more training models are retrieved from a database (220), and wherein the one or more training models are trained utilizing historical data and machine learning data-driven techniques.
19. The system (120) as claimed in claim 11, wherein the one or more training models process the received data based on the nature of the received data and the one or more training models.
20. The system (120) as claimed in claim 11, wherein the one or more generated predictions are rendered on a display of one of a User Equipment (UE) and a User Interface (UI).
| # | Name | Date |
|---|---|---|
| 1 | 202321067262-STATEMENT OF UNDERTAKING (FORM 3) [06-10-2023(online)].pdf | 2023-10-06 |
| 2 | 202321067262-PROVISIONAL SPECIFICATION [06-10-2023(online)].pdf | 2023-10-06 |
| 3 | 202321067262-FORM 1 [06-10-2023(online)].pdf | 2023-10-06 |
| 4 | 202321067262-FIGURE OF ABSTRACT [06-10-2023(online)].pdf | 2023-10-06 |
| 5 | 202321067262-DRAWINGS [06-10-2023(online)].pdf | 2023-10-06 |
| 6 | 202321067262-DECLARATION OF INVENTORSHIP (FORM 5) [06-10-2023(online)].pdf | 2023-10-06 |
| 7 | 202321067262-FORM-26 [27-11-2023(online)].pdf | 2023-11-27 |
| 8 | 202321067262-Proof of Right [12-02-2024(online)].pdf | 2024-02-12 |
| 9 | 202321067262-DRAWING [07-10-2024(online)].pdf | 2024-10-07 |
| 10 | 202321067262-COMPLETE SPECIFICATION [07-10-2024(online)].pdf | 2024-10-07 |
| 11 | Abstract.jpg | 2025-01-02 |
| 12 | 202321067262-Power of Attorney [24-01-2025(online)].pdf | 2025-01-24 |
| 13 | 202321067262-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf | 2025-01-24 |
| 14 | 202321067262-Covering Letter [24-01-2025(online)].pdf | 2025-01-24 |
| 15 | 202321067262-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf | 2025-01-24 |
| 16 | 202321067262-FORM 3 [31-01-2025(online)].pdf | 2025-01-31 |