
Method And System For Forecasting Events In A Network

Abstract: The present disclosure relates to a system (120) and a method (500) for forecasting events in a network (105). The method (500) includes the step of retrieving data from one or more data sources. The method (500) further includes the step of training each of a plurality of models with the retrieved data. The method (500) further includes the step of forecasting utilizing each of the plurality of trained models for one or more events. The method (500) further includes the step of generating an output for each of the trained models. The method (500) further includes the step of rendering the output generated for each of the trained models to a user. Ref. Fig. 6


Patent Information

Application #
Filing Date
06 October 2023
Publication Number
15/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR FORECASTING EVENTS IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380005, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates generally to a wireless communication system, and in particular to, a system and a method for forecasting events in a network.
BACKGROUND OF THE INVENTION
[0002] Generally, in a telecommunication network, various models may be trained utilizing training data that may be stored in storage devices. The trained models may provide outputs in the form of numbers, metrics, etc. Utilizing these outputs, predictions may be made by the models.
[0003] The complex machine learning models used for forecasting events may lead to a lack of consumers' trust in and understanding of the models' predictions. In particular, the outputs provided by machine learning algorithms may be difficult to study, making it hard to understand trends, patterns, complex datasets, etc.
[0004] The consumer may also face issues in comparing the predictions provided by the various models that may lead to issues in selecting the appropriate model depending on the provided data.
[0005] There is, therefore, a dire need for a system and method for optimally forecasting events and representation thereof on a display device that allows user friendly comparative study of the models by the consumers.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and system for forecasting events in a network.
[0007] In one aspect of the present invention, the method for forecasting events in the network is disclosed. The method includes the step of retrieving data from one or more data sources. The method further includes the step of training each of a plurality of models with the retrieved data. The method further includes the step of forecasting, utilizing each of the plurality of trained models, one or more events. The method further includes the step of generating an output for each of the trained models. The method further includes the step of rendering the output generated for each of the trained models to a user.
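The five claimed steps can be sketched as a simple pipeline. The function and class names below are illustrative only and do not appear in the specification; each model is assumed to expose `fit` and `predict` methods.

```python
# Hypothetical sketch of the claimed method: retrieve, train, forecast,
# generate an output per model, and render the outputs.
from dataclasses import dataclass, field

@dataclass
class ForecastResult:
    model_name: str
    forecast: list          # forecasted values produced by one trained model
    metrics: dict = field(default_factory=dict)

def run_pipeline(data_sources, models, horizon):
    # Step 1: retrieve data from one or more data sources (each a callable)
    data = [x for src in data_sources for x in src()]
    results = []
    for name, model in models.items():
        # Step 2: train each of the plurality of models with the retrieved data
        model.fit(data)
        # Step 3: forecast one or more events with each trained model
        forecast = model.predict(horizon)
        # Step 4: generate an output for each trained model
        results.append(ForecastResult(name, forecast))
    # Step 5: render the outputs (here, simply return them to the caller)
    return results
```

Any object with `fit`/`predict` methods could stand in for a model in this sketch.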
[0008] In an embodiment, the one or more data sources include at least one of, file input, data from source path, input stream, Hypertext Transfer Protocol 2 (HTTP2), Distributed File System (DFS) and data from Network Access Server (NAS).
[0009] In an embodiment, the step of training each of a plurality of models with the retrieved data, includes the steps of identifying patterns from the data and enabling each of the plurality of models to learn the identified patterns of the data.
[0010] In an embodiment, prior to the step of training, the method further includes the steps of pre-processing the data, selecting one or more features for training each of the plurality of models, and configuring hyperparameters for each of the plurality of models.
[0011] In an embodiment, the step of forecasting, utilizing each of the plurality of trained models, one or more events includes the step of forecasting, utilizing each of the plurality of trained models, the one or more events based on an input, such as a data range, and the learnt patterns of the data.
[0012] In an embodiment, the step of generating an output for each of the trained models includes at least one of: generating a training status list for each of the trained models, including information on the time of training; generating at least one of actual values, test values, and forecasted values for each of the trained models; and generating the one or more performance indicators for each of the trained models.
[0013] In an embodiment, the step of rendering the output generated for each of the trained models to the user includes the step of displaying on a display device, the output generated, wherein the output generated is displayed on the display device in at least one of, a tabular view or a graphical view.
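As an illustration of the tabular view described above, a minimal plain-text rendering might look as follows; the column set and the model names used in the example are hypothetical, not taken from the specification.

```python
# Illustrative only: rendering each trained model's output as a tabular
# view (model name, forecasted value, performance indicator).
def tabular_view(rows):
    # rows: iterable of (model_name, forecast_value, rmse) tuples
    header = f"{'Model':<12}{'Forecast':<12}{'RMSE':<8}"
    lines = [header] + [f"{m:<12}{f:<12}{r:<8}" for m, f, r in rows]
    return "\n".join(lines)
```

A graphical view would typically be produced by a plotting library instead, but the same per-model rows would feed it.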
[0014] In another aspect of the present invention, the system for forecasting events in the network is disclosed. The system includes a data integrator unit configured to retrieve data from one or more data sources. The system further includes a model training unit configured to train each of the plurality of models with the retrieved data. The system further includes a forecasting engine configured to forecast, utilizing each of the plurality of trained models, one or more events. The system further includes an output generating unit configured to generate an output for each of the trained models. The system further includes a graphical representation unit configured to render the output generated for each of the trained models to the user.
[0015] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to retrieve data from one or more data sources. The processor is further configured to train each of the plurality of models with the retrieved data. The processor is further configured to forecast utilizing each of the plurality of trained models, one or more events. The processor is further configured to generate an output for each of the trained models. The processor is further configured to render the output generated for each of the trained models to the user.
[0016] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors coupled with a memory. The one or more primary processors cause the UE to render the output generated for each of the trained models.
[0017] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0019] FIG. 1 is an exemplary block diagram of an environment for forecasting events in a network, according to one or more embodiments of the present invention;
[0020] FIG. 2 is an exemplary block diagram of a system for forecasting events in the network, according to one or more embodiments of the present invention;
[0021] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to the one or more embodiments of the present invention;
[0022] FIG. 4 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0023] FIG. 5 is a flow diagram for forecasting events in the network, according to one or more embodiments of the present invention; and
[0024] FIG. 6 is a schematic representation of a method for forecasting events in the network, according to one or more embodiments of the present invention.
[0025] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0026] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0027] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0028] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0029] FIG. 1 illustrates an exemplary block diagram of an environment 100 for forecasting events in a network 105, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 110, a server 115, the network 105 and a system 120 communicably coupled to each other for forecasting events in the network 105.
[0030] As per the illustrated embodiment and for the purpose of description and illustration, the UE 110 includes, but not limited to, a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 110 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 110a, the second UE 110b, and the third UE 110c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0031] In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0032] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0033] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0034] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include a Voice over Internet Protocol (VoIP) network.
[0035] The environment 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is configured for forecasting the events in the network 105. As per one or more embodiments, the system 120 is adapted to be embedded within the server 115 or embedded as an individual entity.
[0036] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0037] FIG. 2 is an exemplary block diagram of the system 120 for forecasting the events in the network 105, according to one or more embodiments of the present invention.
[0038] As per the illustrated embodiment, the system 120 includes one or more processors 205, a memory 210, a User Interface (UI) 215, and a database 220. For the purpose of description and explanation, the description will be explained with respect to one processor 205 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0039] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0040] In an embodiment, the UI 215 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The UI 215 facilitates communication of the system 120. In one embodiment, the UI 215 provides a communication pathway for one or more components of the system 120. Examples of such components include, but are not limited to, the UE 110 and the database 220.
[0041] The database 220 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 220 types are non-limiting and may not be mutually exclusive (e.g., a database can be both commercial and cloud-based, or both relational and open-source).
[0042] In order for the system 120 to forecast events in the network 105, the processor 205 includes one or more modules. In one embodiment, the one or more modules/units include, but are not limited to, a data integrator unit 225, a pre-processing unit 230, a model training unit 235, a forecasting engine 240, an output generating unit 245, and a graphical representation unit 250 communicably coupled to each other for forecasting the events in the network 105.
[0043] In one embodiment, the one or more modules may be used in combination or interchangeably for forecasting the events in the network 105.
[0044] The data integrator unit 225, the pre-processing unit 230, the model training unit 235, the forecasting engine 240, the output generating unit 245, and the graphical representation unit 250, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0045] In an embodiment, the data integrator unit 225 is configured to retrieve data from one or more data sources. The historical data refers to previously collected and stored information from various network-related activities, events, and system behaviours that serves as the foundation for training forecasting models. The historical data typically includes a wide array of data points that describe past states or actions within the network 105. The historical data enables the system 120 to learn patterns corresponding to the network 105 that may be used to predict future events. The patterns refer to recurring behaviours, trends, or relationships identified within the data that describe the performance and activities of the network 105. The patterns emerge from the analysis of various network-related events, such as, but not limited to, traffic spikes, user behaviour during peak hours, seasonal fluctuations in demand, and system responses to varying levels of congestion. By detecting the patterns, the system 120 is able to forecast future network events, like potential congestion or service degradation, allowing operators to make proactive adjustments to optimize performance. The one or more data sources refer to the various systems, platforms, or interfaces from which historical data is collected and retrieved to train the machine learning models for forecasting. The one or more data sources may be diverse and may include, but are not limited to, structured, semi-structured, or unstructured data coming from different parts of the network infrastructure or external systems.
[0046] In an embodiment, the one or more data sources include at least one of, but not limited to, a file input, data from a source path, an input stream, a Hypertext Transfer Protocol 2 (HTTP2), a Distributed File System (DFS), and data from a Network Access Server (NAS). The system 120 retrieves the data from the file input to analyse past network behaviour and identify patterns that may help predict future events, such as, but not limited to, a potential service outage or network congestion during peak hours.
[0047] The data from the source path refers to accessing data stored at a specific file path or directory within the network 105. The system 120 may automate the retrieval of new files from the source path as they are generated, ensuring that the forecasting models are continuously trained on the most recent data. The data from the source path helps in predicting issues like handover failures between cells in the network 105.
[0048] The input stream is a continuous flow of data that provides real-time or near-real-time network information. Processing the streaming data allows the system 120 to detect immediate or short-term anomalies, such as a sudden spike in traffic that might indicate a denial-of-service attack.
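A minimal sketch of such short-term anomaly detection on an input stream, assuming a simple rolling-mean threshold; the function name, window size, and threshold factor are illustrative assumptions, not taken from the specification.

```python
# Hypothetical sketch: flag a sudden traffic spike in a stream of
# per-interval request counts using a rolling-mean threshold.
from collections import deque

def spike_alerts(stream, window=5, factor=3.0):
    recent = deque(maxlen=window)          # last `window` observed values
    for t, value in enumerate(stream):
        # Alert when the current value far exceeds the recent average,
        # which might indicate a denial-of-service-like spike.
        if len(recent) == window and value > factor * (sum(recent) / window):
            yield t, value
        recent.append(value)
```

A production system would likely use a more robust statistic (e.g. a median or an exponentially weighted mean), but the streaming shape is the same.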
[0049] The HTTP/2 is a protocol for efficient data transfer over the internet. The HTTP/2 allows multiple streams of data to be sent concurrently over a single connection. The HTTP/2 improves the speed and performance of data retrieval, which can be leveraged to gather data from monitoring tools or external systems that track applications and services, such as, but not limited to, Mobile Edge Computing (MEC) or Virtualized Network Functions (VNFs), aiding in predicting service degradation or system bottlenecks.
[0050] The DFS is designed for handling vast amounts of data across multiple nodes in a distributed network. The infrastructure needs to handle the growing data demands from user devices, IoT systems, and Ultra-Reliable Low-Latency Communication (URLLC). The DFS, with its scalable and fault-tolerant architecture, is well-suited for storing large datasets generated by network functions like the Core Network (CN) and the Radio Access Network (RAN), and applications like edge computing. Examples of DFS usage include, but are not limited to, data processing in the DFS, data generation, visualization and reporting, data ingestion, and data storage.
[0051] Upon retrieving the data from the one or more data sources, the model training unit 235 is configured to train each of a plurality of models with the retrieved data. In an embodiment, training each of the plurality of models with the retrieved data in the model training unit 235 corresponds to applying one or more machine learning or statistical techniques, algorithms, and decision-making processes to the retrieved data. The one or more logics aid in identifying patterns, trends, and relationships within the historical data to enable each of the plurality of models to make accurate predictions or to forecast events accurately. The logic refers to the rules and principles guiding data analysis and decision-making, often based on predefined criteria. In contrast, Machine Learning (ML) models are mathematical frameworks that learn from data to make predictions or classifications, automatically adjusting their parameters based on the input. While logic provides the structured framework for identifying patterns, the ML models may adapt and improve their predictions over time as they encounter new data. Thus, the model training unit 235 utilizes both the logics, to establish the analytical framework, and the ML models, to learn from historical data, enhancing the accuracy of event forecasting in the network environment.
[0052] Accordingly, the model training unit 235 trains each of the plurality of models with the retrieved data to identify patterns from the data, thereby enabling each of the plurality of models to learn the identified trends/patterns of the data. Further, the data pre-processing unit 230 is configured to pre-process the retrieved data, select one or more features for training each of the plurality of models, and configure hyperparameters for each of the plurality of models. Feature selection involves identifying the most relevant data attributes that significantly influence a model's ability to make accurate predictions. The one or more features are chosen from the data and represent important aspects of the network 105. Examples of features in the network include at least one of, but are not limited to, traffic volume, signal strength, bandwidth utilization, handover success rate, time of day and day of week, and a combination thereof.
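Training a plurality of models on the same retrieved series, so that their outputs can later be compared, might be sketched as follows. Both model choices (a moving average and a linear trend) are illustrative stand-ins, not models named in the specification.

```python
# Illustrative sketch: train two simple forecasting models on the same
# historical series; each returns a callable that forecasts h steps ahead.
def train_moving_average(series, window=3):
    # Flat forecast at the mean of the most recent `window` values.
    level = sum(series[-window:]) / window
    return lambda h: [level] * h

def train_linear_trend(series):
    # Ordinary least squares fit of value against time index.
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(series)) \
            / sum((i - xbar) ** 2 for i in range(n))
    intercept = ybar - slope * xbar
    # Forecast continues the fitted line past the last observed index.
    return lambda h: [intercept + slope * (n - 1 + k) for k in range(1, h + 1)]
```

Running both trained models over the same horizon yields the comparable per-model outputs that the later units generate and render.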
[0053] In an embodiment, the hyperparameters of each model are parameters that are not learned from the data but are configured before training begins. Properly tuning the hyperparameters may significantly impact the model's performance and its ability to generalize from the training data. Examples of the hyperparameters in network models include, but are not limited to, learning rate, batch size, number of layers in the deep learning models, and dropout rate.
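A hedged sketch of hyperparameter configuration: here a single illustrative hyperparameter (the window of a moving-average model) is selected on a holdout split; a real deployment would tune the parameters listed above (learning rate, batch size, number of layers, dropout rate) with the same pattern.

```python
# Hypothetical example: choose a hyperparameter value before training the
# final model by scoring candidates on a small holdout set.
def tune_window(series, candidates=(2, 3, 4), holdout=2):
    train, test = series[:-holdout], series[-holdout:]

    def holdout_mae(window):
        # Flat moving-average forecast from the training tail,
        # scored by mean absolute error on the holdout values.
        level = sum(train[-window:]) / window
        return sum(abs(level - y) for y in test) / holdout

    return min(candidates, key=holdout_mae)
```

The chosen value would then be fixed before the actual training run, matching the "configured before training begins" behaviour described above.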
[0054] Thereafter, the forecasting engine 240 is configured to forecast, utilizing each of the plurality of trained models, one or more events that may occur within the network 105. The one or more events include, but are not limited to, network congestion, service degradation, handover failures, and anomalies in the behaviour of the network 105.
[0055] The network congestion may occur during peak hours in densely populated urban areas, where a sudden surge in connected devices may overwhelm the available network capacity. In this scenario, the forecasting engine 240 may predict the event based on historical traffic patterns and subsequently trigger load-balancing measures to mitigate the impact. The service degradation may occur during a live-streamed sports event, where a large number of users consume bandwidth-heavy services like High Definition (HD) video streaming. By analysing indicators like rising latency or reduced throughput, the forecasting engine 240 may anticipate the service degradation. The forecasting engine 240 also forecasts network anomalies, such as abnormal increases in packet loss or unusual traffic patterns that could indicate a Distributed Denial of Service (DDoS) attack.
[0056] Additionally, the forecasting engine 240 forecasts, utilizing each of the plurality of trained models, one or more events based on the input and the learnt patterns of the data. The events refer to significant occurrences or conditions within the network 105 that may impact the network’s performance. The events are forecasted based on historical data and learned patterns, enabling proactive measures to maintain network efficiency. Examples of the events include, but are not limited to, network congestion, service degradation, seasonal traffic fluctuations, and maintenance or outages. The forecasting engine 240 analyzes past performance metrics and user behavior to identify key indicators, such as, but not limited to, seasonal traffic fluctuations and peak usage times. For instance, historical data revealing user traffic spikes during specific hours or events enables the forecasting engine 240 to anticipate similar patterns in the future, thereby enabling predictions of network congestion or service degradation. Such predictions empower network operators to take proactive measures, like adjusting resource allocation and implementing load balancing, ensuring optimal performance and enhancing overall service quality for users.
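As a non-limiting sketch of forecasting from learned patterns, the following assumes toy hourly traffic samples and a congestion threshold; the function names and data are illustrative assumptions, not part of the disclosed engine. The "learnt pattern" here is simply the mean traffic per hour of day, from which peak-hour congestion can be flagged in advance:

```python
from collections import defaultdict

def learn_hourly_pattern(history):
    """history: list of (hour_of_day, traffic) samples.
    Returns the mean traffic per hour -- the 'learnt pattern'."""
    sums, counts = defaultdict(float), defaultdict(int)
    for hour, traffic in history:
        sums[hour] += traffic
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

def forecast(pattern, hours, threshold):
    """Flag hours whose expected traffic exceeds a congestion threshold."""
    return [(h, pattern[h], pattern[h] > threshold) for h in hours]

# Two days of toy samples: traffic peaks at hour 20 (evening).
history = [(8, 40.0), (20, 95.0), (8, 44.0), (20, 105.0)]
pattern = learn_hourly_pattern(history)
print(forecast(pattern, [8, 20], threshold=80.0))
```

An operator could act on the flagged hours, e.g. by pre-allocating capacity before the predicted evening peak.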
[0057] Upon forecasting the one or more events using each of the plurality of trained models, the output generating unit 245 is configured to generate an output for each of the trained models.
[0058] The output generated for each of the trained models includes at least one of: a training status list for each of the trained models including information on the time of training; at least one of actual values, test values, and forecasted values for each of the trained models; and information on one or more performance indicators for each of the trained models. The performance metrics that may be utilized for evaluating the trained models include, but are not limited to, accuracy, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), confusion matrix, and training time. The training status list includes the status of each model's training process, such as when the model was last trained and how long the training took; the actual values observed in the data which the model was trying to predict; the predicted values generated by the model during the testing phase; and future predictions made by each model based on the retrieved data. Accuracy captures the overall correctness of the model's predictions, while RMSE is the metric used to evaluate the difference between the predicted values and the actual values. A lower RMSE indicates better performance.
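As a minimal, non-limiting illustration of two of the performance indicators named above, RMSE and MAE can be computed directly from paired actual and predicted values (the sample series is assumed for illustration):

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error: lower values indicate better performance."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean Absolute Error: mean magnitude of the prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [10.0, 12.0, 14.0, 16.0]
predicted = [11.0, 12.0, 13.0, 18.0]
print(rmse(actual, predicted), mae(actual, predicted))
```

Because RMSE squares each error before averaging, it penalizes large individual misses more heavily than MAE does, which is why the two metrics are often reported together.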
[0059] Thereafter, the graphical representation unit 250 is configured to render the output generated by the output generating unit 245 for each of the plurality of the models to the user.
[0060] In one embodiment, the graphical representation unit 250 renders the output generated for each of the trained models to the user by displaying the generated output on one of the UE 110 and the UI 215. The output generated is rendered on the UE 110 and the UI 215 in at least one of a tabular view or a graphical view. The output may include, but is not limited to, predictions, accuracy measures, training status, and other data points provided by each of the plurality of models. The output is shown on a physical screen, such as, but not limited to, a computer monitor, tablet, or any other device with a display. In one embodiment, the generated output is presented in a table format, showing the data in rows and columns. The table format allows the user to easily compare actual versus predicted values as well as performance metrics. In another embodiment, the output is displayed visually using charts or graphs, for example, line charts, bar charts, or heat maps. The graphical view makes it easier for users to identify patterns and outliers quickly.
[0061] FIG. 3 describes a preferred embodiment of the system 120 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a and the system 120 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0062] As mentioned earlier in FIG. 1, each of the first UE 110a, the second UE 110b, and the third UE 110c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 110a without deviating from or limiting the scope of the present disclosure. The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120.
[0063] The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 causes the first UE 110a to transmit the request to the one or more processors, based on the user selecting or customizing the one or more parameters via the UI 215 of the UE 110, to forecast one or more events in the network 105.
[0064] As mentioned earlier in FIG. 2, the one or more processors 205 of the system 120 is configured to forecast the one or more events. As per the illustrated embodiment, the system 120 includes the one or more processors 205, the memory 210, the UI 215, and the database 220. The operations and functions of the one or more processors 205, the memory 210, the UI 215, and the database 220 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0065] Further, the processor 205 includes the data integrator unit 225, the pre-processing unit 230, the model training unit 235, the forecasting engine 240, the output generating unit 245, and the graphical representation unit 250. The operations and functions of the data integrator unit 225, the pre-processing unit 230, the model training unit 235, the forecasting engine 240, the output generating unit 245, and the graphical representation unit 250 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3, should be read with the description provided for the system 120 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.

[0066] FIG. 4 is an exemplary block diagram of an architecture 400 of the system 120 for forecasting the events in the network 105, according to one or more embodiments of the present invention. The architecture 400 includes a data source 405, a data integration 225, a data pre-processing 230, a model training 235, a prediction module 240, a graphic representation unit 250, pre-trained models 415, and the database 220.
[0067] The data source 405 refers to the origins of the data used for training and forecasting. The data source 405 includes various sources like the file, source paths, input streams, HTTP, DFS, and NAS. Data is initially retrieved from these sources and passed to the next stage, the data integration module 225. Various forms of historical or real-time data relevant to network operations are gathered from the data source 405. The data source 405 is the starting point of the forecasting process, feeding directly into the data integration 225 for further processing.
[0068] The data integration 225 is responsible for collecting and organizing data from the data source 405. The data integration 225 ensures that the data is formatted correctly and standardized for further processing. For example, data retrieved from NAS or DFS might be in different formats or structures, and the data integration 225 consolidates it into a unified format. The data integration 225 step is crucial for ensuring that the data is properly aligned for pre-processing. After data integration 225, the information is passed to the data pre-processing 230 to clean, filter, and refine the data.
[0069] The data pre-processing 230 is responsible for cleaning, transforming, and preparing the data for analysis and the model training 235. The pre-processing 230 includes, but is not limited to, removing irrelevant or redundant data, handling missing values, and normalizing the data to ensure consistency. After pre-processing, the data is forwarded to the model training 235 to begin the training process or passed to the prediction module 240 for immediate predictions.
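The two pre-processing steps named above — handling missing values and normalizing — can be sketched as follows for a single KPI series. This is a minimal illustration under assumed conventions (missing values encoded as `None`, mean imputation, min-max normalization), not the disclosed pre-processing unit itself:

```python
def preprocess(series):
    """Fill missing values (None) with the series mean, then min-max
    normalize to [0, 1] so different KPIs share a consistent scale."""
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    filled = [mean if v is None else v for v in series]
    lo, hi = min(filled), max(filled)
    if hi == lo:
        # Constant series: normalization is degenerate, map to 0.
        return [0.0] * len(filled)
    return [(v - lo) / (hi - lo) for v in filled]

raw = [10.0, None, 30.0, 20.0]
print(preprocess(raw))
```

Other imputation and scaling strategies (interpolation, z-score standardization) would slot into the same place in the pipeline.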
[0070] The model training 235 is where the actual training of machine learning models occurs. Using the pre-processed data, various models are trained to identify patterns, trends, and relationships in the network's historical data. The models may use various machine learning algorithms. Once the data is pre-processed, the model training 235 trains the machine learning models to learn patterns for future forecasting. After training, the models are stored and made available for prediction tasks. Once the models are trained, they are passed on to the prediction module 240 for event forecasting.
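As one simple, non-limiting instance of learning a trend from historical data, a least-squares line can be fitted to a traffic series and then used to predict the next step. The function names and the toy series are assumptions; any regression or time-series algorithm could take this role in the pipeline:

```python
def fit_trend(values):
    """Fit a least-squares line y = slope * t + intercept to a series
    indexed by t = 0, 1, 2, ... -- the 'pattern' this toy model learns."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return slope, y_mean - slope * t_mean

def predict(model, t):
    slope, intercept = model
    return slope * t + intercept

traffic = [100.0, 110.0, 120.0, 130.0]  # steadily growing load
model = fit_trend(traffic)
print(predict(model, 4))  # forecast the next step
```

The fitted `(slope, intercept)` pair plays the role of the stored, trained model that the prediction module would later load.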
[0071] The prediction module 240 is responsible for making predictions based on the trained models. The prediction module 240 utilizes the trends and patterns learned during the training phase to forecast future network events such as, but not limited to, traffic spikes, outages, or system performance issues. The prediction module 240 computes outputs from either freshly trained models or pre-existing models stored in the database 220. The prediction module 240 also provides performance metrics like RMSE, accuracy, and other relevant evaluation metrics. Further, the computed outputs are passed to the graphic representation unit 250 for rendering. The forecasts generated by the prediction module 240 are then visualized for user consumption.
[0072] The pre-trained models 415 improve the forecasting process by quickly adapting existing knowledge from large databases 220 to the current data. During the model training 235, the pre-trained models 415 are fine-tuned to recognize unique patterns in the network environment. The pre-trained models 415 are then utilized in the prediction module 240 to forecast future events, such as traffic loads and potential outages. This integration enhances predictive accuracy and resource efficiency, enabling faster decision-making and scalability as the data sources 405 and network activity grow.
[0073] The graphic representation unit 250 visualizes the output from the prediction module 240. The graphical representation helps users understand the forecasts and trends through charts, graphs, and tables. The graphic representation unit 250 translates the tabular output predictions into graphical formats, making it easier for users to interpret the results visually. The graphic representation unit 250 supports visualization of predictions across multiple models. Once the output is rendered visually, it is available to users or administrators to make informed decisions regarding network resource management, load balancing, or any necessary adjustments to improve performance.
[0074] The database 220 is the central repository where all the data, including historical logs, the pre-trained models 415, and forecasting results is stored. The database 220 ensures that data is available for future retrieval and comparison. For instance, previous predictions or historical traffic data may be stored in the database 220 and used later for further analysis or re-training models. The database 220 also holds the results of the predictions, which may be compared to actual outcomes to assess model accuracy.
[0076] FIG. 5 is a flow diagram for forecasting the events in the network 105, according to one or more embodiments of the present invention.
[0077] At the step 505, the data source 405 is where the relevant data originates, which may include various inputs such as file data, network logs, user activity records, and performance metrics. The data is retrieved from the data source 405, which provides the necessary input for analysis. The data source 405 is crucial for informing the predictive models about past and present conditions within the network 105. The data source 405 acts as the starting point for the forecasting process, providing the raw data needed for further analysis.
[0078] At step 510, once the data is retrieved from the data source 405, it enters the data collection & preprocessing phase. The data collection & preprocessing step is essential for cleaning and transforming the data to ensure its quality and usability. Tasks performed during the data collection & preprocessing phase may include filtering out noise, handling missing values, normalizing data formats, and extracting key features necessary for effective model training. The data collection & preprocessing step aligns with the invention’s goal of ensuring high-quality data inputs.
[0079] At step 515, the pre-trained model 415 A and pre-trained model 415 B represent two distinct pre-trained machine learning models that have been trained on extensive datasets prior to the application. Each model uses different algorithms or approaches to make predictions based on the incoming data. The pre-trained model 415 A and pre-trained model 415 B process the pre-processed data simultaneously, generating their respective predictions.
[0080] At step 520, the model A tabular predictions & model B tabular predictions step involves the output generated by each of the pre-trained models 415. Each model produces its predictions in a structured tabular format that details the expected outcomes based on the provided data. These tabular outputs serve as an intermediary representation of the forecasting results, facilitating further analysis and comparison between the model A and model B tabular predictions.
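The parallel "model A / model B" tabular outputs can be sketched as two illustrative stand-in models emitting rows into one comparable structure. Both model functions and all field names here are assumptions made for the sketch:

```python
def model_a(t):
    return 100.0 + 10.0 * t        # stand-in: linear-trend model

def model_b(t):
    return 100.0 * 1.08 ** t       # stand-in: growth-rate model

def tabular_predictions(model, name, horizon):
    """One structured row per forecast step, tagged with the model name
    so the two models' outputs can be compared side by side."""
    return [{"model": name, "t": t, "forecast": round(model(t), 2)}
            for t in range(1, horizon + 1)]

table = tabular_predictions(model_a, "A", 3) + tabular_predictions(model_b, "B", 3)
for row in table:
    print(row)
```

Keeping both models' rows in one table with a `model` column is what makes the later aggregation and comparison step straightforward.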
[0081] At step 525, the model training 235 develops predictive models by learning from historical data that has been collected and pre-processed. The model training 235 adjusts the internal parameters of pre-trained models to recognize patterns, resulting in optimized models that may generate accurate predictions based on new data. The predictions are then compared and visualized, transforming raw data into actionable insights for forecasting future events. Further, the model output aggregates the predictions from both model A and model B. The model output aggregation is crucial for understanding the overall predictive performance of the models. The model output prepares the predictions for the comparative analysis, allowing users to assess which model provides better insights based on the specific data source 405.
[0082] At step 530, the final visual representation of all predictions creates the visual display of the predictions from model A and model B. The visual representation may take the form of graphs, charts, or dashboards, making the data easier to interpret. By visualizing the predictions, users may quickly compare and contrast the results from both model A and model B. The visual representation of all predictions aids in decision-making processes by providing a clear overview of the predictions and their implications for network management.
[0083] FIG. 6 is a flow diagram of a method 600 for forecasting the events in the network 105, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0084] At step 605, the method 600 includes the step of retrieving the data from the one or more data sources. The one or more data sources include at least one of, file input, data from source path, input stream, HTTP2, HDFS and data from NAS.
[0085] At step 610, the method 600 includes the step of training each of the plurality of models with the retrieved data. The step of training each of the plurality of models with the retrieved data includes the steps of identifying trends/patterns from the data and enabling each of the plurality of models to learn the identified patterns of the data. Further, the step of training each of the plurality of models further includes the steps of preprocessing the data, selecting one or more features for training each of the plurality of models, and configuring hyperparameters for each of the plurality of models.
[0086] At step 615, the method 600 includes the step of forecasting, utilizing each of the plurality of trained models, one or more events. The step of forecasting utilizing each of the plurality of trained models one or more events includes the step of forecasting, utilizing each of the plurality of trained models, the one or more events based on the input and the learnt trends/patterns of the data.
[0087] At step 620, the method 600 includes the step of generating the output for each of the trained models. The step of generating the output for each of the trained models includes at least one of: generating a training status list for each of the trained models including information of time of training; generating at least one of actual values, test values, and forecasted values for each of the trained models; and generating the accuracy and RMSE for each of the trained models.
[0088] At step 625, the method 600 includes the step of rendering the output generated for each of the trained models to the user. The step of rendering the output generated for each of the trained models to the user includes the step of displaying on the display device, the output generated, and the output generated is displayed on the display device in at least one of, the tabular view or the graphical view.
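The "tabular view" named above can be illustrated as a plain-text rendering of per-model output rows. The helper name and the sample rows are assumptions; an actual UI would use a widget or web table, but the row-and-column layout for comparing actual versus forecasted values is the same idea:

```python
def render_table(rows):
    """Render rows (list of dicts with identical keys) as an aligned
    plain-text table -- one possible 'tabular view' of the output."""
    headers = list(rows[0])
    widths = {h: max(len(h), *(len(str(r[h])) for r in rows)) for h in headers}
    header_line = " | ".join(h.ljust(widths[h]) for h in headers)
    separator = "-+-".join("-" * widths[h] for h in headers)
    body = [" | ".join(str(r[h]).ljust(widths[h]) for h in headers) for r in rows]
    return "\n".join([header_line, separator] + body)

rows = [
    {"model": "A", "actual": 120.0, "forecast": 118.5},
    {"model": "B", "actual": 120.0, "forecast": 123.1},
]
print(render_table(rows))
```

The same rows could equally feed a graphical view (e.g. a line chart of actual versus forecast), since both views are just presentations of the generated output.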
[0089] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to retrieve data from one or more data sources. The processor 205 is further configured to train each of the plurality of models with the retrieved data. The processor 205 is further configured to forecast utilizing each of the plurality of trained models, one or more events. The processor 205 is further configured to generate an output for each of the trained models. The processor 205 is further configured to render the output generated for each of the trained models to the user.
[0090] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0091] The present disclosure includes technical advancements in that the multi-model training framework simultaneously trains various predictive models on data, enhancing forecasting accuracy. The invention employs advanced data preprocessing to optimize data quality and generates comprehensive outputs, including training status, actual, test, and forecasted values, providing valuable insights for users. User-friendly rendering options in both tabular and graphical formats improve result interpretation, while dynamic event forecasting enables real-time predictions based on identified trends and patterns, making it ideal for applications requiring quick responses to changing network conditions.
[0092] The present invention offers multiple advantages, including accurate event forecasting through multiple trained models and integration of a wide range of data sources. The invention also optimizes model training with key feature selection and hyperparameter configuration. Additionally, the invention provides user-friendly visualization of actual and forecasted values and supports real-time decision-making by delivering actionable insights for improved network performance and resource management.
[0093] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.


REFERENCE NUMERALS
[0094] Environment- 100
[0095] User Equipment (UE)- 110
[0096] Server- 115
[0097] Network- 105
[0098] System -120
[0099] Processor- 205
[00100] Memory- 210
[00101] User interface- 215
[00102] Database – 220
[00103] Primary processor – 305
[00104] Primary memory – 310
[00105] Data integrator unit – 225
[00106] Pre-processing unit – 230
[00107] Model training unit - 235
[00108] Forecasting engine/ Prediction module - 240
[00109] Output generating unit - 245
[00110] Graphical representation - 250
[00111] Data source - 405
[00112] Pre-trained model - 415
CLAIMS
We Claim:
1. A method (600) for forecasting events in a network, the method (600) comprising the steps of:
retrieving, by the one or more processors (205), data from one or more data sources;
training, by the one or more processors (205), each of a plurality of models with the retrieved data;
forecasting, by the one or more processors, utilizing each of the plurality of trained models, one or more events;
generating, by the one or more processors (205), an output for each of the trained models; and
rendering, by the one or more processors (205), the output generated for each of the trained models to a user.

2. The method (600) as claimed in claim 1, wherein the one or more data sources include at least one of, file input, data from source path, input stream, Hypertext Transfer Protocol 2 (HTTP2), Distributed File System (DFS) and data from Network Access Server (NAS).

3. The method (600) as claimed in claim 1, wherein the step of, training, each of a plurality of models with the retrieved data, includes the steps of:
identifying, by the one or more processors (205), patterns from the data; and
enabling, by the one or more processors (205), each of the plurality of models to learn the identified patterns of the data.

4. The method (600) as claimed in claim 1, wherein prior the step of, training, each of a plurality of models, further includes the steps of:
preprocessing, by the one or more processors (205), the data;
selecting, by the one or more processors (205), one or more features for training each of the plurality of models; and
configuring, by the one or more processors (205), hyperparameters for each of the plurality of models.

5. The method (600) as claimed in claim 1, wherein the step of, forecasting, utilizing each of the plurality of trained models, one or more events, includes the step of:
forecasting, by the one or more processors (205), utilizing each of the plurality of trained models, the one or more events based on at least one of an input and the learnt patterns of the data, wherein the input is at least one of a date range.

6. The method (600) as claimed in claim 1, wherein the step of, generating, an output for each of the trained models include at least one of:
generating, a training status list for each of the trained models including information of time of training;
generating, at least one of, actual values, test values and forecasted values for each of the trained models; and
generating, one or more performance indicators for each of the trained models.

7. The method (600) as claimed in claim 1, wherein the step of, rendering, the output generated for each of the trained models to a user, includes the step of:
displaying, on a display device, the output generated, wherein the output generated is displayed on the display device in at least one of, a tabular view or a graphical view.

8. A system (120) for forecasting events in a network, the system (120) comprising:
a data integrator unit (225), configured to, retrieve, data from one or more data sources;
a model training unit (230), configured to, train, each of a plurality of models with the retrieved data;
a forecasting engine (240), configured to, forecast, utilizing each of the plurality of trained models, one or more events;
an output generating unit (245), configured to, generate, an output for each of the trained models; and
a graphical representation unit (250), configured to, render, the output generated for each of the trained models to a user.

9. The system (120) as claimed in claim 8, wherein the one or more data sources (405) include at least one of, file input, data from source path, input stream, Hypertext Transfer Protocol 2 (HTTP2), Distributed File System (DFS) and data from Network Access Server (NAS).

10. The system (120) as claimed in claim 8, wherein the model training unit (230), trains each of the plurality of models with the retrieved data by:
identifying, patterns from the data; and
enabling, each of the plurality of models to learn the identified patterns of the data.

11. The system (120) as claimed in claim 8, wherein prior the step of, training, each of a plurality of models, is further configured to:
preprocess, the data;
select, one or more features for training each of the plurality of models; and
configure, hyperparameters for each of the plurality of models.

12. The system (120) as claimed in claim 8, wherein the forecasting engine (235), forecasts, utilizing each of the plurality of trained models, one or more events, based on an input and the learnt patterns of the data, wherein the input is a data range.

13. The system (120) as claimed in claim 8, wherein the output generation unit (245) generates for each of the trained models include at least one of:
training status list for each of the trained models including information of time of training;
at least one of, actual values, test values and forecasted values for each of the trained models; and
information on one or more performance indicators for each of the trained models.

14. The system (120) as claimed in claim 8, wherein the graphical representation unit (250) renders, the output generated for each of the trained models to the user, by:
displaying, on a display device, the output generated, wherein the output generated is displayed on the display device in at least one of, a tabular view or a graphical view.

15. A User Equipment (UE) (110), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory (310) stores instructions which when executed by the one or more primary processors (305) causes the UE (110) to:
render, output generated by the one or more processors (205) for each of the trained models on the UE; and
select, at least one model among the rendered trained models based on an input provided by a user,
wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321067270-STATEMENT OF UNDERTAKING (FORM 3) [06-10-2023(online)].pdf 2023-10-06
2 202321067270-PROVISIONAL SPECIFICATION [06-10-2023(online)].pdf 2023-10-06
3 202321067270-FORM 1 [06-10-2023(online)].pdf 2023-10-06
4 202321067270-FIGURE OF ABSTRACT [06-10-2023(online)].pdf 2023-10-06
5 202321067270-DRAWINGS [06-10-2023(online)].pdf 2023-10-06
6 202321067270-DECLARATION OF INVENTORSHIP (FORM 5) [06-10-2023(online)].pdf 2023-10-06
7 202321067270-FORM-26 [27-11-2023(online)].pdf 2023-11-27
8 202321067270-Proof of Right [12-02-2024(online)].pdf 2024-02-12
9 202321067270-DRAWING [06-10-2024(online)].pdf 2024-10-06
10 202321067270-COMPLETE SPECIFICATION [06-10-2024(online)].pdf 2024-10-06
11 Abstract.jpg 2024-12-07
12 202321067270-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
13 202321067270-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
14 202321067270-Covering Letter [24-01-2025(online)].pdf 2025-01-24
15 202321067270-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
16 202321067270-FORM 3 [31-01-2025(online)].pdf 2025-01-31