
Method And System For Rendering Prediction Outputs On A User Interface (UI)

Abstract: The present invention relates to a system 108 and method 600 for rendering prediction outputs on a user interface (UI). The method 600 includes receiving real-time data from one or more data sources, applying the received data to one or more artificial intelligence/machine learning (AI/ML) models 212, generating real-time prediction outputs based on the application of the data to the AI/ML models, and rendering the prediction outputs via charts, graphs, and interactive elements on the UI. The present invention provides an intuitive representation and interactive exploration, allowing users to explore real-time predictions in different contexts, enabling deeper insights and better decision-making. Ref. Fig. 2


Patent Information

Application #
Filing Date
11 October 2023
Publication Number
16/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR RENDERING PREDICTION OUTPUTS ON A USER INTERFACE (UI)
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of object-driven data analysis and advanced prediction systems in the field of network management. More particularly, the present invention relates to a system, and a method thereof, for providing a user-friendly visualization of real-time prediction outputs.
BACKGROUND OF THE INVENTION
[0002] With the increase in the number of users, network service providers have been implementing upgrades to enhance service quality and keep pace with the high demand. To enhance the user experience and implement advanced monitoring mechanisms, prediction methodologies are being incorporated into network management. An advanced prediction system integrated with an AI/ML system excels at executing a wide array of algorithms and predictive tasks. The output of the integrated AI/ML models is observed by means of visualization platforms to assess accuracy. However, contemporary visualization techniques may not provide an intuitive representation of prediction outputs, making it hard for users to make informed decisions based on the data.

[0003] The existing systems offer static visualizations, failing to provide dynamic, interactive interfaces for exploring real-time predictions in different contexts. Moreover, contemporary visualization methods do not effectively communicate the uncertainty or confidence levels associated with predictions, potentially leading to misinterpretation and misjudgment. Users without a technical background face challenges in understanding and utilizing real-time predictions effectively.

[0004] Therefore, there is a requirement for a system, and a method thereof, that provides a user-friendly prediction output on a user interface.

SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a method and system for rendering prediction outputs on a user interface (UI).
[0006] In one aspect of the present invention, a method for rendering prediction outputs on a user interface (UI) is disclosed. The method includes the step of receiving, by one or more processors, real-time data from one or more data sources. Said one or more data sources include at least a probing unit. The received data in the receiving unit is pre-processed and standardized. The method further includes the step of applying, by the one or more processors, the received data to one or more artificial intelligence/machine learning (AI/ML) models. The method further includes the step of generating, by the one or more processors, the real-time prediction outputs based on the application of the received data to the AI/ML models, wherein each of the charts, the graphs, and the interactive elements is dynamically updated corresponding to receipt of the latest data.
[0007] In an embodiment, the method includes rendering, by the one or more processors, these prediction outputs through interactive visual elements, such as charts and graphs, on the user interface (UI). Further, in response to rendering the real time prediction outputs, the method comprises the step of receiving a user input to interact with the real time prediction outputs using the one or more processors. The method also ensures that each of the rendered visual elements dynamically updates in response to the latest data received, thereby providing users with current insights.
[0008] In an embodiment, the method encompasses receiving the user input to facilitate interaction with the real-time prediction outputs, enhancing user engagement and experience.
[0009] In an embodiment, the present invention provides a system for rendering real-time prediction outputs on a user interface (UI) utilizing artificial intelligence and machine learning (AI/ML) models.
[0010] In an embodiment of the present invention, the system comprises a receiving unit configured to gather real-time data from one or more data sources. The one or more data sources is at least one of a probing unit. The received data in the receiving unit is pre-processed and standardized. The system further includes an applying unit that is configured to utilize the AI/ML models to analyze the received data, generating real-time prediction outputs.
[0011] In an embodiment of the present invention, the system features a generating unit that is responsible for creating these prediction outputs based on the application of the received data to the AI/ML models. Further, a rendering unit displays the prediction outputs on the user interface (UI) through interactive visual elements such as charts and graphs, enhancing user understanding and engagement. The rendering unit is configured to dynamically update the graphs and the interactive elements corresponding to receipt of latest data. Further, the receiving unit is configured to receive a user input to interact with the real time prediction outputs.
[0012] In an embodiment, the system includes capabilities for preprocessing and standardizing the received data to improve its quality and usability. Each rendered visual element is designed to dynamically update in response to the latest data, providing users with current insights.
[0013] Furthermore, the system allows for user interaction, enabling users to engage with the real-time prediction outputs effectively, thereby enhancing their experience and decision-making capabilities.
[0014] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The instructions, when executed by a processor, configure the processor to receive real-time data from one or more data sources; apply the received data to one or more artificial intelligence/machine learning (AI/ML) models; generate real-time prediction outputs based on the application of the received data to the AI/ML models; and render the real-time prediction outputs via charts, graphs, and interactive elements on a user interface (UI).
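The receive/apply/generate/render flow summarized above can be sketched as a minimal pipeline. All names below (Prediction, receive, apply_model, render, the toy model and probe) are illustrative assumptions for exposition, not the claimed implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Prediction:
    metric: str
    value: float
    confidence: float

def receive(sources):
    # receiving step: gather one batch of samples from each data source
    return [sample for source in sources for sample in source()]

def apply_model(model: Callable, samples):
    # applying step: run the AI/ML model over the received data
    return [model(s) for s in samples]

def render(predictions):
    # rendering step: stand-in for the UI layer, formatting each prediction
    return [f"{p.metric}: {p.value:.1f} ({p.confidence:.0%} confidence)"
            for p in predictions]

# toy "model": predicts the next latency as 5% above the current sample
model = lambda s: Prediction("latency_ms", s * 1.05, 0.9)
probe = lambda: [40.0, 42.0]   # hypothetical probing-unit output
lines = render(apply_model(model, receive([probe])))
```

In a deployed system each stage would run continuously, with the rendered elements refreshed as each new batch arrives.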
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG. 1 is an exemplary block diagram of an environment for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present invention;
[0018] FIG. 2 is an exemplary block diagram of a system for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present invention;
[0019] FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention;
[0020] FIG. 4 is an exemplary architecture illustrating the flow for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present disclosure;
[0021] FIG. 5 is an exemplary signal flow diagram illustrating the flow for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present disclosure; and
[0022] FIG. 6 is a flow diagram of a method for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present invention.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] The present invention discloses a system and a method for rendering prediction outputs on a user interface (UI). More particularly, the system described herein provides a comprehensive approach for predicting future requirements and/or potential issues using real-time data sources, such as a probing unit. The prediction is based on applying an artificial intelligence/machine learning (AI/ML) model to analyze incoming data trends. This system captures real-time data, processes it, and generates intuitive visualizations to help users understand predictions and make informed decisions about potential issues.
[0028] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present invention. The environment 100 includes a user equipment (UE) 102, a server 104, a network 106, and a system 108. A user interacts with the system 108 utilizing the UE 102.
[0029] For the purpose of description and explanation, the description will be explained with respect to one or more user equipments (UEs) 102, or, to be more specific, with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0030] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as smartphones, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, and mainframe computer.
[0031] In addition to the aforementioned devices, the UEs may also encompass wearable technology such as smartwatches and fitness trackers, which can provide real-time data and notifications. Furthermore, Internet of Things (IoT) devices, including smart home appliances and connected sensors, may be considered part of the UEs. Moreover, specialized devices like industrial machines, robotics, and kiosks can serve as UEs.

[0032] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a public-switched telephone network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a third generation (3G), a fourth generation (4G), a fifth generation (5G), a sixth generation (6G), a new radio (NR), a narrow band internet of things (NB-IoT), an open radio access network (O-RAN), and the like.
[0033] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network may also utilize advanced technologies such as software-defined networking (SDN) and network function virtualization (NFV).
[0034] Moreover, the network 106 can integrate various protocols and frameworks, such as internet protocol (IP), transmission control protocol (TCP), and user datagram protocol (UDP).
[0035] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 104 may be operated by an entity that may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0036] The environment 100 further includes the system 108 communicably coupled to the server 104, and the UE 102 via the network 106. The system 108 is adapted to be embedded within the server 104 or is embedded as the individual entity.
[0037] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0038] FIG. 2 is an exemplary block diagram of the system 108 for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present invention.
[0039] As per the illustrated and preferred embodiment, the system 108 for rendering prediction outputs on a user interface (UI), includes one or more processors 202, a memory 204, and a probing unit 206. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0040] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204, as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to render prediction outputs on a user interface (UI). The memory 204 includes, by way of example but not limitation, various types of non-transitory storage media such as volatile memory, including Random Access Memory (RAM), and non-volatile memory, such as Solid-State Drives (SSD), Hard Disk Drives (HDD), Electrically Programmable Read-Only Memory (EPROM), and FLASH memory. The memory 204 may support a range of configurations and architectures, including dynamic RAM (DRAM), static RAM (SRAM), and hybrid memory solutions, depending on the specific operational requirements of the processor 202.
[0041] The memory 204 may also incorporate, by way of example but not limitation, advanced features such as error correction codes (ECC) that enhance data integrity by detecting and correcting potential data corruption during read and write operations. In addition, the memory 204 can utilize wear leveling techniques in solid-state memory to ensure even distribution of write and erase cycles, thereby extending the lifespan and reliability of the storage medium.
[0042] Moreover, the memory 204 is configured to implement various data organization methods, which may include hierarchical storage management and caching strategies that optimize access speeds and data retrieval efficiency. It may also include file systems such as NTFS, FAT32, or custom file structures designed to facilitate rapid data access while maintaining order and organization within the stored data.
[0043] The memory 204 may also support the execution of complex algorithms and computational routines, enabling efficient data processing and application hosting. This includes the ability to manage large datasets and perform real-time data analysis, which is critical for applications requiring high performance and responsiveness.
[0044] Furthermore, the memory 204 can integrate security features such as data encryption and secure erase functionalities to protect sensitive information from unauthorized access and to ensure compliance with security standards. This combination of capabilities allows the memory 204 to serve as a robust foundation for the operational needs of the system, supporting the processor 202 in its execution of tasks within the networked environment.
[0045] As per the illustrated embodiment, the probing unit 206 is configured to collect and process data associated with the operations performed in the network 106. The probing unit 206 includes, by way of example but not limitation, a variety of devices configured to collect and process data associated with operations performed in the network 106. This may encompass Internet of Things (IoT) sensors, environmental monitors, user interaction trackers, telemetry devices, and data loggers. Specific examples of probing units may include temperature sensors, humidity sensors, motion detectors, GPS trackers, RFID readers, smart meters, and network performance monitors. The probing unit 206 can also include application performance management (APM) tools and user devices such as smartphones, tablets, and laptops. These examples of probing unit 206 types are non-limiting and may not be mutually exclusive; for instance, a single device could function both as a telemetry device and an environmental monitor.
[0046] In one embodiment, the probing unit 206, such as a network monitoring sensor, is a standard-based device that enables real-time data collection related to network performance, user activity, or environmental conditions. The probing unit 206 plays a crucial role within the network 106, continuously gathering and preprocessing data from various sources. This includes input from user devices, environmental factors, and operational metrics. Specifically, the probing unit 206 provides real-time insights by capturing relevant information, which may include, but is not limited to, temperature, bandwidth usage, user interactions, error rates, device statuses, location data, and system logs within the network 106.
[0047] For example, a probing unit 206 could be a network performance sensor strategically deployed throughout the infrastructure. This sensor continuously monitors key performance metrics such as signal strength, data throughput, and latency. When the probing unit 206 detects a drop in signal quality in a specific area, it collects this data in real-time and transmits it to the central management system for further analysis and action.
[0048] Furthermore, the probing unit 206 may integrate data from external sources such as weather APIs, social media feeds, and market analytics to enrich the data landscape. By leveraging diverse data inputs, the probing unit 206 enhances the overall predictive capabilities of the system, allowing for a more comprehensive understanding of network dynamics and user behavior.
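As an illustration of the threshold-triggered reporting described above, the following sketch models a probing unit that transmits a report only when a reading drops below a quality floor. The class, the dBm threshold, and the report format are hypothetical choices for this example, not elements of the specification:

```python
from dataclasses import dataclass, field

@dataclass
class ProbingUnit:
    """Hypothetical probing unit: samples a metric and reports
    only when a reading crosses the configured quality floor."""
    threshold: float
    reports: list = field(default_factory=list)

    def observe(self, signal_strength: float):
        if signal_strength < self.threshold:
            # drop detected: transmit to the central management system
            self.reports.append({"metric": "signal",
                                 "value": signal_strength})

probe = ProbingUnit(threshold=-85.0)   # assumed dBm floor
for reading in [-70.0, -72.0, -90.0, -71.0]:
    probe.observe(reading)
```

Only the -90.0 dBm sample crosses the floor, so only that reading is forwarded for further analysis and action.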
[0049] As per the illustrated embodiment, the system 108 includes the processor 202 to render prediction outputs on a user interface (UI). The processor 202 includes a receiving unit 208, an applying unit 210, an artificial intelligence/machine learning (AI/ML) model 212, a generating unit 214 and a rendering unit 216. The processor 202 is communicably coupled to the one or more components of the system 108 such as the probing unit 206, and the memory 204. In an embodiment, operations and functionalities of the receiving unit 208, applying unit 210, artificial intelligence/machine learning (AI/ML) model 212, generating unit 214, rendering unit 216, and the one or more components of the system 108 can be used in combination or interchangeably.
[0050] In an embodiment, the receiving unit 208 of the processor 202 is configured to receive real-time data from one or more data sources within the network 106. Said data source is the probing unit 206. The received data in the receiving unit 208 is pre-processed and standardized. In particular, the processor 202 may include a normalizer to preprocess the received data. The normalizer performs at least one of, but not limited to, data normalization. The data normalization is the process of at least one of, but not limited to, reorganizing the received data, removing redundant data within the received data, formatting the received data, and removing null values from the received data. The main goal of the normalizer is to achieve a standardized data format across the entire system 108. The normalizer ensures that the normalized data is stored appropriately in the probing unit 206 for subsequent retrieval and analysis. In one embodiment, the data received by the receiving unit 208 is normalized by the normalizer of the processor 202.
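The normalization steps named above (removing redundant records, dropping null values, and standardizing format) might look roughly like the following sketch. The record schema and field names are assumptions made for illustration:

```python
def normalize(records):
    """Illustrative normalizer: drop records with null values, remove
    duplicates, and coerce fields to a standard format (assumed schema)."""
    seen, out = set(), []
    for rec in records:
        # remove records containing null values
        if any(v is None for v in rec.values()):
            continue
        # remove redundant (duplicate) records, keyed by source + timestamp
        key = (rec["source"], rec["timestamp"])
        if key in seen:
            continue
        seen.add(key)
        # standardize formatting: lowercase source, numeric metric value
        out.append({"source": rec["source"].lower(),
                    "timestamp": rec["timestamp"],
                    "value": float(rec["value"])})
    return out

raw = [
    {"source": "Probe-1", "timestamp": 1, "value": "42"},
    {"source": "Probe-1", "timestamp": 1, "value": "42"},   # duplicate
    {"source": "Probe-2", "timestamp": 2, "value": None},   # null value
]
clean = normalize(raw)
```

The duplicate and the null-valued record are discarded, leaving one standardized record for storage and analysis.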
[0051] In one embodiment, the data is at least one of, but not limited to, the output data of probing units and the like. For example, the output data may be data pertaining to network address translation (NAT) presence, packet filtering behavior, packet loss rates and latency, incoming/outgoing traffic, and the like.
[0052] The receiving unit 208 may also incorporate mechanisms for monitoring real-time changes in network conditions and user interactions. For example, it can track fluctuations in network traffic, detect new customer activations, or monitor service cancellations. This real-time data collection allows for dynamic adjustments to network resources and helps inform operational decisions based on current network usage.
[0053] In another embodiment, the receiving unit 208 may utilize message queuing protocols like MQTT or AMQP to receive data from data sources such as xprobe, vprobe, and the like. These protocols allow for asynchronous communication, where messages can be sent and stored until the receiving unit is ready to process them, ensuring that no data is lost during peak loads.
[0054] In an alternate embodiment, the receiving unit 208 may utilize an event-driven architecture where the probing unit sends notifications or alerts when specific events occur. This technique allows the receiving unit 208 to react promptly to changes in the network, such as security breaches or performance issues. In an alternate embodiment, the receiving unit 208 may leverage network telemetry techniques to continuously monitor and collect data from the data source. This involves using protocols like ICMP for status messages or SNMP for network management, enabling real-time insights into network health and performance. This method is particularly useful for integrating various services and ensuring that the receiving unit can access the latest data as needed.
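The buffering behavior attributed above to MQTT/AMQP-style queuing can be illustrated with a simple in-process bounded queue; a real deployment would use a broker client library rather than this toy stand-in:

```python
import queue
import threading

# Bounded buffer standing in for a broker queue: probes publish
# asynchronously, and the receiving unit drains messages when ready,
# so nothing is lost during a burst (up to the buffer capacity).
buffer = queue.Queue(maxsize=1000)

def probe_publish(n):
    # probing-unit side: emit n telemetry messages
    for i in range(n):
        buffer.put({"seq": i, "latency_ms": 40 + i % 5})

def receiver_drain():
    # receiving-unit side: process whatever has accumulated
    received = []
    while not buffer.empty():
        received.append(buffer.get())
    return received

t = threading.Thread(target=probe_publish, args=(100,))
t.start()
t.join()                      # wait for the burst to finish
messages = receiver_drain()
```

Decoupling publisher and consumer through the queue is what lets the receiving unit survive peak loads without dropping data.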
[0055] In an embodiment, the applying unit 210 of the processor 202 is configured to apply the received data to one or more artificial intelligence/machine learning (AI/ML) models 212. The applying unit 210 is responsible for processing the normalized data provided by the receiving unit 208 and utilizing it to train, test, or make predictions with the AI/ML models 212.
[0056] The applying unit 210 may perform steps that include processing the data and identifying any patterns or anomalies. By analyzing the temporal or sequential aspects of the data, the applying unit 210 uncovers insights such as seasonal trends, cyclical behaviors, or sudden shifts in data patterns. This capability is particularly valuable in network security, where understanding normal behavior is critical for timely anomaly detection.
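One simple way an applying unit might flag a sudden shift is a z-score rule: mark any point far from the mean in standard-deviation units. This is only an illustrative choice of technique, with an assumed threshold and toy data:

```python
import statistics

def detect_anomalies(series, threshold=2.5):
    """Flag indices more than `threshold` standard deviations from the
    mean; a minimal stand-in for the anomaly detection described above."""
    mean = statistics.fmean(series)
    stdev = statistics.pstdev(series)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# steady latency readings with one sudden spike at index 6
latency = [40, 41, 39, 40, 42, 41, 400, 40, 39, 41]
anomalies = detect_anomalies(latency)
```

Production systems would typically use rolling windows or learned baselines rather than a global mean, but the principle is the same.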
[0057] Furthermore, the applying unit 210 may facilitate feature selection and engineering, where relevant features are extracted from the incoming data to improve the model's accuracy. For example, a retail company might use the applying unit 210 to extract features such as purchase history, customer demographics, and seasonal trends to better forecast demand for specific products.
[0058] In addition, the applying unit 210 can encompass various methodologies for model evaluation and validation. This may include cross-validation techniques to assess model performance and ensure that the predictions are robust and reliable. For example, a healthcare provider might use the applying unit 210 to validate predictive models for patient readmissions, ensuring that the insights derived from patient data lead to effective intervention strategies.
[0059] By systematically applying the received data to the AI/ML model 212, the applying unit 210 contributes to enhanced decision-making and improved operational outcomes within the network 106, ultimately driving better service delivery and customer satisfaction. While training, the AI/ML model 212 tracks and monitors the received data pertaining to the operation of the network 106. Further, the AI/ML model 212 learns at least one of, but not limited to, trends and patterns associated with the operation of the network 106. For example, the system 108 selects an appropriate AI/ML model 212, such as at least one of, but not limited to, a neural network or a decision tree logic, from a set of available options of the AI/ML model 212. Thereafter, the selected AI/ML model 212 is trained using the normalized data. In one embodiment, the selected AI/ML model 212 is trained on historical data associated with the operation of the network 106.
[0060] Said artificial intelligence/machine learning (AI/ML) model 212 resides within the processor 202 and is designed for advanced data analysis in the network 106. The AI/ML model 212 includes various algorithms and techniques, such as supervised learning, unsupervised learning, and reinforcement learning, to derive insights and predictions from the processed data.
[0061] The AI/ML model 212 further encompasses steps for training on large datasets, allowing it to learn patterns and relationships within the data. This training process may involve optimizing model parameters through techniques such as gradient descent, regularization, and hyperparameter tuning to enhance model performance and accuracy.
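The gradient-descent parameter optimisation mentioned above can be illustrated with a toy one-parameter model. This is a generic textbook sketch, not the specification's training procedure: it fits a slope `w` so that `y = w*x` matches data generated with a true slope of 2.0.

```python
# Toy gradient descent: minimise mean squared error over a single weight.
def train_slope(xs, ys, lr=0.01, epochs=500):
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradient of MSE = (1/n) * sum((w*x - y)^2) with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad  # step against the gradient
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # exactly y = 2x
w = train_slope(xs, ys)
print(round(w, 3))  # converges to the true slope, 2.0
```

Regularization and hyperparameter tuning, also mentioned above, would add penalty terms to the loss and search over values such as `lr` and `epochs`, respectively.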
[0062] Additionally, the AI/ML model 212 supports feature selection and engineering, which are crucial for improving predictive capabilities by identifying the most relevant features within the dataset. This process involves statistical methods and domain expertise to ensure that the model captures essential characteristics of the data.
[0063] Further, the AI/ML model 212 is also configured to perform real-time inference, enabling it to generate predictions or classifications based on incoming data streams. This capability allows for immediate responsiveness in dynamic environments, enhancing operational decision-making.
[0064] The generating unit 214 of the processor 202 is configured to generate the real time prediction outputs based on the application of the received data on the AI/ML model 212. Upon training, the generating unit 214 utilizes the model to apply incoming data and generate actionable insights or predictions relevant to specific use cases. The generating unit 214 may employ techniques such as batch processing or stream processing, depending on the nature of the incoming data and the application requirements. For instance, in a streaming context, the unit can provide immediate predictions, such as detecting anomalies in the network as they occur.
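The streaming-prediction idea above can be sketched with a rolling z-score detector that flags a sample the moment it arrives. This is one common stream-processing technique, chosen here for illustration; the specification does not name the detection method used.

```python
# Hypothetical streaming detector: a value is anomalous if it deviates
# from the rolling window's mean by more than `threshold` standard deviations.
from collections import deque
import math

class StreamAnomalyDetector:
    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)  # recent history only
        self.threshold = threshold

    def observe(self, value):
        """Returns True if `value` deviates sharply from recent history."""
        anomalous = False
        if len(self.window) >= 5:  # wait for a minimal baseline
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9  # guard against zero spread
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous

det = StreamAnomalyDetector()
flags = [det.observe(v) for v in [10, 11, 10, 12, 11, 10, 11, 95]]
print(flags[-1])  # the spike to 95 is flagged the moment it occurs
```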
[0065] In an embodiment, the generating unit 214 may integrate with APIs to facilitate seamless data exchange and prediction delivery. This integration allows for real-time alerts and notifications to stakeholders when certain thresholds or conditions are met, enhancing decision-making capabilities.
[0066] Moreover, the generating unit 214 can support multiple output formats, ensuring compatibility with various downstream applications. For instance, it can provide JSON or XML responses for integration with web services or graphical visualizations for user interfaces, aiding in the interpretation and utilization of the predictions.
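The multi-format delivery described above can be sketched with the standard library's `json` module and a minimal hand-rolled XML serializer. The field names are illustrative assumptions, not the system's actual output schema.

```python
# Hypothetical formatter: serialise one prediction record as JSON (for web
# services) or as flat XML (for legacy integrations).
import json

def format_prediction(prediction, fmt="json"):
    """Serialises a prediction dict for downstream consumers."""
    if fmt == "json":
        return json.dumps(prediction, sort_keys=True)
    if fmt == "xml":  # minimal element-per-field XML for flat records
        fields = "".join(f"<{k}>{v}</{k}>" for k, v in sorted(prediction.items()))
        return f"<prediction>{fields}</prediction>"
    raise ValueError(f"unsupported format: {fmt}")

pred = {"metric": "packet_loss", "value": 0.02, "alert": False}
print(format_prediction(pred))          # JSON for web services
print(format_prediction(pred, "xml"))   # XML for legacy integrations
```

A real deployment would likely use a schema-aware XML library rather than string formatting, but the round trip shown here captures the compatibility point.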
[0067] In the context of IoT applications, the generating unit 214 can utilize data from connected devices to produce insights on environmental conditions, equipment status, or user interactions. This enables proactive maintenance alerts or smart adjustments in automated systems, contributing to enhanced operational efficiency.
[0068] Further, the generating unit 214 may leverage dashboarding and reporting tools to present the generated predictions effectively. This may include visual analytics platforms that allow users to interact with the prediction data, facilitating deeper insights and informed decision-making across various organizational levels.
[0069] In an embodiment, the rendering unit 216 is configured to render the real time prediction outputs via charts, graphs, and interactive elements on the UI. Each of the charts, the graphs, and the interactive elements is dynamically updated corresponding to receipt of the latest data. In an embodiment, said rendering unit 216 is responsible for visualizing the real-time prediction outputs generated by the generating unit 214 of the processor 202. This unit transforms the prediction data into user-friendly formats, such as charts, graphs, and interactive elements, facilitating effective interpretation and decision-making.
[0070] In an embodiment, the charts and graphs may be line charts, bar charts, pie charts, or scatter plots. Further, the user-friendly display may have an interactive dashboard that provides real-time insights through interactive elements. The interactive dashboard allows the user to filter data, select variables, and view dynamic visualizations. Further, the display may have features of sliders and filters for allowing the users to manipulate parameters (e.g., adjusting thresholds) and see how results change in real-time.
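The slider/filter behaviour described above (adjusting a threshold and seeing the results change in real time) reduces to re-filtering the prediction data on each parameter change. The function and field names below are illustrative; a real UI would bind this logic to a slider widget.

```python
# Hypothetical sketch: moving a threshold slider re-filters which
# predictions appear on the dashboard.
def filter_by_threshold(predictions, threshold):
    """Returns only the predictions whose score meets the threshold."""
    return [p for p in predictions if p["score"] >= threshold]

predictions = [
    {"label": "anomaly-A", "score": 0.92},
    {"label": "anomaly-B", "score": 0.55},
    {"label": "anomaly-C", "score": 0.31},
]

# Moving the slider from 0.5 to 0.9 narrows the view immediately.
print(len(filter_by_threshold(predictions, 0.5)))  # 2 items shown
print(len(filter_by_threshold(predictions, 0.9)))  # 1 item shown
```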
[0072] In one embodiment, the rendering unit 216 may include at least one of, but not limited to, interactive visualization components, dynamic reporting tools, and analytics dashboards. Specific techniques may include real-time data binding, animation effects for transitions, and customizable themes to enhance the user experience and ensure insights are presented effectively.
[0073] The rendering unit 216 may employ various visualization techniques, such as line charts, bar graphs, heat maps, and scatter plots, depending on the nature of the data. For instance, time-series data might be best represented through line charts that show trends over time.
[0074] Moreover, the rendering unit 216 can incorporate real-time updates, ensuring users have access to the most current data. For instance, in a smart city monitoring dashboard, traffic data visualizations can be refreshed in real-time to reflect current conditions, aiding traffic management efforts.
[0076] The receiving unit 208, applying unit 210, the AI/ML model 212, generating unit 214 and the rendering unit 216, in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0077] FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for rendering prediction outputs on a user interface (UI). It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the UE 102 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0078] FIG. 3 shows communication between the UE 102, and the system 108. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the UE 102, uses network protocol connection to communicate with the system 108. In an embodiment, the network protocol connection is the establishment and management of communication between the UE 102, and the system 108 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols. The network protocol connection includes, but is not limited to, session initiation protocol (SIP), system information block (SIB) protocol, transmission control protocol (TCP), user datagram protocol (UDP), file transfer protocol (FTP), hypertext transfer protocol (HTTP), simple network management protocol (SNMP), internet control message protocol (ICMP), hypertext transfer protocol secure (HTTPS), terminal network (TELNET), post office protocol (POP3), internet message access protocol (IMAP), secure socket layer (SSL), transport layer security (TLS), dynamic host configuration protocol (DHCP), remote desktop protocol (RDP), network file system (NFS), lightweight directory access protocol (LDAP), real-time transport protocol (RTP), network time protocol (NTP), ethernet protocol, wireless fidelity (Wi-Fi) protocols (IEEE 802.11), file transfer protocol secure (FTPS), simple mail transfer protocol (SMTP), point-to-point protocol (PPP), internet control message protocol version 6 (ICMPv6), multicast domain name system (mDNS), extensible messaging and presence protocol (XMPP), secure copy protocol (SCP), session description protocol (SDP), internet group management protocol (IGMP), address resolution protocol (ARP), network file sharing protocol (SMB/CIFS), and lightweight directory access protocol version 3 (LDAPv3).
[0079] In an embodiment, the UE 102 includes a primary processor 302, and a memory 304 and a user interface (UI) 306. In alternate embodiments, the UE 102 may include more than one primary processor 302 as per the requirement of the network 106. The primary processor 302 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), system on chip (SoC) devices, graphics processing units (GPUs), neural processing units (NPUs), embedded processors, smartphone application processors, digital signal controllers, real-time processing units, high-performance computing processors, cloud computing processors, multi-core processors, application processors, RISC (reduced instruction set computing) processors, CISC (complex instruction set computing) processors, and/or any devices that manipulate signals based on operational instructions.
[0080] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to manage operations in the network 106. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0081] In an embodiment, the user interface (UI) 306 includes a variety of interfaces, for example, a graphical user interface (GUI), a web user interface (Web UI), a command line interface (CLI), a voice user interface (VUI), a touch interface, an application programming interface (API), an augmented reality interface (AR), a virtual reality interface (VR), a natural language interface, a tactile interface, a multi-modal interface, a mobile application interface, a desktop application interface, a kiosk interface, a remote control interface, a game controller interface, a wearable device interface, a sensor-based interface, and the like. The user interface (UI) 306 allows the user to transmit the request to the system 108 for performing the operation. In one embodiment, the user may include at least one of, but not limited to, a network operator.
[0082] In accordance with the exemplary embodiment, let us assume the probing unit 206 is hosted on the server 104. The probing unit 206 is configured to respond to all the requests received from the UE 102. Based on the requests, the probing unit 206 performs an operation such as storing customer details. Further, the data pertaining to the operation performed by the probing unit 206 is stored in the defined standard format. Further, the receiving unit 208 is configured to receive the data stored in the probing unit 206. Upon receiving the data, the AI/ML model 212 is trained utilizing the received data, and the generating unit 214 is configured to generate the real time prediction outputs based on the application of the received data on the AI/ML model 212; and the rendering unit 216 is configured to render the real time prediction outputs via charts, graphs, and interactive elements on the UI.
[0083] As mentioned earlier in FIG.2, the system 108 includes the processors 202, the memory 204, and the probing unit 206 for rendering prediction outputs on a user interface (UI), which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0084] Further, as mentioned earlier the processor 202 includes the receiving unit 208, the applying unit 210, the AI/ML model 212, the generating unit 214, and rendering unit 216 which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0085] FIG. 4 is an architecture illustrating the flow for rendering prediction outputs on the user interface (UI), according to one or more embodiments of the present disclosure.
[0086] In one embodiment, the architecture 400 includes a raw data source 402, a real time data training & predictions model 404, an output data generation and formatter 406, a database (DB) 408, an interactive visualization tool 410, a workflow management module (WFM) 412, and the user interface 306. Said raw data source 402 is the probing unit 206. The real time data training & predictions model 404 is configured to receive the data from the data source 402 and the received data is pre-processed and standardized. In one embodiment, the data is at least one of, but not limited to the output data of probing units and the like. For example, the output data may be data pertaining to network address translation (NAT) presence, packet filtering behavior, packet loss rates and latency, incoming/outgoing traffic and the like.
[0087] Further, the real time data training & predictions model 404 is trained by utilizing the received data and learns trends and patterns from the received data. For example, the system 108 selects an appropriate AI/ML model 212, such as at least one of, but not limited to, a neural network or a decision tree logic, from a set of available options of the AI/ML model 212. Thereafter, the selected AI/ML model 212 is trained using the received data. Thereafter, the real time data training & predictions model 404 analyses the generated historical trends and the current trends associated with the operation of the network 106 utilizing the trained AI/ML model 212 to detect a pattern between the generated historical trends and the current trends.
[0088] In the next step, the output data generation and formatter 406 is configured to generate the real time prediction outputs and support multiple output formats, ensuring compatibility with various downstream applications. For instance, it can provide JSON or XML responses for integration with web services or graphical visualizations for user interfaces, aiding in the interpretation and utilization of the predictions.
[0089] In an embodiment, the database 408 is a distributed data lake used to store the processed data and model outputs. Said database 408 serves as the central hub for all incoming data, providing a unified and accessible source for analysis. In an exemplary embodiment, an appropriate data normalizer may perform normalization to adjust the data values to a common scale without distorting differences in the ranges of values, encoding to convert categorical data into numerical formats that can be easily processed by a machine learning model, and structuring to organize the data into a predefined schema or structure, such as tables or arrays, which is essential for efficient querying and analysis.
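The three preparation steps just described (normalization, encoding, structuring) can be sketched in plain Python. The field names (`latency`, `region`) and values are illustrative assumptions, not data from the specification.

```python
# Hypothetical sketch of the normalizer's three steps.
def min_max_normalize(values):
    """Scales values to [0, 1] without distorting their relative ranges."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant data
    return [(v - lo) / span for v in values]

def encode_categories(labels):
    """Maps categorical labels to integer codes for ML consumption."""
    mapping = {label: i for i, label in enumerate(sorted(set(labels)))}
    return [mapping[l] for l in labels], mapping

latencies = [10.0, 30.0, 50.0]
regions = ["east", "west", "east"]

norm = min_max_normalize(latencies)          # normalization
codes, mapping = encode_categories(regions)  # encoding

# Structuring: organise the results into a row-per-record schema.
rows = [{"latency": n, "region": c} for n, c in zip(norm, codes)]
print(rows)
```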
[0090] Thereafter, the processed data from the real time data training & predictions model 404 and the output data generation and formatter 406 is transmitted to the interactive visualization tool 410, which transforms the data into visual formats such as graphs and charts. The visualized data is passed through the workflow management module (WFM) 412, which coordinates how the prediction data is delivered to the user interface (UI) 306. Said WFM 412 is a crucial component in systems that automate and streamline processes, ensuring that tasks are executed efficiently and in the correct order.
[0091] For example, the interactive visualization tool 410 dynamically updates the user interface (UI) 306 with predicted trends, while the workflow management module (WFM) 412 ensures that the predictions are displayed accurately. The user interface 306 enables users to view, analyze, and interact with these predictions, facilitating real-time insights based on the processed data.
[0092] FIG. 5 is a signal flow diagram illustrating the flow for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present disclosure.
[0093] At step 502, the UE 102 transmits the request to the probing unit 206 in order to perform operations related to rendering prediction outputs on a user interface (UI). For example, the operation may be storing information of the plurality of customers while adding the plurality of customers in the network 106.
[0094] At step 504, the probing unit 206 stores the data related to the operations in the appropriate format. Said data is at least one of, but not limited to, the output data of probing units and the like. For example, the output data may be data pertaining to network address translation (NAT) presence, packet filtering behavior, packet loss rates and latency, incoming/outgoing traffic, and the like. Further, the probing unit 206 includes, by way of example but not limitation, a variety of devices configured to collect and process data associated with operations performed in the network 106. For example, the probing unit 206 may identify an operating system (e.g., Windows, Linux) and version based on network responses; gather information about routers, switches, and firewalls that might be part of the network path; and gather data on network interfaces, including IP/MAC addresses, bandwidth usage, and packet statistics.
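The "appropriate format" for probe output mentioned above can be sketched as a typed record covering the example fields (NAT presence, packet loss, latency, traffic counters). The class name, field names, and values are hypothetical; the specification does not define the exact schema.

```python
# Hypothetical standard format for one probe measurement, expressed as a
# dataclass so every record carries the same typed fields.
from dataclasses import dataclass, asdict

@dataclass
class ProbeRecord:
    host: str
    nat_present: bool
    packet_loss_pct: float
    latency_ms: float
    bytes_in: int
    bytes_out: int

record = ProbeRecord(host="10.0.0.7", nat_present=True,
                     packet_loss_pct=0.4, latency_ms=18.2,
                     bytes_in=120_000, bytes_out=48_500)
print(asdict(record))  # a uniform dict ready for storage or transmission
```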
[0095] At step 506, the probing unit 206 transmits the relevant data to the receiving unit 208 of the processor 202. In an alternate embodiment, the processor 202 receives the data related to the operations from the probing unit 206. The received data in the receiving unit 208 is pre-processed and standardized in the appropriate format. Further, the applying unit 210 of the processor 202 is configured to apply the received data to one or more artificial intelligence/machine learning (AI/ML) models 212. The applying unit 210 is responsible for processing the normalized data provided by the receiving unit 208 and utilizing it to train, test, or make predictions with the AI/ML models 212.
[0096] At step 508, the processor 202 trains the AI/ML model 212 utilizing the received data. While training, the AI/ML model 212 tracks and monitors the received data pertaining to the operation of the network 106. Further, the AI/ML model 212 learns at least one of, but not limited to, trends and patterns associated with the operation of the network 106. For example, the system 108 selects an appropriate AI/ML model 212, such as at least one of, but not limited to, a neural network or decision tree logic, from a set of available options of the AI/ML model 212. Thereafter, the selected AI/ML model 212 is trained using the normalized data. In one embodiment, the selected AI/ML model 212 is trained on historical data associated with the operation of the network 106.
[0097] At step 510, the processor 202 generates the real time prediction outputs, where the generating unit 214 utilizes the model to apply incoming data and generate actionable insights or predictions relevant to specific use cases. The generating unit 214 may employ techniques such as batch processing or stream processing, depending on the nature of the incoming data and the application requirements. For instance, in a streaming context, the unit can provide immediate predictions, such as detecting anomalies in financial transactions as they occur, which is critical for fraud prevention.
[0098] At step 512, the processor 202 renders the real time prediction outputs via charts, graphs, and interactive elements on the UI. The user equipment 102 displays the predicted future trends to the network operator via the UI 306. This step is responsible for visualizing the real-time prediction outputs generated by the processor 202, transforming the prediction data into user-friendly formats, such as charts, graphs, and interactive elements, facilitating effective interpretation and decision-making.
[0099] FIG. 6 is a flow diagram of a method 600 for rendering prediction outputs on a user interface (UI), according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[00100] At step 602, the method 600 includes the step of receiving, by one or more processors 202, real time data from one or more data sources. Said one or more data sources is the probing unit 206. The data received in the receiving unit is pre-processed and standardized.

[00101] At step 604, the method 600 includes the step of applying, by the one or more processors, the received data to one or more artificial intelligence/machine learning (AI/ML) models 212. The artificial intelligence/machine learning (AI/ML) model 212 utilizes the received data pertaining to the operation of the network 106. In particular, subsequent to receiving the data from the probing unit 206, the applying unit 210 applies the received data to the AI/ML model 212. The AI/ML model 212 identifies the trends and patterns pertaining to the operation of the network 106 from the received data.

[00102] At step 606, the method 600 includes the step of generating, by the one or more processors, the real time prediction outputs based on the application of the received data on the AI/ML model 212. In one embodiment, the generating unit 214 utilizes the trained AI/ML model 212 to generate the historical and the current trends associated with the operation of the network 106. For example, the generating unit 214 generates trends pertaining to the number of customers added in the network 106 in the last three months and the number of customers added in the network 106 in the current month. Thereafter, utilizing the trained AI/ML model 212, the generating unit 214 analyses the historical and the current trends associated with the operation of the network 106 to detect the pattern between the historical trends and the current trends. For example, by comparing the number of customers added in the network 106 in the last three months with the number of customers added in the network 106 in the current month, the generating unit 214 detects the pattern pertaining to the increasing number of customers added to the network 106 every month. Based on the detected pattern, the generating unit 214 predicts the future trends corresponding to at least one of, but not limited to, the overall growth of the network 106 and the circle-wise growth of the network 106.
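The customer-growth example in step 606 can be worked through numerically: compare recent monthly additions, detect the month-over-month pattern, and project the next month. The figures and the simple mean-delta projection are illustrative assumptions, not the model the specification uses.

```python
# Hypothetical projection: detect the average month-over-month change in
# customer additions and extend it one month forward.
def project_next(monthly_additions):
    """Projects next month's additions from the mean monthly change."""
    deltas = [b - a for a, b in zip(monthly_additions, monthly_additions[1:])]
    avg_delta = sum(deltas) / len(deltas)
    return monthly_additions[-1] + avg_delta

history = [1200, 1350, 1500]   # customers added in the last three months
current = 1650                 # customers added in the current month
forecast = project_next(history + [current])
print(forecast)  # the steady +150/month pattern projects 1800.0 next month
```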
[00103] At step 608, the method 600 includes rendering, by the one or more processors, the real time prediction outputs via charts, graphs, and interactive elements on the UI, where each of the charts, the graphs and the interactive elements are dynamically updated corresponding to receipt of latest data. Further, in response to rendering the real time prediction outputs, the method comprises the step of receiving a user input to interact with the real time prediction outputs by the one or more processors. For instance, trends regarding customer growth may be depicted using line graphs to show the increase in customers over time, enabling operators to quickly grasp changes and patterns. The rendering unit 216 may provide comparative visualizations, highlighting differences between historical data and projected future trends. For example, side-by-side bar charts can illustrate the number of customers added each month, facilitating an immediate understanding of growth dynamics.
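The side-by-side comparison described above can be sketched even without a charting library, as a proportional text bar chart; a production rendering unit 216 would of course use graphical charts. The month labels and counts are illustrative.

```python
# Hypothetical text rendering of a comparative bar chart: historical
# months next to a projected month (marked with '*').
def bar_chart(series, scale=100):
    """Renders label/value pairs as proportional text bars."""
    lines = []
    for label, value in series:
        bar = "#" * (value // scale)  # one '#' per `scale` customers
        lines.append(f"{label:>10} | {bar} {value}")
    return "\n".join(lines)

comparison = [("Jan", 1200), ("Feb", 1350), ("Mar", 1500), ("Apr*", 1800)]
print(bar_chart(comparison))  # '*' marks the projected month
```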

[00104] In one embodiment, the rendered visualizations are updated in real-time as new data is processed, ensuring that network operators always have access to the most current insights. This capability is particularly useful in dynamic environments where conditions can change rapidly, enabling proactive decision-making.
[00105] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive the data pertaining to the operation of the network 106 from the probing unit 206. The processor 202 is further configured to train an artificial intelligence/machine learning (AI/ML) model 212 utilizing the received data pertaining to the operation of the network 106. The processor 202 is further configured to predict future trends of the network 106 utilizing the trained AI/ML model 212.
[00106] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[00107] The present disclosure provides significant technical advancements through real-time prediction outputs and data visualization. This invention enables the user to gain immediate insights into customer growth and network performance. This capability allows for proactive management of network resources and better-informed decision-making, leading overall to enhanced operational efficiency. The ability to visualize data helps in quickly identifying patterns and anomalies, resulting in timely interventions.
[00108] Furthermore, the integration of interactive visualization tools empowers network operators to explore data in a user-friendly manner, enabling them to make strategic adjustments based on emerging trends. This results in optimized resource allocation, improved network reliability, and increased customer satisfaction. The advancements also support better communication of insights to stakeholders, fostering collaboration and informed discussions regarding network strategies.
[00109] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[00110] Environment - 100;
[00111] User Equipment (UE) - 102;
[00112] Server - 104;
[00113] Network- 106;
[00114] System -108;
[00115] Processor - 202;
[00116] Memory - 204;
[00117] Probing unit – 206;
[00118] Receiving unit – 208;
[00119] Applying unit – 210;
[00120] AI/ML model– 212;
[00121] Generating unit – 214;
[00122] Rendering unit – 216;
[00123] Primary Processor – 302;
[00124] Memory – 304;
[00125] User Interface (UI) – 306;
[00126] Raw data source – 402;
[00127] Real time data training and prediction model – 404;
[00128] Output data generation and formatter – 406;
[00129] Database - 408;
[00130] Interactive Visualization Tool – 410;
[00131] WFM– 412;
[00132] Brain – 414;

CLAIMS

We Claim
1. A method 600 of rendering prediction outputs on a User Interface (UI), the method 600 comprising the steps of:
receiving, by one or more processors 202, real time data from one or more data sources;
applying, by the one or more processors 202, the received data to one or more artificial intelligence/machine learning (AI/ML) models 212;
generating, by the one or more processors 202, the real time prediction outputs based on the application of the received data on the AI/ML model 212; and
rendering, by the one or more processors 202, the real time prediction outputs via charts, graphs, and interactive elements on the UI.

2. The method 600 as claimed in claim 1, wherein the one or more data sources is at least one of a probing unit 206 and wherein the received data is pre-processed and standardized.

3. The method 600 as claimed in claim 1, wherein each of the charts, the graphs and the interactive elements are dynamically updated corresponding to receipt of latest data.

4. The method 600 as claimed in claim 1, wherein in response to rendering the real time prediction outputs, the method 600 comprises the step of receiving, by the one or more processors 202, a user input to interact with the real time prediction outputs.

5. A system 108 for rendering prediction outputs on a user interface (UI), the system 108 comprising:
a receiving unit 208 configured to receive real time data from one or more data sources 402;
an applying unit 210 configured to apply the received data to one or more artificial intelligence/machine learning (AI/ML) models 212;
a generating unit 214 configured to generate the real time prediction outputs based on the application of the received data on the AI/ML model 212; and
a rendering unit 216 configured to render the real time prediction outputs via charts, graphs, and interactive elements on the UI.

6. The system 108 as claimed in claim 5, wherein the one or more data sources is at least one of a probing unit 206 and wherein the received data is pre-processed and standardized.

7. The system 108 as claimed in claim 5, wherein each of the charts, the graphs and the interactive elements are dynamically updated corresponding to receipt of latest data.

8. The system 108 as claimed in claim 5, wherein the receiving unit 208 is configured to receive a user input to interact with the real time prediction outputs.

Documents

Application Documents

# Name Date
1 202321068466-STATEMENT OF UNDERTAKING (FORM 3) [11-10-2023(online)].pdf 2023-10-11
2 202321068466-PROVISIONAL SPECIFICATION [11-10-2023(online)].pdf 2023-10-11
3 202321068466-FORM 1 [11-10-2023(online)].pdf 2023-10-11
4 202321068466-FIGURE OF ABSTRACT [11-10-2023(online)].pdf 2023-10-11
5 202321068466-DRAWINGS [11-10-2023(online)].pdf 2023-10-11
6 202321068466-DECLARATION OF INVENTORSHIP (FORM 5) [11-10-2023(online)].pdf 2023-10-11
7 202321068466-FORM-26 [27-11-2023(online)].pdf 2023-11-27
8 202321068466-Proof of Right [12-02-2024(online)].pdf 2024-02-12
9 202321068466-DRAWING [11-10-2024(online)].pdf 2024-10-11
10 202321068466-COMPLETE SPECIFICATION [11-10-2024(online)].pdf 2024-10-11
11 Abstract.jpg 2025-01-06