
System And Method For Recommending An Artificial Intelligence/Machine Learning (Ai/Ml) Model

Abstract: The present invention relates to a system (108) and a method (600) for recommending an Artificial Intelligence/Machine Learning (AI/ML) model (220). The method (600) includes the step of retrieving data from a plurality of data sources (110). The method (600) further includes the step of training a plurality of AI/ML models (220) with the retrieved data. The method (600) further includes the step of generating an output for each trained AI/ML model (220) among the plurality of AI/ML models (220) based on the training. The method (600) further includes the step of recommending at least one trained AI/ML model (220) among the plurality of AI/ML models (220) to a user based on the generated output of each of the trained AI/ML models (220). Ref. Fig. 2


Patent Information

Application #
Filing Date
20 October 2023
Publication Number
17/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR RECOMMENDING AN ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML) MODEL
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication systems, and more particularly, to a method and a system for recommending an Artificial Intelligence/Machine Learning (AI/ML) model.
BACKGROUND OF THE INVENTION
[0002] Generally, in a telecommunication network, different types of data are retrieved from various data sources. In traditional systems, in order to forecast events, it must first be determined which Machine Learning (ML) model is suitable for deployment so as to generate accurate predictions of future events. Therefore, in traditional practice, in order to identify a suitable ML model for prediction, multiple different types of ML models are trained with the same type of data. Training the multiple ML models with the same type of data is a time-consuming and costly task, due to which the prediction of future events may be delayed, thereby disrupting the functioning of the network.
[0003] In view of the above, there is a dire need for a system and a method for recommending a suitable Artificial Intelligence/Machine Learning (AI/ML) model for predicting future events that allows user friendly comparative study of the AI/ML models.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provides a method and a system for recommending an Artificial Intelligence/Machine Learning (AI/ML) model.
[0005] In one aspect of the present invention, the method for recommending an Artificial Intelligence/Machine Learning (AI/ML) model is disclosed. The method includes the step of retrieving data from a plurality of data sources. The method further includes the step of training a plurality of AI/ML models with the retrieved data. The method further includes the step of generating an output for each trained AI/ML model among the plurality of AI/ML models based on the training. The method further includes the step of recommending at least one trained AI/ML model among the plurality of AI/ML models to a user based on the generated output of each of the trained AI/ML models.
[0006] In another embodiment, the step of retrieving data from the plurality of data sources further includes the steps of preprocessing the retrieved data, storing the pre-processed data in a storage unit, and extracting one or more features from the pre-processed data for training the plurality of AI/ML models.
[0007] In yet another embodiment, the step of training the plurality of AI/ML models with the retrieved data includes the steps of configuring one or more hyperparameters for each AI/ML model among the plurality of AI/ML models, and selecting a training date range and a test/prediction range for each AI/ML model among the plurality of AI/ML models for training.
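By way of a non-limiting illustration, the selection of a training date range and a test/prediction range described in this embodiment may be sketched in Python as follows; the record structure, dates, and values are hypothetical and do not appear in the specification:

```python
from datetime import date

# Hypothetical daily records: (date, value) pairs used purely for illustration.
records = [(date(2023, 1, d), 100 + d) for d in range(1, 31)]

def split_by_date_range(records, train_start, train_end, test_start, test_end):
    """Partition records into a training set and a test/prediction set
    according to user-selected date ranges."""
    train = [r for r in records if train_start <= r[0] <= train_end]
    test = [r for r in records if test_start <= r[0] <= test_end]
    return train, test

train, test = split_by_date_range(
    records,
    date(2023, 1, 1), date(2023, 1, 21),   # training date range
    date(2023, 1, 22), date(2023, 1, 30),  # test/prediction range
)
print(len(train), len(test))  # 21 9
```

Each AI/ML model among the plurality of models would then be trained on the `train` partition and evaluated on the `test` partition.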
[0008] In yet another embodiment, for training the plurality of AI/ML models, an input is received from the user pertaining to a training purpose and a training name. The training purpose and the training name of the plurality of AI/ML models are stored in the storage unit in order to utilize the plurality of trained AI/ML models in the future.
[0009] In yet another embodiment, the generated output for each of the trained AI/ML models among the plurality of trained AI/ML models includes at least one of an accuracy and a Root Mean Square Error (RMSE).
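By way of a non-limiting illustration, the RMSE component of the generated output may be computed as the square root of the mean squared difference between actual and predicted values; the sample values below are hypothetical:

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error: square root of the mean of the squared
    differences between actual and predicted values."""
    assert len(actual) == len(predicted)
    return math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )

# Hypothetical actual vs. predicted values for one trained model.
print(rmse([3.0, 5.0, 7.0], [2.0, 5.0, 9.0]))
```

A lower RMSE indicates predictions closer to the actual values, which is why the predefined rules described later prefer the model with the least RMSE.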
[0010] In yet another embodiment, the step of generating the output for each trained AI/ML model among the plurality of trained AI/ML models based on the training further includes the step of representing at least one of the generated outputs and a training status list of each of the trained AI/ML models among the plurality of trained AI/ML models in one or more formats on a User Interface (UI).
[0011] In yet another embodiment, the training status list includes at least one of a date of training, a day of the week of training, an actual value, a predicted value, and a forecasted value.
[0012] In yet another embodiment, the one or more formats includes at least one of, but not limited to, a tabular format and a graphical format.
[0013] In yet another embodiment, the step of recommending at least one trained AI/ML model among the plurality of AI/ML models to the user, based on the generated output of each trained AI/ML model among the plurality of trained AI/ML models, includes the steps of comparing the generated output of each trained AI/ML model with the generated outputs of the remaining trained AI/ML models, and recommending at least one of the trained AI/ML models from the plurality of AI/ML models based on the comparison of the generated outputs using one or more predefined rules.
[0014] In yet another embodiment, the one or more predefined rules include at least one of recommending a trained AI/ML model having a higher accuracy compared to the accuracies of the remaining trained AI/ML models, and recommending a trained AI/ML model having the least RMSE value compared to the RMSE values of the remaining trained AI/ML models.
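By way of a non-limiting illustration, the predefined rules of this embodiment may be sketched in Python as follows; the model names and output values are hypothetical placeholders, not taken from the specification:

```python
# Generated output of each trained model: an accuracy and an RMSE,
# as described in the embodiments above. Values are illustrative.
outputs = {
    "model_a": {"accuracy": 0.91, "rmse": 4.2},
    "model_b": {"accuracy": 0.88, "rmse": 3.1},
    "model_c": {"accuracy": 0.91, "rmse": 3.9},
}

def recommend(outputs):
    """Apply the predefined rules: prefer the higher accuracy, and break
    ties by preferring the lower (least) RMSE value."""
    return max(outputs, key=lambda m: (outputs[m]["accuracy"], -outputs[m]["rmse"]))

print(recommend(outputs))  # model_c
```

Here `model_a` and `model_c` tie on accuracy, so the lower RMSE of `model_c` decides the recommendation, illustrating how the two rules compose.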
[0015] In another aspect of the present invention, the system for recommending an Artificial Intelligence/Machine Learning (AI/ML) model is disclosed. The system includes a retrieving unit configured to retrieve data from a plurality of data sources. The system further includes a training unit configured to train a plurality of AI/ML models with the retrieved data. The system further includes a generating unit configured to generate an output for each of the trained AI/ML model among the plurality of AI/ML models based on training. The system further includes a recommending unit configured to recommend the at least one trained AI/ML model among the plurality of AI/ML models to a user based on the generated output of each of the trained AI/ML models.
[0016] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The instructions, when executed by a processor, configure the processor to retrieve data from a plurality of data sources. The processor is further configured to train a plurality of AI/ML models with the retrieved data. The processor is further configured to generate an output for each trained AI/ML model among the plurality of AI/ML models based on the training. The processor is further configured to recommend the at least one trained AI/ML model among the plurality of AI/ML models to a user based on the generated output of each of the trained AI/ML models.
[0017] In another aspect of the present invention, a User Equipment (UE) is disclosed. One or more primary processors are communicatively coupled to one or more processors. The one or more primary processors are further coupled with a memory. The memory stores instructions which, when executed by the one or more primary processors, cause the UE to transmit data to the one or more processors, view the generated outputs on the UI, and receive recommendations from the one or more processors.
[0018] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0020] FIG. 1 is an exemplary block diagram of an environment for recommending an Artificial Intelligence/Machine Learning (AI/ML) model, according to one or more embodiments of the present invention;
[0021] FIG. 2 is an exemplary block diagram of a system for recommending the AI/ML model, according to one or more embodiments of the present invention;
[0022] FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention;
[0023] FIG. 4 is an exemplary architecture for recommending the AI/ML model, according to one or more embodiments of the present disclosure;
[0024] FIG. 5 is an exemplary signal flow diagram illustrating the flow for recommending the AI/ML model, according to one or more embodiments of the present disclosure; and
[0025] FIG. 6 is a flow diagram of a method for recommending the AI/ML model, according to one or more embodiments of the present invention.
[0026] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0027] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0028] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0029] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0030] Various embodiments of the present invention provide a system and a method for recommending an Artificial Intelligence/Machine Learning (AI/ML) model. The most unique aspect of the invention lies in the ability to automatically recommend a suitable AI/ML model to a user based on the input data. The disclosed system and method aim at enhancing AI/ML model selection, which can be used for predicting one or more future events. By recommending the suitable AI/ML model to the user, the invention reduces the time spent training each AI/ML model from the plurality of AI/ML models, as is performed in traditional practice. The invention also provides a graphical view as well as a tabular view of the output of each AI/ML model together with the executed training data, which facilitates a comparative study by the user of the AI/ML models involved in the training.
[0031] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for recommending an Artificial Intelligence/Machine Learning (AI/ML) model 220 according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, the network 106, a system 108, and a plurality of data sources 110. In the network 106 different types of data from the plurality of data sources 110 are retrieved. Based on the retrieved data a suitable AI/ML model 220 among a plurality of AI/ML models 220 is recommended to a user to generate accurate prediction for one or more future events.
[0032] For the purpose of description and explanation, the description will be explained with respect to one or more User Equipments (UEs) 102, or to be more specific, with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0033] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as smartphones, Virtual Reality (VR) devices, Augmented Reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0034] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0035] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0036] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0037] The environment 100 further includes the plurality of data sources 110. In one embodiment, the plurality of data sources 110 are origins from which the data is retrieved and utilized for at least one of, but not limited to, analysis, research, and decision-making. In one embodiment, the plurality of data sources 110 is at least one of, but not limited to, sensors, applications, network functions and one or more databases. In particular, the one or more databases are at least one of, but not limited to, a Network Attached Storage (NAS) and a Distributed File System (DFS). Herein, the NAS is a dedicated file storage device that connects to the network 106, allowing multiple users and devices to access and share data from a centralized location. The DFS is a key component for storing and processing large datasets in a distributed computing environment. In one embodiment, the plurality of data sources 110 is associated with sources included within the network 106 and outside the network 106.
[0038] The environment 100 further includes the system 108 communicably coupled to the server 104, the UE 102, and the plurality of data sources 110 via the network 106. The system 108 is adapted to be embedded within the server 104 or is embedded as the individual entity.
[0039] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0040] FIG. 2 is an exemplary block diagram of the system 108 for recommending the AI/ML model 220, according to one or more embodiments of the present invention.
[0041] As per the illustrated and preferred embodiment, the system 108 for recommending the AI/ML model 220, includes one or more processors 202, a memory 204, a storage unit 206 and a plurality of Artificial Intelligence/Machine Learning (AI/ML) models 220. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0042] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for recommending the AI/ML model 220. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0043] The system 108 further includes the storage unit 206. As per the illustrated embodiment, the storage unit 206 is configured to store data retrieved from the plurality of data sources 110. The storage unit 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of storage unit 206 types are non-limiting and are not mutually exclusive; e.g., the database can be both commercial and cloud-based, or both relational and open-source, etc.
[0044] As per the illustrated embodiment, the system 108 includes the plurality of AI/ML models 220. The plurality of AI/ML models 220 facilitates system 108 in performing tasks such as detecting anomalies, recognizing patterns, making predictions, solving problems, enhancing decision-making, and providing insights across various fields. For example, the plurality of AI/ML models 220 facilitates solving real-world problems without extensive manual intervention. In one embodiment, the plurality of AI/ML models 220 are trained using the retrieved data. In an alternate embodiment, the plurality of AI/ML models 220 are pretrained. Herein, the system 108 recommends at least one trained AI/ML model 220 among the plurality of AI/ML models to the user which can be utilized for future analysis.
[0045] As per the illustrated embodiment, the system 108 includes the processor 202 for recommending the AI/ML model 220. The processor 202 includes a retrieving unit 208, a training unit 210, a generating unit 212, a recommending unit 214, and a predicting unit 216. The processor 202 is communicably coupled to the one or more components of the system 108 such as the memory 204, the storage unit 206 and the plurality of AI/ML models 220. In an embodiment, the operations and functionalities of the retrieving unit 208, the training unit 210, the generating unit 212, the recommending unit 214, the predicting unit 216 and the one or more components of the system 108 can be used in combination or interchangeably.
[0046] In one embodiment, initially the retrieving unit 208 of the processor 202 is configured to retrieve data from the plurality of data sources 110. In one embodiment, the data is at least one of, but not limited to, an input data stream and a Hypertext Transfer Protocol version 2 (HTTP/2) request provided by the user. For example, the data may be related to performance metrics of one or more network functions. In an alternate embodiment, the data includes at least one of, but not limited to, a user profile, geographic locations, sensor data, text data, and historical data. In one embodiment, the retrieving unit 208 retrieves the data from the plurality of data sources 110 which are present within the network 106 and outside the network 106. In one embodiment, the plurality of data sources 110 periodically transmits the data to the system 108.
[0047] In one embodiment, the retrieving unit 208 retrieves the data from the plurality of data sources 110 via an interface. In one embodiment, the interface includes at least one of, but not limited to, one or more Application Programming Interfaces (APIs) which are used for retrieving the data from the plurality of data sources 110. The one or more APIs are sets of rules and protocols that allow different entities to communicate with each other. The one or more APIs define the methods and data formats that entities can use to request and exchange information, enabling integration and functionality across various platforms. In particular, the APIs are essential for integrating different systems, accessing services, and extending functionality.
[0048] Upon retrieving the data from the plurality of data sources 110, the retrieving unit 208 is further configured to preprocess the retrieved data. In particular, the retrieving unit 208 is configured to preprocess the retrieved data to ensure the data consistency and quality of the data within the system 108. The retrieving unit 208 performs at least one of, but not limited to, data normalization, data definition and data cleaning procedures.
[0049] While preprocessing, the retrieving unit 208 performs at least one of, but not limited to, reorganizing the data, removing the redundant data, formatting the data, removing null values from the data, cleaning the data, handling missing values, and adding static or dynamic fields in the retrieved data. The static fields are attributes that remain constant and do not change over time and the dynamic fields are attributes that change over time. For example, the static fields include, the user information and the dynamic fields includes, at least one of, but not limited to, performance metrics. The main goal of the preprocessing is to achieve a standardized data format across the system 108. While preprocessing, the duplicate data and inconsistencies are eliminated from the retrieved data. The retrieving unit 208 is further configured to store the pre-processed data in at least one of, the storage unit 206 for subsequent retrieval and analysis.
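The cleaning steps above (removing duplicates, handling null values) may be sketched, by way of a non-limiting illustration, as follows; the record fields and values are hypothetical, not taken from the specification:

```python
# Hypothetical raw records retrieved from the data sources.
raw = [
    {"node": "nf-1", "latency_ms": 12.0},
    {"node": "nf-1", "latency_ms": 12.0},   # duplicate record
    {"node": "nf-2", "latency_ms": None},   # missing value
    {"node": "nf-3", "latency_ms": 8.5},
]

def preprocess(records):
    """Eliminate duplicate records and records containing null values,
    producing a standardized, cleaned data set."""
    seen, cleaned = set(), []
    for rec in records:
        key = tuple(sorted(rec.items()))   # canonical form for duplicate detection
        if key in seen:
            continue
        if any(v is None for v in rec.values()):
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned

print(preprocess(raw))  # two records survive the cleaning
```

The cleaned records would then be stored in the storage unit 206 for subsequent retrieval and analysis, as described above.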
[0050] Upon storing the pre-processed data in the storage unit 206, the retrieving unit 208 is further configured to extract one or more features from the pre-processed data for training the plurality of AI/ML models 220. For example, let us consider that the pre-processed data is related to one or more network functions in the network 106. Then in order to train the plurality of AI/ML models 220, the retrieving unit 208 extracts the one or more features such as at least one of, but not limited to, a traffic volume, a packet loss, a latency, a throughput, and an error rate related to one or more network functions in the network 106.
[0051] In one embodiment, the traffic volume refers to the amount of data transmitted over the network 106 during a specific period. In one embodiment, packet loss is the percentage of packets that are lost during transmission of the data from a source to a destination. In one embodiment, the latency is the time taken for data to travel from the source to the destination. In one embodiment, the throughput is a rate at which the data is successfully transmitted from the source to the destination, often measured in bits per second (bps). In one embodiment, the error rate is a frequency of errors in transmission of the data from the source to the destination.
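As a non-limiting illustration of deriving the features defined above from raw network counters, consider the following sketch; the counter names, the derivation formulas chosen, and the sample values are hypothetical and are not specified in this document:

```python
def extract_features(sent_pkts, recv_pkts, bytes_ok, seconds, errors):
    """Derive the feature values named above (traffic volume, packet loss,
    throughput, error rate) from hypothetical raw counters."""
    return {
        "traffic_volume_bytes": bytes_ok,                           # data moved in the period
        "packet_loss_pct": 100.0 * (sent_pkts - recv_pkts) / sent_pkts,
        "throughput_bps": 8.0 * bytes_ok / seconds,                 # bits per second
        "error_rate": errors / sent_pkts,                           # errors per packet sent
    }

features = extract_features(sent_pkts=1000, recv_pkts=990,
                            bytes_ok=1_250_000, seconds=10, errors=5)
print(features["packet_loss_pct"])   # 1.0
print(features["throughput_bps"])    # 1000000.0
```

Feature vectors of this kind form the training input that the training unit 210 consumes in the next step.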
[0052] Upon preprocessing the data and extracting the one or more features from the pre-processed data, the training unit 210 of the processor 202 is configured to train the plurality of AI/ML models 220 with the retrieved data. In particular, the training unit 210 trains the plurality of AI/ML models 220 with the one or more features extracted from the pre-processed data. In order to train the plurality of AI/ML models 220, the training unit 210 configures one or more hyperparameters for each of the AI/ML model 220 among the plurality of AI/ML models 220. In one embodiment, the one or more hyperparameters of each of the AI/ML model 220 includes at least one of, but not limited to, a learning rate, a batch size, and a number of epochs.
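By way of a non-limiting illustration, configuring the learning rate, batch size, and number of epochs for each model may be sketched as enumerating candidate configurations; the candidate values here are hypothetical:

```python
from itertools import product

# Hypothetical candidate values for the hyperparameters named above.
learning_rates = [0.001, 0.01]
batch_sizes = [32, 64]
epochs = [10, 20]

# One configuration dictionary per training run of an AI/ML model.
configs = [
    {"learning_rate": lr, "batch_size": bs, "epochs": ep}
    for lr, bs, ep in product(learning_rates, batch_sizes, epochs)
]
print(len(configs))  # 8
```

Each AI/ML model among the plurality of models may then be trained under one or more of these configurations before its output is generated.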
[0053] In one embodiment, prior to configuring the one or more hyperparameters, the training unit 210 employs mathematical calculations on the pre-processed data and assesses the impact of the one or more hyperparameters on the performance of the plurality of AI/ML models 220. Herein, the mathematical calculations include at least one of, but not limited to, a mean, a mode, a variance, a trend, an Autocorrelation Function (ACF), and a Partial Autocorrelation Function (PACF).
[0054] In one embodiment, while configuring the one or more hyperparameters, the mean is used to summarize performance metrics (e.g., accuracy, F1-score) across different hyperparameter configurations. In one embodiment, the mode identifies the most frequently occurring hyperparameter settings while configuring the one or more hyperparameters. In one embodiment, the variance is a measure of how much the performance of the plurality of AI/ML models 220 varies when the one or more hyperparameters are changed. In one embodiment, the trend refers to the general direction in which data points move over time as the one or more hyperparameters are changed. In one embodiment, the ACF measures the correlation between the performance of the plurality of AI/ML models 220 at one hyperparameter setting and the performance at previous settings, facilitating the identification of patterns over iterations. In one embodiment, the PACF measures the correlation between the performance of the plurality of AI/ML models 220 at a given hyperparameter setting and the performance at earlier hyperparameter settings.
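By way of a non-limiting illustration, the mathematical calculations above may be sketched in Python using only the standard library; the performance values are hypothetical, and the `autocorrelation` helper is an assumed pure-Python stand-in for an ACF implementation:

```python
import statistics

# Hypothetical performance metrics (e.g., accuracy) observed over successive
# hyperparameter configurations; the values are illustrative only.
performance = [0.82, 0.85, 0.85, 0.88, 0.84, 0.90]

mean_perf = statistics.mean(performance)      # summarizes overall performance
mode_perf = statistics.mode(performance)      # most frequent value
var_perf = statistics.variance(performance)   # sensitivity to setting changes

def autocorrelation(series, lag):
    """Sample autocorrelation of `series` at the given lag, as used by the
    ACF to relate performance at one setting to earlier settings."""
    n = len(series)
    mu = sum(series) / n
    num = sum((series[i] - mu) * (series[i + lag] - mu) for i in range(n - lag))
    den = sum((x - mu) ** 2 for x in series)
    return num / den

print(round(mean_perf, 3), mode_perf, round(autocorrelation(performance, 1), 3))
```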
[0055] In one embodiment, based on the impact of the one or more hyperparameters on the performance of the plurality of AI/ML models 220, the training unit 210 configures the one or more hyperparameters. Herein, the impact on the performance of the plurality of AI/ML models 220 pertains to at least one of, but not limited to, a complexity of the AI/ML models 220, a training time of the plurality of AI/ML models 220, and a convergence speed of the plurality of AI/ML models 220. Subsequent to configuring the one or more hyperparameters of each of the AI/ML model 220 among the plurality of AI/ML models 220, the training unit 210 infers that each of the AI/ML model 220 is ready for training.
[0056] Upon configuring the one or more hyperparameters of each of the AI/ML model 220 among the plurality of AI/ML models 220, the training unit 210 is further configured to select a training date range and a test/prediction range for each of the AI/ML model 220 among the plurality of AI/ML models 220 for training. For example, let us consider that the pre-processed data includes one month of data related to the one or more network functions in the network 106. Then the training unit 210 selects the training date range as 20 days of data and the test/prediction range as 10 days of data. In other words, for training the plurality of AI/ML models 220, the training unit 210 splits the pre-processed data into at least one of, but not limited to, training data and testing data. Further, the training unit 210 feeds the training data to the plurality of AI/ML models 220, based on which the plurality of AI/ML models 220 are trained by the training unit 210.
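By way of a non-limiting illustration, the 20-day/10-day split described above may be sketched in Python as follows; the start date, the dataset keyed by day, and the `traffic_volume` field are illustrative assumptions:

```python
from datetime import date, timedelta

# Hypothetical one-month dataset keyed by day; in practice each entry would
# hold the features extracted for that day (traffic volume, latency, etc.).
start = date(2023, 10, 1)
dataset = {start + timedelta(days=i): {"traffic_volume": 100 + i}
           for i in range(30)}

# Select a 20-day training date range and a 10-day test/prediction range.
split = start + timedelta(days=20)
training_data = {d: v for d, v in dataset.items() if d < split}
testing_data = {d: v for d, v in dataset.items() if d >= split}

print(len(training_data), len(testing_data))  # → 20 10
```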
[0057] In one embodiment, the training unit 210 trains the plurality of AI/ML models 220 by applying one or more logics. In one embodiment, the one or more logics may include at least one of, but not limited to, a k-means clustering, a hierarchical clustering, a Principal Component Analysis (PCA), an Independent Component Analysis (ICA), deep learning logics such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory Networks (LSTMs), Generative Adversarial Networks (GANs), Q-Learning, Deep Q-Networks (DQNs), reinforcement learning logics, etc.
[0058] In one embodiment, for training the plurality of AI/ML models 220, an input is received from the user pertaining to a training purpose and a training name. For example, the user can provide the training purpose such as forecasting and the training name such as recommendation of the AI/ML models 220. Herein, the training purpose and the training name of the plurality of AI/ML models 220 are stored in the storage unit 206 in order to utilize the plurality of trained AI/ML models 220 in the future.
[0059] Upon training the plurality of AI/ML models 220, the generating unit 212 of the processor 202 is configured to generate an output for each of the trained AI/ML model 220 among the plurality of AI/ML models 220 based on training. For example, subsequent to training, the plurality of trained AI/ML models 220 are fed with the testing data in order to evaluate the performance of the plurality of trained AI/ML models 220. In one embodiment, the generated outputs for each of the trained AI/ML model 220 among the plurality of trained AI/ML models 220 include at least one of, an accuracy and a Root Mean Square Error (RMSE).
[0060] In one embodiment, the accuracy of the plurality of trained AI/ML models 220 is a metric that measures the proportion of correct predictions made by each of the trained AI/ML model 220 compared to the total number of predictions. For example, if at least one trained AI/ML model 220 makes 100 predictions and gets 90 predictions right, then the accuracy of the at least one trained AI/ML model 220 is 90%. In one embodiment, the RMSE is a metric which is used to evaluate the performance of the plurality of trained AI/ML models 220. The RMSE is the square root of the average of the squared errors between predicted values and actual values. Herein, the actual values are the true or observed outcomes in a dataset. The actual values represent the real-world measurements or results that the plurality of trained AI/ML models 220 aim to predict. Herein, the predicted values are the outcomes that each of the trained AI/ML model 220 estimates based on the input data.
[0061] For example, let us assume that the actual values (x) are 10, 12, 15, 20, 25 and the predicted values (y) are 8, 11, 14, 18, 30. In order to calculate the RMSE value of at least one trained AI/ML model 220, the differences between the actual values and the predicted values are calculated, which are inferred as errors such as error (x - y) = (2, 1, 1, 2, -5). Thereafter, the squares of the errors are calculated such as squared error = (4, 1, 1, 4, 25). Further, the mean of the squared errors is calculated such as mean = (4+1+1+4+25)/5 = 35/5 = 7. Furthermore, the square root of the mean is calculated to get the RMSE value of the at least one trained AI/ML model 220 such as √7 ≈ 2.65. In this example, the RMSE value of the at least one trained AI/ML model 220 is approximately 2.65, which indicates that on average the predictions are about 2.65 units away from the actual values.
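By way of a non-limiting illustration, the worked RMSE example above may be reproduced in Python; the five value pairs come directly from the description, and the accuracy figure mirrors the 90-out-of-100 example:

```python
import math

# Actual and predicted values from the worked example in the description.
actual = [10, 12, 15, 20, 25]
predicted = [8, 11, 14, 18, 30]

errors = [a - p for a, p in zip(actual, predicted)]  # (2, 1, 1, 2, -5)
squared = [e ** 2 for e in errors]                   # (4, 1, 1, 4, 25)
mse = sum(squared) / len(squared)                    # 35 / 5 = 7
rmse = math.sqrt(mse)                                # √7 ≈ 2.65

# Accuracy, per the earlier example: 90 correct predictions out of 100.
accuracy = 90 / 100

print(round(rmse, 2), accuracy)  # → 2.65 0.9
```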
[0062] Upon generating the output, the generating unit 212 is further configured to represent at least one of, the generated outputs and a training status list of each of the trained AI/ML model 220 among the plurality of trained AI/ML models 220 in one or more formats on a User Interface (UI) 306. In one embodiment, the training status list includes at least one of, but not limited to, a date of training, a day of week training, the actual value, the predicted value and a forecasted value. Herein, the one or more formats include at least one of, but not limited to, a tabular format and a graphical format. For example, the generated outputs and the training status list of each of the trained AI/ML model 220 are shown to the user via the UI 306 in the tabular view or the graphical view.
[0063] In another example, the user may view a list of the plurality of trained AI/ML models 220 in the tabular view. When at least one trained AI/ML model 220 is selected by the user from the list, another tabular view pops up which shows at least one of, but not limited to, the accuracy, the RMSE value, the date of training, the day of week training, the actual value, the predicted value and the forecasted value to the user on the UI 306. Advantageously, due to the graphical representation of the outputs as well as the tabular representation of the outputs, the user experience is enhanced.
[0064] Upon generating and representing the outputs for each of the trained AI/ML model 220 among the plurality of AI/ML models 220, the recommending unit 214 of the processor 202 is configured to recommend at least one trained AI/ML model 220 among the plurality of AI/ML models 220 to the user based on the generated output of each of the trained AI/ML models 220. In order to recommend at least one trained AI/ML model 220 to the user, the recommending unit 214 compares the generated output of each of the trained AI/ML model 220 with the generated outputs of the remaining trained AI/ML models 220 and recommends at least one of the trained AI/ML models 220 from the plurality of AI/ML models 220 based on the comparison of the generated outputs using one or more predefined rules.
[0065] In one embodiment, the one or more predefined rules include at least one of, but not limited to, recommending the trained AI/ML model 220 having a higher accuracy compared to the accuracy of the remaining trained AI/ML models 220 and recommending the trained AI/ML model 220 having the least RMSE value compared to the RMSE values of the remaining trained AI/ML models 220. For example, let us consider an AI/ML model A with the minimum RMSE value and a higher accuracy, an AI/ML model B with a moderate RMSE value and a moderate accuracy, and an AI/ML model C with a high RMSE value and a lower accuracy. Herein, the recommending unit 214 compares the RMSE value and the accuracy of the AI/ML model A, the AI/ML model B and the AI/ML model C. Based on the comparison, the recommending unit 214 recommends the AI/ML model A to the user. Advantageously, due to the recommendation of at least one trained AI/ML model 220, the time required for checking the suitable AI/ML model for prediction of the one or more future events is reduced.
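By way of a non-limiting illustration, the predefined rules above may be sketched in Python as follows; the model names and figures are illustrative assumptions that mirror the example, and the tie-breaking order (lowest RMSE first, then highest accuracy) is one reasonable reading of the rules:

```python
# Hypothetical generated outputs for three trained models, mirroring the
# example in the description; the names and values are illustrative.
outputs = {
    "model_A": {"accuracy": 0.95, "rmse": 2.0},
    "model_B": {"accuracy": 0.90, "rmse": 3.0},
    "model_C": {"accuracy": 0.80, "rmse": 4.5},
}

def recommend(outputs):
    """Apply the predefined rules: prefer the lowest RMSE, breaking ties
    with the highest accuracy."""
    return min(outputs, key=lambda m: (outputs[m]["rmse"], -outputs[m]["accuracy"]))

print(recommend(outputs))  # → model_A
```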
[0066] Upon recommending at least one trained AI/ML model 220, the predicting unit 216 of the processor 202 is configured to predict one or more future events utilizing the recommended at least one trained AI/ML model 220. In one embodiment, the one or more future events may include at least one of, but not limited to, predicting one or more anomalies in the network 106. Advantageously, due to utilizing the recommended at least one trained AI/ML model 220, the system 108 enhances the predictions of one or more future events.
[0067] The retrieving unit 208, the training unit 210, the generating unit 212, the recommending unit 214, and the predicting unit 216 in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0068] FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for recommending the AI/ML model 220. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the UE 102 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0069] FIG. 3 shows communication between the UE 102, the system 108, and the plurality of data sources 110. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the UE 102 uses a network protocol connection to communicate with the system 108, and the plurality of data sources 110. In an embodiment, the network protocol connection is the establishment and management of communication between the UE 102, the system 108, and the plurality of data sources 110 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols. The network protocol connection includes, but is not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS) and Terminal Network (TELNET).
[0070] In an embodiment, the UE 102 includes a primary processor 302, a memory 304 and a User Interface (UI) 306. In alternate embodiments, the UE 102 may include more than one primary processor 302 as per the requirement of the network 106. The primary processor 302 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0071] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for recommending the AI/ML model 220. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0072] In an embodiment, the User Interface (UI) 306 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The UI 306 of the UE 102 allows the user to transmit data to the one or more processors 202, view the generated output of each of the trained AI/ML model 220 and receive recommendations from the one or more processors 202 regarding the at least one trained AI/ML model 220 which can be utilized by the user for predicting the one or more future events. In one embodiment, the user may be at least one of, but not limited to, a network operator.
[0073] As mentioned earlier in FIG.2, the system 108 includes the processors 202, the memory 204 and the storage unit 206, for recommending the AI/ML model 220, which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0074] Further, as mentioned earlier the processor 202 includes the retrieving unit 208, the training unit 210, the generating unit 212, the recommending unit 214, the predicting unit 216 which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0075] FIG. 4 illustrates an exemplary architecture 400 of the system 108 for recommending the AI/ML model 220, according to one or more embodiments of the present disclosure.
[0076] The architecture 400 includes a data source 110 which is at least one of the plurality of the data sources 110. Herein, the plurality of the data sources 110 are in communication with the network components. The architecture 400 further includes a data integrator 402, a data pre-processing unit 404, an AI/ML model training unit 406, the predicting unit 216, the storage unit 206, a graphical representation unit 410 and the recommending unit 214 communicably coupled to each other via the network 106.
[0077] In one embodiment, the data integrator 402 periodically receives the data from the data source 110. The data may be the input stream provided by the user, which is crucial for training the plurality of the AI/ML models 220 and recommending at least one trained AI/ML model 220 among the plurality of AI/ML models 220. Herein, the data integrator 402 combines the data retrieved from the data source 110 and provides a unified view to the user that enables comprehensive analysis. For example, the system 108 provides an integrated view of the data retrieved from the data source 110 pertaining to the one or more network functions.
[0078] In one embodiment, the data pre-processing unit 404 receives the retrieved data from the data integrator 402 and preprocesses the data. For example, the data undergoes preprocessing to ensure data consistency within the system 108. In particular, the preprocessing involves tasks like data cleaning, normalization, removing unwanted data like outliers and duplicate records, and handling missing values. In yet another example, the raw data is pre-processed to clean, normalize, and convert the raw data into a structured format suitable for analysis. In an embodiment, preprocessing the data includes cleaning the data by removing unwanted columns from a data frame and removing unwanted rows from the data frame that contain invalid column values, such as NaN, None, 0, null, or empty strings. The data pre-processing unit 404 cleans and normalizes the data based on string, numeric, date, or default operation filters, and dynamic operations like substring extraction and concatenation on columns.
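By way of a non-limiting illustration, the row-cleaning step above may be sketched in Python over a list of records, using the invalid column values named in the description (NaN, None, 0, null, empty string); the field names and sample rows are hypothetical:

```python
import math

# Invalid column values as named in the description; 0 is treated as
# invalid per the text, which may need adjusting for real-valued columns.
INVALID = {None, 0, "", "null"}

def is_valid(value):
    """Return False for NaN or any of the invalid values above."""
    if isinstance(value, float) and math.isnan(value):
        return False
    return value not in INVALID

raw = [
    {"traffic_volume": 120, "latency_ms": 35},
    {"traffic_volume": None, "latency_ms": 40},   # dropped: None value
    {"traffic_volume": 95, "latency_ms": ""},     # dropped: empty string
]

# Keep only rows in which every column value is valid.
cleaned = [row for row in raw if all(is_valid(v) for v in row.values())]
print(len(cleaned))  # → 1
```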
[0079] In one embodiment, the AI/ML model training unit 406 acts as the training unit 210. Herein, the AI/ML model training unit 406 trains the plurality of AI/ML models 220 using the data pre-processed by the data pre-processing unit 404.
[0080] In one embodiment, the predicting unit 216 generates the output of each of the plurality of the trained AI/ML models 220 which includes at least one of, but not limited to, the accuracy and the RMSE value. In one embodiment, the predicting unit 216 generates the output using either the currently trained plurality of AI/ML models 220 or a pre-trained plurality of AI/ML models 220.
[0081] In one embodiment, the storage unit 206 includes a structured collection of the preprocessed data, and the output generated by each trained AI/ML model 220 among the plurality of the trained AI/ML models 220, which are managed and organized in a way that allows the system 108 easy access, retrieval, and manipulation. The storage unit 206 is used to store, manage, and retrieve large amounts of information efficiently.
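By way of a non-limiting illustration, the storage unit may be sketched as a structured, queryable store for the generated outputs using an in-memory SQLite database; the schema, table name, and values are assumptions for this sketch:

```python
import sqlite3

# An in-memory database standing in for the storage unit; the schema and
# the stored figures are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE model_output (model TEXT, accuracy REAL, rmse REAL)")
conn.executemany(
    "INSERT INTO model_output VALUES (?, ?, ?)",
    [("model_A", 0.95, 2.0), ("model_B", 0.90, 3.0), ("model_C", 0.80, 4.5)],
)

# Easy retrieval: fetch the stored output with the lowest RMSE.
best = conn.execute(
    "SELECT model FROM model_output ORDER BY rmse ASC LIMIT 1"
).fetchone()[0]
print(best)  # → model_A
```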
[0082] In one embodiment, the graphical representation unit 410 retrieves the information pertaining to the generated outputs from the predicting unit 216 and provides the visual representation of the generated outputs in at least one of, but not limited to, the tabular format and graphical format on the UI 306.
[0083] In one embodiment, the recommending unit 214 recommends at least one trained AI/ML model 220 among the plurality of the trained AI/ML models 220 to the user based on the output generated by each of the plurality of the trained AI/ML models 220.
[0084] FIG. 5 is a signal flow diagram illustrating the flow for recommending the AI/ML model 220, according to one or more embodiments of the present disclosure.
[0085] At step 502, the system 108 retrieves data from the plurality of data sources 110. For example, the data is associated with at least one of, the one or more network functions in the network 106 and the input dataset stored by the user in the plurality of data sources 110 for training the plurality of the AI/ML models 220. In one embodiment, the system 108 transmits at least one of, but not limited to, a Hypertext Transfer Protocol (HTTP) request to the plurality of data sources 110 to retrieve the data. In one embodiment, a connection is established between the system 108 and the plurality of data sources 110 before retrieving the data. Further, the retrieved data is integrated and preprocessed by the system 108.
[0086] At step 504, the system 108 trains the plurality of AI/ML models 220 with the retrieved data.
[0087] At step 506, the system 108 generates an output for each of the trained AI/ML model 220 among the plurality of AI/ML models 220 based on training. Herein, the generated output for each of the trained AI/ML model 220 among the plurality of trained AI/ML models 220 includes at least one of, the accuracy and the RMSE. For example, for a particular trained AI/ML model 220 the output includes an accuracy of 80% and an RMSE value of 3.
[0088] At step 508, the system 108 transmits, to the user, the recommendations regarding at least one trained AI/ML model 220 which is suitable for predicting the one or more future events. Herein, the system 108 transmits the recommendations to the user by at least one of, but not limited to, the HTTP request. Further, the user can view the recommendations and the output generated by each of the trained AI/ML model 220 in at least one of, a graphical and a tabular format on the UI 306 of the UE 102.
[0089] FIG. 6 is a flow diagram of a method 600 for recommending the AI/ML model 220, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0090] At step 602, the method 600 includes the step of retrieving data from the plurality of data sources 110. In one embodiment, the retrieving unit 208 retrieves the data from the plurality of data sources 110. In particular, the retrieving unit 208 utilizes the one or more APIs for retrieving the data from the plurality of data sources 110. Further, the retrieved data is integrated by the retrieving unit 208. Thereafter, the integrated data is preprocessed by the retrieving unit 208 to ensure the data consistency and quality within the system 108.
[0091] At step 604, the method 600 includes the step of training the plurality of AI/ML models 220 with the retrieved data. In one embodiment, the training unit 210 trains the plurality of AI/ML models 220 with the retrieved data. For example, let us consider the retrieved data as the dataset which includes data related to the one or more network functions. Then, the training unit 210 splits the dataset into the training data and the testing data such that 80% of the dataset is considered as the training data and 20% of the dataset is considered as the testing data. Thereafter, the training data is fed to each of the AI/ML model 220 among the plurality of AI/ML models 220 for training.
[0092] At step 606, the method 600 includes the step of generating the output for each of the trained AI/ML model 220 among the plurality of AI/ML models 220 based on training. In one embodiment, the generating unit 212 generates the output for each of the trained AI/ML model 220. For example, subsequent to training, the training unit 210 provides the testing data to each of the trained AI/ML model 220 among the plurality of AI/ML models 220. Based on training, each of the trained AI/ML model 220 generates an output which includes the accuracy and the RMSE value of each of the trained AI/ML model 220. Let us consider that an AI/ML model A, an AI/ML model B and an AI/ML model C generate outputs such that for the AI/ML model A the accuracy is 95% and the RMSE value is 2, for the AI/ML model B the accuracy is 90% and the RMSE value is 3, and for the AI/ML model C the accuracy is 80% and the RMSE value is 4.5.
[0093] At step 608, the method 600 includes the step of recommending at least one trained AI/ML model 220 among the plurality of AI/ML models 220 to the user based on the generated output of each of the trained AI/ML models 220. For example, let us consider that the AI/ML model A generates an output such as the 95% accuracy and the RMSE value 2, the AI/ML model B generates an output such as the 90% accuracy and the RMSE value 3, and the AI/ML model C generates an output such as the 80% accuracy and the RMSE value 4.5. Further, the recommending unit 214 compares the RMSE value and the accuracy of the AI/ML model A, the AI/ML model B and the AI/ML model C. Based on the comparison, the recommending unit 214 identifies that the AI/ML model A performs better as compared to the AI/ML model B and the AI/ML model C. Therefore, the recommending unit 214 recommends the AI/ML model A to the user, which can be utilized for predictions of the one or more future events.
[0094] In one embodiment, at least one of, but not limited to, the graphical representation and the tabular representation is provided to the user regarding the at least one of, the generated outputs of the plurality of the trained AI/ML models 220 and the recommendation of at least one trained AI/ML model 220.
[0095] In yet another aspect of the present invention, a non-transitory computer-readable medium is provided, having stored thereon computer-readable instructions that, when executed by a processor 202, cause the processor 202 to perform the following. The processor 202 is configured to retrieve data from a plurality of data sources 110. The processor 202 is further configured to train the plurality of AI/ML models 220 with the retrieved data. The processor 202 is further configured to generate the output for each of the trained AI/ML model 220 among the plurality of AI/ML models 220 based on training. The processor 202 is further configured to recommend at least one trained AI/ML model 220 among the plurality of AI/ML models 220 to the user based on the generated output of each of the trained AI/ML models 220.
[0096] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0097] The present disclosure provides technical advancements where the system automatically recommends the suitable AI/ML model to the user to provide suitable output for the provided input after the completion of ML training. Due to the recommendation of the suitable AI/ML model, the time required for checking the suitable AI/ML model for prediction of the one or more future events is reduced. Utilizing the recommended AI/ML model, the system enhances the predictions of the one or more future events. The system provides the graphical representation as well as the tabular representation of the outputs of the plurality of the AI/ML models, which enhances the user experience and facilitates the user in understanding which AI/ML model among the plurality of the AI/ML models can provide better predictions.
[0098] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0099] Environment - 100;
[00100] User Equipment (UE) - 102;
[00101] Server - 104;
[00102] Network- 106;
[00103] System -108;
[00104] Plurality of data sources – 110;
[00105] Processor - 202;
[00106] Memory - 204;
[00107] Storage unit – 206;
[00108] Retrieving unit – 208;
[00109] Training unit – 210;
[00110] Generating unit – 212;
[00111] Recommending unit – 214;
[00112] Predicting unit – 216;
[00113] Plurality of AI/ML Models – 220;
[00114] Primary Processor – 302;
[00115] Memory – 304;
[00116] User Interface (UI) – 306;
[00117] Data integrator – 402;
[00118] Data pre-processing unit - 404;
[00119] AI/ML Model training unit – 406;
[00120] Graphical representation unit – 410.
CLAIMS
We Claim:
1. A method (600) for recommending an Artificial Intelligence/Machine Learning (AI/ML) model (220), the method (600) comprising the steps of:
retrieving, by one or more processors (202), data from a plurality of data sources (110);
training, by the one or more processors (202), a plurality of AI/ML models (220) with the retrieved data;
generating, by the one or more processors (202), an output for each of the trained AI/ML model (220) among the plurality of AI/ML models (220) based on training; and
recommending, by the one or more processors (202), at least one trained AI/ML model (220) among the plurality of AI/ML models (220) to a user based on the generated output of each of the trained AI/ML models (220).

2. The method (600) as claimed in claim 1, wherein the step of retrieving, data from the plurality of data sources (110), further includes the steps of:
preprocessing, by the one or more processors (202), the retrieved data;
storing, by the one or more processors (202), the pre-processed data in a storage unit (206); and
extracting, by the one or more processors (202), one or more features from the pre-processed data for training the plurality of AI/ML models (220).

3. The method (600) as claimed in claim 1, wherein the step of training the plurality of AI/ML models (220) with the retrieved data includes the steps of:
configuring, by the one or more processors (202), one or more hyperparameters for each of the AI/ML model (220) among the plurality of AI/ML models (220); and
selecting, by the one or more processors (202), a training date range and test/prediction range for each of the AI/ML model (220) among the plurality of AI/ML models (220) for training.

4. The method (600) as claimed in claim 1, wherein for training the plurality of AI/ML models (220), an input is received from the user pertaining to a training purpose and a training name, wherein the training purpose and training name of the plurality of AI/ML models (220) are stored in the storage unit (206) in order to utilize the plurality of trained AI/ML models (220) in future.

5. The method (600) as claimed in claim 1, wherein the generated output for each of the trained AI/ML model (220) among the plurality of trained AI/ML models (220) include at least one of, an accuracy and a Root Mean Square Error (RMSE).

6. The method (600) as claimed in claim 1, wherein the step of generating, the output for each of trained AI/ML model (220) among the plurality of trained AI/ML models (220), based on training further includes the steps of:
representing, by the one or more processors (202), at least one of, the generated outputs and a training status list of each of the trained AI/ML model (220) among the plurality of trained AI/ML models (220) in one or more formats on a User Interface (UI) (306).

7. The method (600) as claimed in claim 6, wherein the training status list includes at least one of, a date of training, a day of the week of training, an actual value, a predicted value and a forecasted value.

8. The method (600) as claimed in claim 6 wherein the one or more formats includes at least one of, but not limited to, a tabular format and a graphical format.

9. The method (600) as claimed in claim 1, wherein the step of recommending, at least one trained AI/ML model (220) among the plurality of AI/ML models (220) to the user based on the generated output of each of the trained AI/ML model (220) among the plurality of trained AI/ML models (220), includes the steps of:
comparing, by the one or more processors (202), the generated output of each of the trained AI/ML model (220) with the generated outputs of remaining of the trained AI/ML models (220); and
recommending, by the one or more processors (202), at least one of the trained AI/ML model (220) from the plurality of AI/ML models (220) based on the comparison of the generated outputs using one or more predefined rules.

10. The method (600) as claimed in claim 9, wherein the one or more predefined rules include at least one of:
recommending, a trained AI/ML model (220) having higher accuracy compared to the accuracy of the remaining of the trained AI/ML models (220); and
recommending, a trained AI/ML model (220) having the lowest RMSE value compared to the RMSE values of the remaining of the trained AI/ML models (220).
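The comparison and rule application of claims 9 and 10 reduce to a ranking over the generated outputs. A minimal sketch, assuming the outputs are kept as a per-model dictionary of accuracy and RMSE (a structure not specified in the claims):

```python
# Illustrative sketch of claims 9-10: compare each trained model's outputs
# and recommend the model with the highest accuracy, using the lowest RMSE
# as the secondary rule. The dictionary layout is an assumption.

outputs = {
    "model_a": {"accuracy": 92.5, "rmse": 4.1},
    "model_b": {"accuracy": 95.1, "rmse": 3.2},
    "model_c": {"accuracy": 95.1, "rmse": 3.8},
}

def recommend(outputs):
    """Rank by (highest accuracy, then lowest RMSE) and return the best model."""
    return min(
        outputs,
        key=lambda name: (-outputs[name]["accuracy"], outputs[name]["rmse"]),
    )

best = recommend(outputs)
```

With the sample values above, `model_b` and `model_c` tie on accuracy, so the lower RMSE of `model_b` decides the recommendation.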

11. The method (600) as claimed in claim 1, wherein the recommended at least one trained AI/ML model (220) is utilized by the one or more processors (202) for predicting one or more future events.

12. A system (108) for recommending an Artificial Intelligence/Machine Learning (AI/ML) model (220), the system (108) comprising:
a retrieving unit (208), configured to, retrieve, data from a plurality of data sources (110);
a training unit (210), configured to, train, a plurality of AI/ML models (220) with the retrieved data;
a generating unit (212), configured to, generate, an output for each of the trained AI/ML model (220) among the plurality of AI/ML models (220) based on training; and
a recommending unit (214), configured to, recommend, the at least one trained AI/ML model (220) among the plurality of AI/ML models (220) to a user based on the generated output of each of the trained AI/ML models (220).

13. The system (108) as claimed in claim 12, wherein the retrieving unit (208) is further configured to:
preprocess, the retrieved data;
store, the pre-processed data in a storage unit (206); and
extract, one or more features from the pre-processed data for training the plurality of AI/ML models (220).

14. The system (108) as claimed in claim 12, wherein the training unit (210) trains the plurality of AI/ML models (220) with the retrieved data, by:
configuring, one or more hyperparameters for each of the AI/ML model (220) among the plurality of AI/ML models (220); and
selecting, a training date range and test/prediction range for each of the AI/ML model (220) among the plurality of AI/ML models (220) for training.

15. The system (108) as claimed in claim 12, wherein for training the plurality of AI/ML models (220), an input is received from the user pertaining to a training purpose and a training name to the plurality of AI/ML models (220), wherein the training purpose and training name of the plurality of AI/ML models (220) are stored in the storage unit (206) in order to utilize the plurality of trained AI/ML models (220) in future.

16. The system (108) as claimed in claim 12, wherein the generated outputs for each of the trained AI/ML model (220) among the plurality of trained AI/ML models (220) include at least one of, an accuracy and a Root Mean Square Error (RMSE).

17. The system (108) as claimed in claim 12, wherein the generating unit (212) is further configured to:
represent, at least one of, the generated outputs and a training status list of each of the trained AI/ML model (220) among the plurality of trained AI/ML models (220) in one or more formats on a User Interface (UI) (306).

18. The system (108) as claimed in claim 17, wherein the training status list includes at least one of, a date of training, a day of the week of training, an actual value, a predicted value and a forecasted value.

19. The system (108) as claimed in claim 17, wherein the one or more formats includes at least one of, but not limited to, a tabular format and a graphical format.

20. The system (108) as claimed in claim 12, wherein the recommending unit (214) recommends, at least one trained AI/ML model (220) among the plurality of AI/ML models (220) to the user based on the generated output of each of the trained AI/ML model (220) among the plurality of trained AI/ML models (220), by:
comparing, the generated outputs of each of the trained AI/ML model (220) with the generated outputs of remaining of the trained AI/ML models (220); and
recommending, at least one of the trained AI/ML model (220) from the plurality of AI/ML models (220) based on the comparison of the generated outputs using one or more predefined rules.

21. The system (108) as claimed in claim 20, wherein the one or more predefined rules include at least one of:
recommending a trained AI/ML model (220) having higher accuracy compared to the accuracy of the remaining of the trained AI/ML models (220); and
recommending a trained AI/ML model (220) having the lowest RMSE value compared to the RMSE values of the remaining of the trained AI/ML models (220).

22. The system (108) as claimed in claim 12, wherein the recommended at least one trained AI/ML model (220) is utilized by a predicting unit (216) for predicting one or more future events.

23. A User Equipment (UE) (102), comprising:
one or more primary processors (302) communicatively coupled to one or more processors (202), the one or more primary processors (302) coupled with a memory (304), wherein said memory (304) stores instructions which, when executed by the one or more primary processors (302), cause the UE (102) to:
transmit, data to the one or more processors (202);
view, generated outputs on the UI (306); and
receive, recommendations from the one or more processors (202);
wherein the one or more processors (202) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

#  | Name | Date
1 202321071950-STATEMENT OF UNDERTAKING (FORM 3) [20-10-2023(online)].pdf 2023-10-20
2 202321071950-PROVISIONAL SPECIFICATION [20-10-2023(online)].pdf 2023-10-20
3 202321071950-FORM 1 [20-10-2023(online)].pdf 2023-10-20
4 202321071950-FIGURE OF ABSTRACT [20-10-2023(online)].pdf 2023-10-20
5 202321071950-DRAWINGS [20-10-2023(online)].pdf 2023-10-20
6 202321071950-DECLARATION OF INVENTORSHIP (FORM 5) [20-10-2023(online)].pdf 2023-10-20
7 202321071950-FORM-26 [27-11-2023(online)].pdf 2023-11-27
8 202321071950-Proof of Right [12-02-2024(online)].pdf 2024-02-12
9 202321071950-DRAWING [19-10-2024(online)].pdf 2024-10-19
10 202321071950-COMPLETE SPECIFICATION [19-10-2024(online)].pdf 2024-10-19
11 Abstract.jpg 2025-01-11
12 202321071950-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
13 202321071950-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
14 202321071950-Covering Letter [24-01-2025(online)].pdf 2025-01-24
15 202321071950-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
16 202321071950-FORM 3 [31-01-2025(online)].pdf 2025-01-31