Abstract: SYSTEM AND METHOD FOR REFINING AN ARTIFICIAL INTELLIGENCE (AI) MODEL SELECTION AND SEQUENCING The present invention relates to a system (108) and a method (600) for refining an Artificial Intelligence (AI) model selection and sequencing. The method (600) includes the step of receiving data from at least one data source (110). Thereafter, selecting one or more AI models (220) to execute a task utilizing the received data. Furthermore, determining a sequence of the one or more selected AI models (220) based on a set of parameters of each of the one or more selected AI models (220). The method (600) includes the step of executing each of the one or more selected AI models (220) in the determined sequence utilizing the received data. Thereafter, evaluating a set of performance parameters of the one or more selected AI models (220). Furthermore, customizing the one or more selected AI models (220) based on the evaluation by updating the set of parameters to refine the selection of the one or more AI models (220). Ref. Fig. 2
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR REFINING AN ARTIFICIAL INTELLIGENCE (AI) MODEL SELECTION AND SEQUENCING
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication systems, and more particularly, to a method and a system for refining an Artificial Intelligence (AI) model selection and sequencing.
BACKGROUND OF THE INVENTION
[0002] In general, with an increase in the number of users, network service providers have been implementing upgrades to enhance service quality so as to keep pace with such high demand. With the advancement of technology, there is a demand for telecommunication services to induct up-to-date features into the scope of provision so as to enhance user experience. For this purpose, integrating Artificial Intelligence (AI) and Machine Learning (ML) into various network practices, such as estimating network performance, tracking the health of a network, enhancing user-interactive features, and monitoring security, has become essential. Incorporating advanced AI/ML methodology has become a priority to keep up with the rapidly evolving telecom sector. AI/ML incorporation is usually performed by training models with a specific data set to enable them to recognize patterns and trends and, based on these, to predict the required output. ML training on the given data extracted from a data source is performed by a specifically constructed system.
[0003] However, there exists complexity in performing data source integration due to the diversity of data sources, both internal and external, which is challenging and resource intensive. Moreover, there are chances that highly complex task computation yields less precise results, and due to incomplete source integration, coordination between linked models is suboptimal, leading to inconsistent and less reliable results. System performance falls short of its potential due to suboptimal algorithm selection and sequencing. Presently, no mechanism is available that can integrate the data sources and enable a user to intervene in the selection of the data processing methodology. There is a need to introduce a system and method capable of performing optimal integration of data sources. Such a system would yield a more satisfactory result if there were provision for the user to select and apply a desired methodology to the data.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provides a method and a system for refining an Artificial Intelligence (AI) model selection and sequencing.
[0005] In one aspect of the present invention, the method for refining the AI model selection and sequencing is disclosed. The method includes the step of receiving, by one or more processors, data from at least one data source. The method further includes the step of selecting, by the one or more processors, one or more AI models to execute a task utilizing the received data. The method further includes the step of determining, by the one or more processors, a sequence of the one or more selected AI models based on a set of parameters of each of the one or more selected AI models. The method further includes the step of executing, by the one or more processors, each of the one or more selected AI models in the determined sequence utilizing the received data. The method further includes the step of evaluating, by the one or more processors, a set of performance parameters of the one or more selected AI models by comparing a final output generated on the execution of each of the one or more selected AI models with a predefined set of performance parameters. The method further includes the step of customizing, by the one or more processors, the one or more selected AI models based on the evaluation, wherein the customizing includes at least one of updating the set of parameters to refine the selection of the one or more AI models.
[0006] In another embodiment, the data source is at least one of a Network File System (NFS), a Network Management System (NMS), a Network Data Analytics Function (NWDAF), an Application Programming Interface (API), and one or more databases.
[0007] In yet another embodiment, the one or more AI models are selected based on at least one of meta-learning and reinforcement learning.
[0008] In yet another embodiment, the set of parameters corresponds to at least one of a computational efficiency, data dependency, and compatibility with respect to each of the one or more selected AI models.
[0009] In yet another embodiment, the output of each of the one or more selected AI models is an input for a subsequent AI model of the one or more selected AI models.
[0010] In yet another embodiment, the final output is generated by a last AI model in the sequence of the one or more selected AI models and wherein the set of performance parameters correspond to at least one of an accuracy, a precision, and a recall of the final output.
[0011] In yet another embodiment, the predefined set of parameters correspond to at least one of an accuracy, a precision, and a recall of a previously trained AI model, wherein the predefined set of parameters is retrieved from a database.
[0012] In yet another embodiment, the method includes steps of storing, by the one or more processors, logs associated with the sequence and output generated by each of the one or more selected AI models in the sequence.
[0013] In another aspect of the present invention, the system for refining an Artificial Intelligence (AI) model selection and sequencing is disclosed. The system includes a receiving unit configured to receive data from at least one data source. The system further includes a selecting unit configured to select one or more AI models to execute a task utilizing the received data. The system further includes a determining unit configured to determine a sequence of the one or more selected AI models based on a set of parameters of each of the one or more selected AI models. The system further includes an executing unit configured to execute each of the one or more selected AI models in the determined sequence utilizing the received data. The system further includes an evaluating unit configured to evaluate a set of performance parameters of the one or more selected AI models by comparing a final output generated on the execution of each of the one or more selected AI models with a predefined set of performance parameters. The system further includes a customizing unit configured to customize the one or more selected AI models based on the evaluation, wherein the customizing includes at least one of updating the set of parameters to refine the selection of the one or more AI models.
[0014] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, cause the processor to perform the disclosed operations is provided. The processor is configured to receive data from at least one data source. The processor is further configured to select one or more AI models to execute a task utilizing the received data. The processor is further configured to determine a sequence of the one or more selected AI models based on a set of parameters of each of the one or more selected AI models. The processor is further configured to execute each of the one or more selected AI models in the determined sequence utilizing the received data. The processor is further configured to evaluate a set of performance parameters of the one or more selected AI models by comparing a final output generated on the execution of each of the one or more selected AI models with a predefined set of performance parameters. The processor is further configured to customize the one or more selected AI models based on the evaluation, wherein the customizing includes at least one of updating the set of parameters to refine the selection of the one or more AI models.
[0015] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0017] FIG. 1 is an exemplary block diagram of an environment for refining an Artificial Intelligence (AI) model selection and sequencing, according to one or more embodiments of the present invention;
[0018] FIG. 2 is an exemplary block diagram of a system for refining the AI model selection and sequencing, according to one or more embodiments of the present invention;
[0019] FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention;
[0020] FIG. 4 is an exemplary architecture for refining the AI model selection and sequencing, according to one or more embodiments of the present disclosure;
[0021] FIG. 5 is an exemplary signal flow diagram illustrating the flow for refining the AI model selection and sequencing, according to one or more embodiments of the present disclosure; and
[0022] FIG. 6 is a flow diagram of a method for refining the AI model selection and sequencing, according to one or more embodiments of the present invention.
[0023] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0024] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0025] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0026] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0027] Various embodiments of the present invention provide a system and a method for refining an Artificial Intelligence (AI) model selection and sequencing. The present invention is able to integrate data from various data sources within a network and outside the network so as to reduce the complexity of data integration from the various data sources and increase coordination between a plurality of AI models. The disclosed system and method use a combination of techniques to automatically select the most suitable AI model for a particular task from the plurality of AI models. In particular, the system determines an optimal sequence for chaining the plurality of AI models together. Further, the present invention provides a unique approach of feeding data to a first AI model, which produces an output. Then, the output produced by the first AI model becomes the input for the next AI model among the plurality of AI models. The final output of a last AI model among the plurality of AI models in the optimal sequence represents the system's result or prediction. The present invention automatically customizes the one or more selected AI models to refine the selection of the one or more AI models.
[0028] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for refining an Artificial Intelligence (AI) model 220 selection and sequencing, according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, a system 108, and data sources 110. In an embodiment, the AI models 220 refer to different frameworks or paradigms for solving problems or performing tasks using logic. The AI models 220 represent various approaches to logic design and problem-solving, each suited to different types of tasks. The disclosed system 108 selects one or more AI models 220 from among a plurality of the AI models 220 within the system 108 for the given task. Once the one or more AI models 220 are selected, the system 108 determines the optimal sequence for sequencing/chaining the one or more AI models 220 together. Herein, refining the one or more AI models 220 selection and sequencing refers to replacing the selected one or more AI models 220 with another AI model 220; based on the updated selection of the one or more AI models 220, the sequence of the one or more selected AI models 220 is changed by the system 108.
[0029] For the purpose of description and explanation, the description will be explained with respect to one or more User Equipments (UEs) 102, or to be more specific, with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0030] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as smartphones, Virtual Reality (VR) devices, Augmented Reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0031] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0032] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0033] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0034] The environment 100 further includes the data sources 110. Hereinafter, the data sources 110 are referred to as the one or more data sources 110 without limiting the scope of the invention. Based on the requirement, the terms data source 110 and data sources 110 are used interchangeably in the invention. In one embodiment, the one or more data sources 110 is at least one of a Network File System (NFS), a Network Management System (NMS), a Network Data Analytics Function (NWDAF), an Application Programming Interface (API), and one or more databases. In particular, the one or more data sources 110 is associated with at least one of, but not limited to, a third-party service provider or is proprietary to a service provider.
[0035] As per the illustrated embodiment, the one or more data sources 110 are configured to store data associated with the network 106. The one or more data sources 110 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of the one or more data sources 110 types are non-limiting and are not necessarily mutually exclusive, e.g., the database can be both commercial and cloud-based, or both relational and open-source, etc.
[0036] The environment 100 further includes the system 108 communicably coupled to the server 104, the UE 102, the one or more data sources 110 via the network 106. The system 108 is adapted to be embedded within the server 104 or is embedded as the individual entity.
[0037] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0038] FIG. 2 is an exemplary block diagram of the system 108 for refining the AI model 220 selection and sequencing, according to one or more embodiments of the present invention.
[0039] As per the illustrated and preferred embodiment, the system 108 for refining the AI model 220 selection and sequencing, includes one or more processors 202, a memory 204, a storage unit 206 and a plurality of AI models 220. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0040] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for refining the AI model 220 selection and sequencing. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0041] As per the illustrated embodiment, the storage unit 206 is configured to store data associated with the plurality of AI models 220. In particular, the storage unit 206 is configured to store a predefined set of parameters corresponding to at least one of, but not limited to, an accuracy, a precision, and a recall of a previously trained AI model 220 among the plurality of AI models 220. The storage unit 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of the storage unit 206 types are non-limiting and are not necessarily mutually exclusive, e.g., the database can be both commercial and cloud-based, or both relational and open-source, etc.
[0042] As per the illustrated embodiment, the system 108 includes the plurality of AI models 220. Herein, the plurality of AI models 220 that select suitable logic for particular tasks are generally Artificial Intelligence/Machine Learning (AI/ML) models. In an alternate embodiment, the plurality of AI models 220 are systematic procedures or formulas for solving problems or performing tasks, which are used to process data, make decisions, and perform various operations. For example, the AI model 220 facilitates solving real-world problems without extensive manual intervention. Herein, the plurality of AI models 220 and the one or more AI models 220 are used interchangeably without limiting the scope of the invention.
[0043] As per the illustrated embodiment, the system 108 includes the processor 202 for refining the AI model 220 selection and sequencing. The processor 202 includes a receiving unit 208, a selecting unit 210, a determining unit 212, an executing unit 214, an evaluating unit 216, a processing unit 218, a customizing unit 222, and a logging unit 224. The processor 202 is communicably coupled to the one or more components of the system 108 such as the memory 204, the storage unit 206, and the plurality of AI models 220. In an embodiment, operations and functionalities of the receiving unit 208, the selecting unit 210, the determining unit 212, the executing unit 214, the evaluating unit 216, the processing unit 218, the customizing unit 222, the logging unit 224 and the one or more components of the system 108 can be used in combination or interchangeably.
[0044] In one embodiment, initially the receiving unit 208 of the processor 202 is configured to receive data from the data sources 110. In another embodiment, the receiving unit 208 of the processor 202 is configured to receive data from at least one data source 110 among the data sources 110. Herein, the data received from the data sources 110 is at least one of, but not limited to, historical data associated with a specific task performed by the processor 202. In particular, the specific task is at least one of, but not limited to, classification, clustering, and regression of the received data. For example, the data sources 110 are at least one of, but not limited to, one or more databases from which the receiving unit 208 receives the data related to the historical task stored at one or more databases.
[0045] In one embodiment, the receiving unit 208 integrates the data received from the data sources 110. In particular, the receiving unit 208 integrates the data received from the data sources 110 within the network 106 and the data sources 110 outside the network 106. Herein, integrating data involves combining data from the various data sources 110 to provide a unified view or to enable comprehensive analysis. The process of integrating data is essential for gaining insights, improving decision-making, and ensuring consistency across the system 108. In one embodiment, the present system 108 includes an interface specifically constructed for the purpose of data source 110 integration. Interfaces such as Application Programming Interfaces (APIs) are utilized by the receiving unit 208 for the data source 110 integration. The APIs are a set of rules and protocols that allow different software applications to communicate with each other. In particular, the APIs are essential for integrating different systems, accessing services, and extending functionality.
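By way of a non-limiting illustration, the data source integration described above may be sketched as follows. Each data source is represented by a hypothetical callable that returns records, and the receiving unit merges them into one unified view; the source names and record fields here are illustrative assumptions, not part of the disclosed implementation.

```python
# Illustrative sketch only: integrate records from heterogeneous data
# sources (e.g., NMS, NWDAF) into a single unified list, tagging each
# record with the name of the source it came from.

def integrate(sources):
    """Combine records from several data sources into a unified view."""
    unified = []
    for name, fetch in sources.items():
        for record in fetch():
            # Tag each record with its origin for downstream analysis.
            unified.append({"source": name, **record})
    return unified

# Hypothetical sources returning hypothetical records.
sources = {
    "NMS":   lambda: [{"metric": "latency_ms", "value": 12}],
    "NWDAF": lambda: [{"metric": "load", "value": 0.7}],
}
print(integrate(sources))
```

In practice, the callables would wrap API clients or file-system readers for the respective data sources 110; the unified list then feeds the preprocessing stage.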
[0046] Upon receiving the data from at least one data source 110, the processing unit 218 of the processor 202 is configured to preprocess the data received from the at least one data source 110. In particular, the data integrated from the various data sources 110 is preprocessed by the processing unit 218. In one embodiment, the processing unit 218 is configured to normalize the integrated data to ensure the data consistency and quality within the system 108. The processing unit 218 performs at least one of, but not limited to, data normalization.
[0047] Data normalization is the process of at least one of, but not limited to, reorganizing the integrated data, removing redundant data within the integrated data, formatting the integrated data, removing null values from the integrated data, and handling missing values in the integrated data. The main goal of the processing unit 218 is to achieve a standardized data format across the entire system 108. The processing unit 218 eliminates duplicate data and inconsistencies, which reduces manual effort. The processing unit 218 ensures that the normalized data is stored appropriately in the storage unit 206 for subsequent retrieval and analysis.
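The normalization steps above may be sketched, by way of a non-limiting illustration, as follows; the record fields and the chosen key format are assumptions for illustration only.

```python
# Illustrative sketch only: deduplicate records, drop records with null
# (missing) values, and standardize key formatting across the data.

def normalize(records):
    """Deduplicate, drop records with missing values, standardize keys."""
    seen = set()
    normalized = []
    for record in records:
        # Remove records containing null (missing) values.
        if any(value is None for value in record.values()):
            continue
        # Standardize keys to one format (lowercase, underscores).
        cleaned = {k.strip().lower().replace(" ", "_"): v
                   for k, v in record.items()}
        # Eliminate exact duplicates.
        key = tuple(sorted(cleaned.items()))
        if key in seen:
            continue
        seen.add(key)
        normalized.append(cleaned)
    return normalized

raw = [
    {"Cell ID": "A1", "Throughput": 42},
    {"Cell ID": "A1", "Throughput": 42},    # duplicate record
    {"Cell ID": "B2", "Throughput": None},  # record with a null value
]
print(normalize(raw))  # one clean record remains
```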
[0048] Upon preprocessing the received data, the selecting unit 210 of the processor 202 is configured to select one or more AI models 220 from the plurality of AI models 220 to execute the task utilizing the received data. Herein, the selecting unit 210 selects the one or more AI models 220 based on at least one of, but not limited to, meta-learning and reinforcement learning on the received data. For example, in meta-learning, the selecting unit 210 learns from historical data related to previously executed tasks to improve the process of selecting the one or more AI models 220. In one embodiment, the selecting unit 210 is at least one of, but not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model.
[0049] In one embodiment, the selection of the one or more AI models 220 from among the plurality of AI models 220 is a critical part of the operation of the selecting unit 210 and involves understanding the requirements of the task. Herein, the selecting unit 210 determines whether the task is at least one of, but not limited to, classification, clustering, or another type, based on which the selecting unit 210 selects the one or more AI models 220. In one embodiment, the selecting unit 210 uses techniques like k-fold cross-validation to assess the performance and generalizability of the plurality of AI models 220, based on which the selecting unit 210 selects the one or more AI models 220.
[0050] In an alternate embodiment, the selecting unit 210 optimizes or tunes a set of parameters associated with the plurality of AI models 220 to improve the performance of the plurality of AI models 220, based on which the selecting unit 210 selects the one or more AI models 220. In one embodiment, the set of parameters corresponds to at least one of, but not limited to, a computational efficiency, a data dependency, and a compatibility with respect to each of the one or more AI models 220. In yet another embodiment, the selecting unit 210 retrieves some part of the received data, which acts as testing data, and tests the plurality of AI models 220 to determine the best performer among the plurality of AI models 220, based on which the selecting unit 210 selects the one or more AI models 220.
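The selection of the best-performing model via k-fold cross-validation, as described above, may be sketched by way of a non-limiting illustration as follows; the two toy candidate predictors (a mean predictor and a median predictor) and the synthetic data are assumptions for illustration only, not the disclosed implementation.

```python
# Illustrative sketch only: score each candidate model with k-fold
# cross-validation and select the candidate with the lowest held-out
# error, as the selecting unit might.

def k_fold_score(fit, ys, k=4):
    """Average squared error of a candidate over k held-out folds."""
    fold = len(ys) // k
    total = 0.0
    for i in range(k):
        test_idx = set(range(i * fold, (i + 1) * fold))
        train = [y for j, y in enumerate(ys) if j not in test_idx]
        prediction = fit(train)              # "fit" on the training folds
        total += sum((ys[j] - prediction) ** 2 for j in test_idx)
    return total / (k * fold)

def select_model(candidates, ys):
    """Return the candidate name with the lowest cross-validated error."""
    return min(candidates, key=lambda name: k_fold_score(candidates[name], ys))

candidates = {
    "mean":   lambda ys: sum(ys) / len(ys),
    "median": lambda ys: sorted(ys)[len(ys) // 2],
}
ys = [1, 2, 2, 2, 3, 2, 2, 9]   # the outlier favors the median predictor
best = select_model(candidates, ys)
print(best)  # median
```

The same scoring loop generalizes to real AI models by replacing the toy predictors with trained estimators and the squared error with a task-appropriate metric.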
[0051] Upon selecting the one or more AI models 220, the determining unit 212 of the processor 202 is configured to determine a sequence of the one or more selected AI models 220 based on the set of parameters of each of the one or more selected AI models 220. In particular, the determining unit 212 determines the sequence of the one or more selected AI models 220 for chaining the one or more selected AI models 220 together. In one embodiment, the set of parameters corresponds to at least one of, but not limited to, the computational efficiency, the data dependency, and the AI model 220 compatibility with respect to each of the one or more selected AI models 220, to ensure a smooth flow of the data between the chained one or more selected AI models 220. For example, the one or more selected AI models 220 are chained in the sequence based on the high computational efficiency of the one or more selected AI models 220.
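The sequencing by the set of parameters described above may be sketched, by way of a non-limiting illustration, as follows. Each selected model is described by a hypothetical computational-efficiency score and a data-dependency list (both assumptions for illustration); models are ordered so that every model appears after the models it depends on, and ties are broken in favor of higher efficiency.

```python
# Illustrative sketch only: order the selected models by data
# dependency first, then by computational efficiency.

def determine_sequence(models):
    """Order models so dependencies come first; prefer efficient models."""
    ordered, placed = [], set()
    while len(ordered) < len(models):
        # A model is eligible once all its dependencies are placed.
        ready = [m for m in models
                 if m["name"] not in placed
                 and all(d in placed for d in m["depends_on"])]
        # Among eligible models, place the most efficient one next.
        best = max(ready, key=lambda m: m["efficiency"])
        ordered.append(best["name"])
        placed.add(best["name"])
    return ordered

models = [
    {"name": "classifier", "efficiency": 0.6, "depends_on": ["cleaner"]},
    {"name": "cleaner",    "efficiency": 0.9, "depends_on": []},
    {"name": "ranker",     "efficiency": 0.8, "depends_on": ["classifier"]},
]
print(determine_sequence(models))  # ['cleaner', 'classifier', 'ranker']
```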
[0052] Upon determining the sequence of the one or more selected AI models 220, the executing unit 214 of the processor 202 is configured to execute each of the one or more selected AI models 220 in the determined sequence utilizing the received data. In particular, the received data is fed to a first AI model in the sequence of the one or more selected AI models 220. Further, the first AI model processes the received data and produces an output. Thereafter, the executing unit 214 provides the produced output to the next AI model present in the sequence of the one or more selected AI models 220. In particular, the output of each of the one or more selected AI models 220 is an input for a subsequent AI model of the one or more selected AI models 220. This process continues iteratively until the last AI model in the sequence is reached.
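The chained execution described above, in which the output of each model is the input of the subsequent model and the output of the last model is the final output, may be sketched by way of a non-limiting illustration as follows; the three stage functions are illustrative placeholders for the selected AI models 220.

```python
# Illustrative sketch only: run the selected models in the determined
# sequence, piping each model's output into the next model.

def execute_chain(models, data):
    """Feed data to the first model, then chain outputs to inputs."""
    output = data
    for model in models:
        output = model(output)
    return output  # the output of the last model is the final output

# Placeholder stages standing in for selected AI models.
chain = [
    lambda xs: [x for x in xs if x is not None],  # filtering stage
    lambda xs: [x * 2 for x in xs],               # transformation stage
    lambda xs: sum(xs),                           # aggregation stage
]
print(execute_chain(chain, [1, None, 2, 3]))  # 12
```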
[0053] For example, while executing, the one or more selected AI models 220 process the received data. Herein, the one or more selected AI models 220 are at least one of, but not limited to, a neural network or a decision tree logic. In one embodiment, the one or more selected AI models 220 are trained on historical data associated with the previously executed task. Based on the training, the one or more selected AI models 220 process the received data.
[0054] In an embodiment, the executing unit 214 executes each of the one or more selected AI models 220 iteratively. While the one or more selected AI models 220 are processed iteratively, when the last AI model in the sequence of the one or more selected AI models 220 produces an output, the output produced by the last AI model in the sequence is inferred as a final output. In particular, the final output is used for further analysis or application within the network 106.
[0055] Upon producing the final output, the evaluating unit 216 of the processor 202 is configured to evaluate overall performance of the process performed within the system 108. In particular, the evaluating unit 216 evaluates a set of performance parameters of the one or more selected AI models 220. Herein, the set of performance parameters include at least one of, but not limited to, an accuracy, a precision, and a recall of the final output by comparing the final output of the one or more selected AI models 220 with a predefined set of performance parameters.
[0056] In particular, the evaluating unit 216 evaluates the set of performance parameters of the one or more selected AI models 220 by retrieving a predefined set of performance parameters from the storage unit 206. In particular, the predefined set of performance parameters include at least one of, but not limited to, an accuracy, a precision, and a recall of the final output of a previously trained AI model 220. Herein, the predefined set of performance parameters associated with the previously trained AI model are predefined by the executing unit 214 based on training by applying one or more logics.
[0057] In one embodiment, the one or more logics may include at least one of, but not limited to, a k-means clustering, a hierarchical clustering, a Principal Component Analysis (PCA), an Independent Component Analysis (ICA), a deep learning logics such as Artificial Neural Networks (ANNs), a Convolutional Neural Networks (CNNs), a Recurrent Neural Networks (RNNs), a Long Short-Term Memory Networks (LSTMs), a Generative Adversarial Networks (GANs), a Q-Learning, a Deep Q-Networks (DQN), a Reinforcement Learning Logics, etc.
[0058] Further, the evaluating unit 216 evaluates the set of performance parameters of the one or more selected AI models 220 by comparing the final output of the one or more selected AI models with the predefined set of performance parameters retrieved from the storage unit 206. In one embodiment, the evaluating unit 216 determines a deviation when the set of performance parameters of the one or more selected AI models 220 is not within a range of the predefined set of performance parameters. Based on the deviation, the evaluating unit 216 infers that the one or more selected AI models 220 are not suitable to perform the task utilizing the received data. In an alternate embodiment, the evaluating unit 216 infers that the one or more selected AI models 220 are not suitable for the received data. Herein, the predefined set of performance parameters are the limits associated with the performance parameters of the one or more selected AI models 220.
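The deviation check described above can be sketched minimally. The dictionary-based interface and the `tolerance` margin are hypothetical; the specification only requires comparing measured performance parameters against predefined limits.

```python
def evaluate_models(measured, predefined, tolerance=0.05):
    """Compare measured performance parameters (e.g. accuracy,
    precision, recall) against predefined limits; flag the chain as
    unsuitable if any parameter falls below its limit beyond the
    tolerance, and report the deviations."""
    deviations = {}
    for name, limit in predefined.items():
        value = measured.get(name, 0.0)
        if value < limit - tolerance:
            deviations[name] = limit - value  # how far below the limit
    suitable = not deviations
    return suitable, deviations
```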
[0059] Upon evaluating the performance of the one or more selected AI models 220, the customizing unit 222 of the processor 202 is configured to customize the one or more selected AI models 220 based on the performance evaluation. Herein the customizing includes at least one of, but not limiting to, updating the set of parameters to refine the selection of the one or more AI models 220. In particular, the final output and the performance evaluation of the one or more selected AI models 220 is provided as feedback to the customizing unit 222 by the evaluating unit 216. Based on the feedback, the customizing unit 222 customizes the selection of the one or more AI models 220 and the sequencing/chaining of the one or more selected AI models 220.
[0060] For example, when the evaluating unit 216 infers that the one or more selected AI models 220 are not suitable for the received data, then the customizing unit 222 selects another AI model among the plurality of the AI models 220 so that the one or more selected AI models 220 are suitable for the received data. Similarly, the customizing unit 222 adjusts the parameters of the one or more selected AI models 220 to make them suitable for the received data.
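Replacing an unsuitable model with a better-scoring candidate, as in the example above, can be sketched as follows. Representing models by string identifiers, the `scores` mapping, and the `threshold` cut-off are illustrative assumptions.

```python
def customize_selection(selected, candidates, scores, threshold=0.8):
    """Replace any selected model whose evaluation score fell below the
    threshold with the best-scoring unused candidate from the plurality
    of AI models (hypothetical interface for illustration)."""
    unused = [m for m in candidates if m not in selected]
    unused.sort(key=lambda m: scores.get(m, 0.0), reverse=True)
    refined = []
    for model in selected:
        if scores.get(model, 0.0) < threshold and unused:
            refined.append(unused.pop(0))  # swap in the best spare model
        else:
            refined.append(model)
    return refined
```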
[0061] In another example, at least one of, the evaluating unit 216 and the customizing unit 222 continuously monitors and analyses the performance of the one or more selected AI models 220 associated with the tasks to refine the selection and the sequence of the one or more selected AI models 220. The continuous monitoring and analysis facilitates refining the sequence of the one or more selected AI models 220 that work well together, which improves the overall performance of the one or more selected AI models 220 associated with the tasks.
[0062] In one embodiment, the logging unit 224 of the processor 202 is configured to store logs pertaining to at least one of, but not limited to, the selection of the one or more AI models 220, the output produced by each of the one or more selected AI models 220 and the performance evaluation of the one or more selected AI models 220 in the storage unit 206. In one embodiment, the logs facilitate at least one of, but not limited to, monitoring and analysing system behaviour and performance of the system over time. In an alternate embodiment, the logs pertaining to at least one of, but not limited to, the selection of the one or more AI models 220, the output produced by each of the one or more selected AI models 220 and the performance evaluation of the one or more selected AI models 220 are notified to a user in real time. Advantageously, due to the automatic selection of the one or more AI models 220 and the sequencing of the one or more selected AI models 220, the accuracy and efficiency in complex tasks involving a plurality of AI models 220 is increased, due to which the overall system 108 performance is increased.
[0063] The receiving unit 208, the selecting unit 210, the determining unit 212, the executing unit 214, the evaluating unit 216, the processing unit 218, the customizing unit 222, and the logging unit 224 in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0064] FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for refining the AI model 220 selection and sequencing. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the UE 102 for the purpose of description and illustration and should nowhere be construed as limited to the scope of the present disclosure.
[0065] FIG. 3 shows communication between the UE 102, the system 108, and the data sources 110. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the UE 102, uses network protocol connection to communicate with the system 108 and the data sources 110. In an embodiment, the network protocol connection is the establishment and management of communication between the UE 102, the system 108, and the data sources 110 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols. The network protocol connection includes, but not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS) and Terminal Network (TELNET).
[0066] In an embodiment, the UE 102 includes a primary processor 302, and a memory 304 and a User Interface (UI) 306. In alternate embodiments, the UE 102 may include more than one primary processor 302 as per the requirement of the network 106. The primary processor 302, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0067] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for refining the AI model 220 selection and sequencing. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0068] In an embodiment, the User Interface (UI) 306 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The UI 306 of the UE 102 allows the user to transmit data within the network 106 to the system 108 for the AI model 220 selection and sequencing. Herein, the UE 102 acts as the data source 110. In one embodiment, the user receives the notification from the system 108 regarding at least one of, but not limited to, the selection of the AI model 220 and the sequencing of the one or more AI models 220. In one embodiment, the user may be at least one of, but not limited to, a network operator. In one embodiment, the user initiates the data source integration and the methodology for the selection of the one or more AI models 220 by means of the UI 306.
[0069] As mentioned earlier in FIG.2, the system 108 includes the processors 202, and the memory 204, for refining the AI model 220 selection and sequencing, which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0070] Further, as mentioned earlier the processor 202 includes the receiving unit 208, the selecting unit 210, the determining unit 212, the executing unit 214, the evaluating unit 216, the processing unit 218, the customizing unit 222, and the logging unit 224 which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0071] FIG. 4 is an exemplary architecture 400 of the system 108 for refining the AI model 220 selection and sequencing, according to one or more embodiments of the present disclosure.
[0072] The architecture 400 includes the data sources 110, a data sources integration unit 402, a pre-processing unit 404, a logic selection and sequencing unit 406, an iterative logic execution unit 408, the storage unit 206, a workflow manager 412 and the UI 306 communicably coupled to each other via the network 106.
[0073] In one embodiment, the data sources 110 are the one or more databases, which are structured collections of data that are managed and organized in a way that allows the system 108 easy access, retrieval, and manipulation. The data sources 110 are used to store, manage, and retrieve large amounts of information efficiently. In one embodiment, the data sources integration unit 402 integrates data from the various data sources 110. For example, the data sources integration unit 402 integrates data from the various data sources 110, both within the organization network 106 and from external sources.
[0074] In one embodiment, the pre-processing unit 404 preprocesses the integrated data from the various data sources 110. For example, the integrated data undergoes preprocessing to ensure data consistency and quality. In particular, the preprocessing involves tasks like data cleaning, normalization, and handling missing values.
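The preprocessing tasks named above (data cleaning, normalization, and handling missing values) can be sketched minimally. The record-of-dictionaries representation, the duplicate-dropping cleaning step, mean imputation, and min-max normalization are illustrative choices, not requirements of the specification.

```python
def preprocess(records, fields):
    """Clean integrated records: drop exact duplicates, fill missing
    numeric values with the field mean, and min-max normalize each
    field (hypothetical preprocessing pipeline for illustration)."""
    # Data cleaning: drop exact duplicate records.
    seen, cleaned = set(), []
    for rec in records:
        key = tuple(sorted(rec.items()))
        if key not in seen:
            seen.add(key)
            cleaned.append(dict(rec))
    for field in fields:
        present = [r[field] for r in cleaned if r.get(field) is not None]
        mean = sum(present) / len(present)
        for r in cleaned:
            if r.get(field) is None:
                r[field] = mean  # handle missing value by mean imputation
        lo = min(r[field] for r in cleaned)
        hi = max(r[field] for r in cleaned)
        span = hi - lo or 1.0
        for r in cleaned:
            r[field] = (r[field] - lo) / span  # min-max normalization
    return cleaned
```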
[0075] In one embodiment, the logic selection and sequencing unit 406 employs the AI models selection mechanism. The logic selection and sequencing unit 406 component uses a combination of techniques such as the meta-learning, the reinforcement learning, or other decision-making models to automatically select the most suitable one or more AI models 220 for the given task. Upon selection of the one or more AI models 220, the logic selection and sequencing unit 406 determines the optimal sequence for chaining the one or more selected AI models 220 together based on the set of parameters with respect to each of the one or more selected AI models 220.
[0076] In one embodiment, upon determining the optimal sequence for chaining the one or more selected AI models 220, the iterative logic execution unit 408 is configured to feed the received data to the first AI model in the sequence of the one or more selected AI models 220; the first AI model then produces the output, which is fed to the next AI model as the input. The iterative logic execution unit 408 continues the process iteratively until the last AI model in the sequence is reached. When the last AI model processes the received data, the final output is produced, which is represented as the system's 108 result or prediction.
[0077] Further, the final output and the information related to the sequence of the one or more AI models 220 are stored in the storage unit 206. The workflow manager 412 extracts the information related to the final output and the sequence of the one or more AI models 220 and provides the information to the UI 306. The workflow manager 412 is a tool designed to streamline, coordinate, and automate tasks and processes within an organization. The workflow manager 412 facilitates managing complex workflows by defining, monitoring, and optimizing the flow of work from one step to another.
[0078] FIG. 5 is a signal flow diagram illustrating the flow for refining the AI model selection and sequencing, according to one or more embodiments of the present disclosure.
[0079] At step 502, the system 108 receives the data from the data sources 110 in the network 106 for executing the tasks. For example, in order to monitor health of various network elements in the network 106, the system 108 receives the data associated with the health of various network elements from the data sources 110. Further, the system 108 integrates the data received from the data sources 110.
[0080] At step 504, the system 108 selects the one or more AI models 220 from the plurality of the AI models 220 to execute the task utilizing the received data. For example, the system 108 tests the plurality of the AI models 220 by providing the integrated data as the input data and selects the one or more AI models 220 based on the comparison of the generated outputs. For example, the one or more AI models 220 that generate the best outputs are selected for executing the task.
[0081] At step 506, the system 108 determines the optimal sequence of the one or more selected AI models 220 based on the set of parameters of each of the one or more selected AI models. For example, the at least one selected AI model 220 among the one or more selected AI models 220 with high output and efficiency is added first in the sequence.
[0082] At step 508, the system 108 executes the one or more AI models 220 in the determined sequence utilizing the received data. For example, the received data is fed to the first AI model which is processed, and the output is produced. The output generated by the first AI model is fed to the next AI model. The execution continues iteratively until the last AI model in the sequence is reached. When the last AI model produces the output, that output is considered as the final output of the system 108.
[0083] At step 510, the system 108 evaluates the set of performance parameters of the one or more selected AI models 220 by comparing the final output produced on the execution of each of the one or more selected AI models 220 with the predefined set of performance parameters. Depending on the performance evaluation, the system 108 provides the feedback to one or more units within the system 108 that refines the one or more AI models 220 selection and chaining process of the one or more selected AI models 220 for future tasks. Further, the system 108 notifies the user regarding the performance evaluation of the one or more selected AI models 220.
[0084] FIG. 6 is a flow diagram of a method 600 for refining the AI model 220 selection and sequencing, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0085] At step 602, the method 600 includes the step of receiving data from at least one data source among the data sources 110. In one embodiment, the receiving unit 208 receives data from the at least one data source among the data sources 110. In particular, the receiving unit 208 extracts data such as at least one of, but not limited to, a file input and data as an input stream from the data sources 110. Herein, the receiving unit 208 extracts data from at least one of, but not limited to, a Hypertext Transfer Protocol version 2 (HTTP2) request received at the processor 202, a Hadoop Distributed File System (HDFS), the NWDAF, the NMS, the servers 104, one or more databases and Network Attached Storage (NAS). For example, in order to perform a specific task such as at least one of, but not limited to, anomaly detection, the receiving unit 208 extracts the data associated with the anomaly detection task from the one or more databases.
[0086] Further, the receiving unit 208 integrates the data received from the data sources 110. In particular, the receiving unit 208 utilizes the APIs for receiving and integrating the data received from the data sources 110. For example, the data received from the data sources 110 is combined by the receiving unit 208. Thereafter, the integrated data is preprocessed by the processing unit 218 to ensure the data consistency and quality within the system 108. Herein, preprocessing of the integrated data includes at least one of, but not limited to, data cleaning, normalization, and handling missing values.
[0087] At step 604, the method 600 includes the step of selecting one or more AI models to execute the task utilizing the received data. In one embodiment, the selecting unit 210 selects the one or more AI models to execute the task utilizing the received data. Herein, the selecting unit 210 selects one or more AI models 220 among the plurality of the AI models 220. In particular, the selecting unit 210 uses a combination of techniques such as meta-learning, reinforcement learning, or other decision-making logics to automatically select the most suitable AI models for the task. For example, let us consider there are 10 AI models which are trained on the historical data related to the task; for testing the 10 AI models, the selecting unit 210 provides some part of the received data as input to the 10 AI models. Based on the input, the 10 AI models generate outputs. The selecting unit 210 compares the outputs of the 10 AI models and selects the 5 AI models that have generated the best outputs in terms of mean accuracy and consistency.
[0088] At step 606, the method 600 includes the step of determining the sequence of the one or more selected AI models 220 based on the set of parameters of each of the one or more selected AI models 220. In one embodiment, the determining unit 212 determines the sequence of the one or more selected AI models 220. For example, the determining unit 212 determines the optimal sequence of the one or more selected AI models 220 by comparing the set of parameters related to the output generated by each of the one or more selected AI models 220 while testing. Herein, the AI model with the more accurate output and high efficiency is added first in the sequence, and based on the accuracy and the efficiency of the other selected AI models, the determining unit 212 adds the selected AI models 220 to the sequence.
[0089] At step 608, the method 600 includes the step of executing each of the one or more selected AI models in the determined sequence utilizing the received data. In one embodiment, the executing unit 214 is configured to execute each of the one or more selected AI models in the determined sequence utilizing the received data. For example, let us assume that there are 5 AI models selected for the task with the determined sequence, such as an AI model 1, an AI model 2, …, an AI model 5. The received data is fed to the AI model 1, which generates the output based on the fed data. Thereafter, the output generated by the AI model 1 is fed to the AI model 2 as the input. Based on the fed input, the AI model 2 generates the output. This process continues iteratively until the AI model 5 in the sequence is reached. The output generated by the AI model 5 is considered as the final output of the system 108. The final output is used for at least one of, but not limited to, the task, further analysis or application within the network 106.
[0090] At step 610, the method 600 includes the step of evaluating a set of performance parameters of the one or more selected AI models 220 by comparing the final output of the one or more selected AI models 220 with the predefined set of performance parameters. In one embodiment, the evaluating unit 216 is configured to evaluate the overall performance of the one or more selected AI models 220 by evaluating the set of performance parameters of the one or more selected AI models 220. In particular, the evaluating unit 216 compares the set of performance parameters related to the final output of the one or more selected AI models 220 with the predefined set of performance parameters, which are retrieved from the storage unit 206 by the evaluating unit 216. For example, the accuracy of the final output of the 5 AI models is compared with the predefined accuracy. If the accuracy of the final output of the 5 AI models matches the predefined accuracy, then the one or more selected AI models 220 (the 5 AI models) are considered suitable to perform the task. If the accuracy of the final output of the 5 AI models does not match the predefined accuracy, then the one or more selected AI models 220 (the 5 AI models) are considered not suitable to perform the task.
[0091] At step 612, the method 600 includes the step of customizing the one or more selected AI models 220 based on the evaluation. Depending on the performance evaluation of the one or more selected AI models 220, the customizing unit 222 customizes the one or more selected AI models 220 by providing the feedback to the selecting unit 210 to refine at least one of, the selection of the one or more AI models 220 and the chaining/sequencing of the one or more AI models 220 for future tasks. For example, the customizing unit 222 adjusts or updates the set of parameters, retrains the one or more AI models 220, or updates the AI model 220 selection logic for refining the selection and chaining of the one or more AI models 220 for the future task.
[0092] In one embodiment, the logs pertaining to the at least one of, but not limited to, the selection of the one or more AI models 220, the output produced by each of the one or more selected AI models 220 and the performance evaluation of the one or more selected AI models 220 are stored in the storage unit 206. In one embodiment, the user is notified regarding the selection of the one or more AI models 220, the output produced by each of the one or more selected AI models 220 and the performance evaluation of the one or more selected AI models 220 in real time. Advantageously, due to automatic selection of the one or more AI models 220 and sequencing of the one or more selected AI models 220 the accuracy and efficiency in complex tasks involving multiple models is increased due to which the overall system 108 performance is increased.
[0093] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor 202, cause the processor 202 to perform the following operations. The processor 202 is configured to receive data from at least one data source 110. The processor 202 is further configured to select one or more AI models 220 to execute the task utilizing the received data. The processor 202 is further configured to determine the sequence of the one or more selected AI models 220 based on the set of parameters of each of the one or more selected AI models 220. The processor 202 is further configured to execute each of the one or more selected AI models 220 in the determined sequence utilizing the received data. The processor 202 is further configured to evaluate the set of performance parameters of the one or more selected AI models 220 by comparing the final output generated on the execution of each of the one or more selected AI models 220 with the predefined set of performance parameters. The processor 202 is further configured to customize the one or more selected AI models 220 based on the evaluation, the customizing includes at least one of updating the set of parameters to refine the selection of the one or more AI models 220.
[0094] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0095] The present disclosure provides technical advancements of efficient and simplified data integration which speeds up access to diverse datasets, improving analysis and decision-making. The invention provides customized algorithm chaining which is a dynamic methodology implementation for enhancing system performance based on user-specific data. Due to the enhanced system performance the processing speed is increased, and more accurate outcomes are generated. The streamlined decision making facilitates quicker development cycles and more effective solution. The invention provides optimized model sequencing which increases accuracy and efficiency in complex tasks involving multiple models.
[0096] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0097] Environment - 100;
[0098] User Equipment (UE) - 102;
[0099] Server - 104;
[00100] Network- 106;
[00101] System -108;
[00102] Data sources – 110;
[00103] Processor - 202;
[00104] Memory - 204;
[00105] Storage unit – 206;
[00106] Receiving unit – 208;
[00107] Selecting unit – 210;
[00108] Determining unit – 212;
[00109] Executing unit – 214;
[00110] Evaluating unit – 216;
[00111] Processing unit -218;
[00112] Plurality of AI models – 220;
[00113] Customizing unit – 222;
[00114] Logging unit – 224;
[00115] Primary Processor – 302;
[00116] Memory – 304;
[00117] User Interface (UI) – 306;
[00118] Data sources integration unit – 402;
[00119] Preprocessing unit – 404;
[00120] Logic selection and sequencing unit – 406;
[00121] Iterative logic execution unit – 408;
[00122] Workflow manager – 412.
CLAIMS
We Claim:
1. A method (600) of refining an Artificial Intelligence (AI) model selection and sequencing, the method (600) comprising the steps of:
receiving, by one or more processors (202), data from at least one data source (110);
selecting, by the one or more processors (202), one or more AI models (220) to execute a task utilizing the received data;
determining, by the one or more processors (202), a sequence of the one or more selected AI models (220) based on a set of parameters of each of the one or more selected AI models (220);
executing, by the one or more processors (202), each of the one or more selected AI models (220) in the determined sequence utilizing the received data;
evaluating, by the one or more processors (202), a set of performance parameters of the one or more selected AI models (220) by comparing a final output generated on the execution of each of the one or more selected AI models (220) with a predefined set of performance parameters; and
customizing, by the one or more processors (202), the one or more selected AI models (220) based on the evaluation, wherein the customizing includes at least one of updating the set of parameters to refine the selection of the one or more AI models (220).
2. The method (600) as claimed in claim 1, wherein the data source (110) is at least one of, a Network File System (NFS), a Network Management System (NMS), a Network Data Analytics Function (NWDAF), an Application Programming Interface (API), and one or more databases.
3. The method (600) as claimed in claim 1, wherein the one or more AI models (220) are selected based on at least one of meta-learning and reinforcement learning.
4. The method (600) as claimed in claim 1, wherein the set of parameters corresponds to at least one of computational efficiency, data dependency, and compatibility with respect to each of the one or more selected AI models (220).
5. The method (600) as claimed in claim 1, wherein an output of each of the one or more selected AI models (220) is an input for a subsequent AI model of the one or more selected AI models (220).
6. The method (600) as claimed in claim 1, wherein the final output is generated by a last AI model (220) in the sequence of the one or more selected AI models (220), and wherein the set of performance parameters corresponds to at least one of an accuracy, a precision, and a recall of the final output.
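The three performance parameters named in claim 6 have standard definitions for a binary-labelled output, sketched below; the toy prediction and ground-truth vectors are illustrative only.

```python
from typing import Dict, List

def performance_parameters(predicted: List[int],
                           actual: List[int]) -> Dict[str, float]:
    """Compute accuracy, precision, and recall for binary labels."""
    pairs = list(zip(predicted, actual))
    tp = sum(p == 1 and a == 1 for p, a in pairs)   # true positives
    fp = sum(p == 1 and a == 0 for p, a in pairs)   # false positives
    fn = sum(p == 0 and a == 1 for p, a in pairs)   # false negatives
    correct = sum(p == a for p, a in pairs)
    return {
        "accuracy": correct / len(pairs),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

# Toy usage: one false positive, no false negatives.
metrics = performance_parameters([1, 0, 1, 1], [1, 0, 0, 1])
```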
7. The method (600) as claimed in claim 1, wherein the predefined set of performance parameters corresponds to at least one of an accuracy, a precision, and a recall of a previously trained AI model (220), and wherein the predefined set of performance parameters is retrieved from a database (206).
8. The method (600) as claimed in claim 1, further comprising the step of:
storing, by the one or more processors (202), logs associated with the sequence and the output generated by each of the one or more selected AI models (220) in the sequence.
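The logging step of claim 8 could be realised as shown below; the record layout (timestamp, sequence, per-model outputs) and the in-memory store are assumptions, not prescribed by the specification.

```python
import json
import time
from typing import Any, Dict, List

def log_sequence(store: List[str],
                 sequence: List[str],
                 outputs: List[Any]) -> Dict[str, Any]:
    """Append one JSON log record covering a full execution of the sequence."""
    record = {
        "timestamp": time.time(),
        "sequence": sequence,          # model names in execution order
        "outputs": outputs,            # output of each model, in order
    }
    store.append(json.dumps(record))
    return record

# Toy usage: log one run of a two-model sequence.
logs: List[str] = []
rec = log_sequence(logs, ["denoise", "predict"], [6, 7])
```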
9. A system (108) for refining an Artificial Intelligence (AI) model selection and sequencing, the system (108) comprising:
a receiving unit (208) configured to receive data from at least one data source (110);
a selecting unit (210) configured to select one or more AI models (220) to execute a task utilizing the received data;
a determining unit (212) configured to determine a sequence of the one or more selected AI models (220) based on a set of parameters of each of the one or more selected AI models (220);
an executing unit (214) configured to execute each of the one or more selected AI models (220) in the determined sequence utilizing the received data;
an evaluating unit (216) configured to evaluate a set of performance parameters of the one or more selected AI models (220) by comparing a final output generated upon execution of each of the one or more selected AI models (220) with a predefined set of performance parameters; and
a customizing unit (222) configured to customize the one or more selected AI models (220) based on the evaluation, wherein the customizing includes updating the set of parameters to refine the selection of the one or more AI models (220).
10. The system (108) as claimed in claim 9, wherein the data source (110) is at least one of a Network File System (NFS), a Network Management System (NMS), a Network Data Analytics Function (NWDAF), an Application Programming Interface (API), and one or more databases.
11. The system (108) as claimed in claim 9, wherein the one or more AI models (220) are selected based on at least one of meta-learning and reinforcement learning applied to the received data.
12. The system (108) as claimed in claim 9, wherein the set of parameters corresponds to at least one of computational efficiency, data dependency, and compatibility with respect to each of the one or more selected AI models (220).
13. The system (108) as claimed in claim 9, wherein an output of each of the one or more selected AI models (220) is an input for a subsequent AI model (220) of the one or more selected AI models (220).
14. The system (108) as claimed in claim 9, wherein the final output is generated by a last AI model (220) in the sequence of the one or more selected AI models (220), and wherein the set of performance parameters corresponds to at least one of an accuracy, a precision, and a recall of the final output.
15. The system (108) as claimed in claim 9, wherein the predefined set of performance parameters corresponds to at least one of an accuracy, a precision, and a recall of a previously trained AI model (220), and wherein the predefined set of performance parameters is retrieved from a database (206).
16. The system (108) as claimed in claim 9, further comprising:
a logging unit (224) configured to store logs associated with the sequence and output generated by each of the one or more selected AI models (220) in the sequence.
| # | Name | Date |
|---|---|---|
| 1 | 202321067378-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2023(online)].pdf | 2023-10-07 |
| 2 | 202321067378-PROVISIONAL SPECIFICATION [07-10-2023(online)].pdf | 2023-10-07 |
| 3 | 202321067378-FORM 1 [07-10-2023(online)].pdf | 2023-10-07 |
| 4 | 202321067378-FIGURE OF ABSTRACT [07-10-2023(online)].pdf | 2023-10-07 |
| 5 | 202321067378-DRAWINGS [07-10-2023(online)].pdf | 2023-10-07 |
| 6 | 202321067378-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2023(online)].pdf | 2023-10-07 |
| 7 | 202321067378-FORM-26 [27-11-2023(online)].pdf | 2023-11-27 |
| 8 | 202321067378-Proof of Right [12-02-2024(online)].pdf | 2024-02-12 |
| 9 | 202321067378-DRAWING [06-10-2024(online)].pdf | 2024-10-06 |
| 10 | 202321067378-COMPLETE SPECIFICATION [06-10-2024(online)].pdf | 2024-10-06 |
| 11 | Abstract.jpg | 2024-12-07 |
| 12 | 202321067378-Power of Attorney [24-01-2025(online)].pdf | 2025-01-24 |
| 13 | 202321067378-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf | 2025-01-24 |
| 14 | 202321067378-Covering Letter [24-01-2025(online)].pdf | 2025-01-24 |
| 15 | 202321067378-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf | 2025-01-24 |
| 16 | 202321067378-FORM 3 [27-01-2025(online)].pdf | 2025-01-27 |