
Method And System For Selecting At Least One Of A Plurality Of Logic Models

Abstract: The present disclosure relates to a system (120) and a method (600) for selecting at least one of a plurality of logic models. The method (600) includes the steps of receiving (605) a plurality of logic models and a dataset related to the logic models and relevant to the task, assigning (610) and executing (615) each of the logic models on separate processing units, and computing (620) performance metrics for each execution. The method (600) further includes comparing (625) the performance metrics and output of each execution and selecting (630) at least one logic model from the plurality of logic models that possesses the highest performance metrics and generates the best output for a given task. The method (600) also includes generating (635) the output utilizing the at least one selected logic model. Ref. Fig. 2


Patent Information

Application #
Filing Date
06 October 2023
Publication Number
21/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR SELECTING AT LEAST ONE OF A PLURALITY OF LOGIC MODELS
2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates generally to a wireless communication system, and in particular, to a method and a system for selecting at least one of a plurality of logic models.
BACKGROUND OF THE INVENTION
[0002] Different logic models can be applied to solve a problem; however, it is not always clear which logic models will perform well. While some logic models may excel in certain situations, other logic models may underperform in the same situations. Further, achieving high accuracy is crucial in many applications, such as image recognition, natural language processing, financial analysis, and more. The choice of logic models can greatly impact the accuracy of the results. Furthermore, traditional approaches rely on evaluating multiple logic models sequentially, which can be time-consuming, especially when dealing with large datasets or complex computations.
[0003] There is, therefore, a need for effective solutions for comparing multiple logic models and identifying the logic models that can deliver accurate and precise output, thereby improving the efficiency of the performance of various tasks across a network system. In particular, there is a need for running logic models in parallel to make this comparison and selection, which significantly speeds up the process of finding the most appropriate logic models to perform the given task.

SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and a system for selecting at least one of a plurality of logic models.
[0005] In one aspect of the present invention, the method for selecting at least one of the plurality of logic models is disclosed. The method includes a step of receiving, by one or more processors, the plurality of logic models and a corresponding set of data to execute each of the plurality of logic models. The method includes the step of assigning, by the one or more processors, each of the plurality of logic models a section of the received dataset. The method further includes the step of executing, by the one or more processors, each of the plurality of logic models utilizing the assigned section of the received dataset. The method further includes the step of computing, by the one or more processors, evaluation metrics corresponding to the execution of each of the plurality of logic models. The method further includes the step of comparing, by the one or more processors, the computed evaluation metrics corresponding to the execution of each of the plurality of logic models. The method further includes the step of selecting, by the one or more processors, at least one logic model from the plurality of logic models based on the comparison.
[0006] In an embodiment, the plurality of logic models is received by a Graphical User Interface (GUI) or a Command Line Interface (CLI) via a request received from one of a service, a microservice, a software component, and an application.
[0007] In an embodiment, the dataset corresponds to data received from at least one of a Network Management System (NMS), a Network File System (NFS), a Network Data Analytics Function (NWDAF), Application Programming Interfaces (API), and databases, and wherein the plurality of logic models is at least one of a Large Language Model (LLM) and a Machine Learning (ML) model.
[0008] In an embodiment, the plurality of logic models and the corresponding dataset is received from one of a User Equipment (UE) and a user interface.
[0009] In an embodiment, each of the plurality of logic models is assigned with the section of the received dataset on a dynamic basis.
[0010] In an embodiment, each of the plurality of logic models is executed independently and concurrently.
[0011] In an embodiment, the evaluation metrics correspond to at least one of accuracy, precision, and recall rate of each of the executed plurality of logic models, and the evaluation metrics are stored in a database.
[0012] In an embodiment, the at least one selected logic model is associated with higher and better evaluation metrics in comparison to other logic models of the plurality of logic models.
[0013] In an embodiment, on selection of the at least one logic model, the method comprises the step of generating, by the one or more processors, an output for the task utilizing the at least one selected logic model.
[0014] In another aspect of the present invention, the system for selecting at least one of the plurality of logic models is disclosed. The system includes a receiving unit configured to receive the plurality of logic models and a corresponding set of data to execute each of the plurality of logic models. The system further includes an assigning unit configured to assign each of the plurality of logic models a section of the received dataset. The system further includes an executing unit configured to execute each of the plurality of logic models utilizing the assigned section of the received dataset. The system further includes a computing unit configured to compute the evaluation metrics corresponding to the execution of each of the plurality of logic models, and a comparison unit configured to compare the computed evaluation metrics corresponding to the execution of each of the plurality of logic models. The system further includes a selecting unit configured to select at least one logic model from the plurality of logic models based on the comparison.
[0015] In yet another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive a plurality of logic models and corresponding dataset to execute each of the plurality of logic models and assign each of the plurality of logic models with a section of the received dataset. The processor is configured to execute each of the plurality of logic models utilizing the assigned section of the received dataset. The processor is further configured to compute evaluation metrics corresponding to the execution of each of the plurality of logic models. The processor is further configured to compare the computed evaluation metrics corresponding to the execution of each of the plurality of logic models. The processor is further configured to select at least one logic model from the plurality of logic models based on the comparison.
[0016] In yet another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors being coupled with a memory. The memory stores instructions which, when executed by the primary processors, cause the UE to receive a plurality of logic models and a corresponding dataset to execute each of the plurality of logic models.
[0017] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0019] FIG. 1 is an exemplary block diagram of an environment for selecting at least one of a plurality of logic models, according to one or more embodiments of the present invention;
[0020] FIG. 2 is an exemplary block diagram of a system for selecting at least one of the plurality of logic models, according to one or more embodiments of the present invention;
[0021] FIG. 3 is a schematic representation of a workflow of the system of FIG. 1, according to one or more embodiments of the present invention;
[0022] FIG. 4 is an exemplary block diagram of an architecture implemented in the system 120 of FIG. 2 for selecting at least one of the plurality of logic models, according to one or more embodiments of the present invention;
[0023] FIG. 5 is a signal flow diagram for selecting one of the plurality of logic models, according to one or more embodiments of the present invention; and
[0024] FIG. 6 is a flowchart of a method for selecting one of the plurality of logic models, according to one or more embodiments of the present invention.
[0025] The foregoing shall be more apparent from the following detailed description of the invention.

DETAILED DESCRIPTION OF THE INVENTION
[0026] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0027] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed here below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0028] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0029] In an embodiment, if a task is to be performed, a plurality of logic models is present to carry out the task. The logic models need a dataset to perform the task. However, the efficiency and complexity with which each of the plurality of logic models extracts the most appropriate output for the task varies from one logic model to another. In one of the embodiments, a task intends to analyse a large amount of data and deliver a result, and one of the logic models in the plurality of logic models possesses the ability to generate the appropriate result. Conventionally, the selection of the logic model able to produce the best result was inefficient and time consuming. This disadvantage was due to generating the result from each of the logic models sequentially, or in queues.
[0030] To address these problems, in various embodiments of the present invention, a system and a method are provided to receive the logic models and the dataset relevant to the task to be performed, execute the logic models, compute and compare performance metrics of each of the plurality of logic models, and select at least one logic model from the plurality of logic models. Further, the present invention generates an output utilizing the at least one selected logic model to deliver an appropriate result.
[0031] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for selecting at least one of a plurality of logic models, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 110, a server 115, a network 105 and a system 120 communicably coupled to each other for selecting at least one of the plurality of logic models.
[0032] In an embodiment, each of the plurality of logic models indicates a logic model to execute any specific task or solve any problem received at the user interface 215. The logic model refers to at least, but not limited to, one of a Large Language Model (LLM) and a Machine Learning (ML) model.
[0033] As per the illustrated embodiment, and for the purpose of description and illustration, the UE 110 includes, but is not limited to, a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 110 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 110a, the second UE 110b, and the third UE 110c will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0034] In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as a smartphone, a virtual reality (VR) device, an augmented reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0035] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0036] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0037] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP, an Application Programming Interface or some combination thereof.
[0038] The environment 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is configured to select at least one of the plurality of logic models. As per one or more embodiments, the system 120 is adapted to be embedded within the server 115 or embedded as an individual entity.
[0039] The operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0040] FIG. 2 is an exemplary block diagram of the system 120 for selecting at least one of the plurality of logic models, according to one or more embodiments of the present invention.
[0041] As per the illustrated embodiment, the system 120 includes one or more processors 205, a memory 210, a user interface 215, and a database 220. For the purpose of description and explanation, the description will be explained with respect to one processor 205 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0042] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0043] In an embodiment, the user interface 215 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of such components include, but are not limited to, the UE 110 and the database 220.
[0044] The database 220 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, Network Management System (NMS), Network File System (NFS), Network Data Analytics Function (NWDAF), Application Programming Interface (API) repositories, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 220 types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source, etc.
[0045] In order for the system 120 to select at least one of the plurality of logic models, the processor 205 includes one or more modules. In one embodiment, the one or more modules/units include, but are not limited to, a receiving unit 225, an assigning unit 230, an execution unit 235, a computing unit 240, a comparison unit 245, a selecting unit 250, and a generating unit 255, communicably coupled to each other for selecting at least one of the plurality of logic models.
[0046] In one embodiment, each of the receiving unit 225, the assigning unit 230, the execution unit 235, the computing unit 240, the comparison unit 245, the selecting unit 250 and the generating unit 255 can be used in combination or interchangeably for selecting at least one of the plurality of logic models.
[0047] The receiving unit 225, the assigning unit 230, the execution unit 235, the computing unit 240, the comparison unit 245, the selecting unit 250 and the generating unit 255, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store logic models that, when executed by the processing resource, implement the processor to execute a task or solve a problem given at the user interface. In such examples, the system 120 may comprise the memory 210 storing the logic models and the processing resources to execute the task, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0048] In an embodiment, the receiving unit 225 is configured to receive a plurality of logic models to perform a task. The tasks which are to be performed using the logic models refer to specific problems related to at least one of, but not limited to, image recognition, natural language processing, and financial analysis in 5th Generation (5G) telecommunications, finding applications in healthcare, finance, and autonomous systems. The task includes, but is not limited to, network monitoring and surveillance, network optimization, fault detection and predictive maintenance, traffic management, quality of service enhancement, anomaly detection, user experience personalization, network command interpretation, predictive financial modelling, and real time billing and pricing optimization.
[0049] The plurality of logic models that is received by the receiving unit 225 at the user interface 215 to perform the tasks is at least one of, but not limited to, the LLM and the ML model. The LLM is a subset of artificial intelligence models which are trained on a large number of diverse and extensive datasets to understand and generate human language. The LLM includes, but is not limited to, the Generative Pre-trained Transformer (GPT), Bidirectional Encoder Representations from Transformers (BERT), and Text-To-Text Transfer Transformer (T5). The ML model is an artificial intelligence-based model, particularly a mathematical framework or algorithm which is trained on a given dataset to predict on a new random dataset. The ML model includes, but is not limited to, Regression Models, Decision Trees, Random Forest, Support Vector Machines (SVM), Neural Networks, K-means Clustering, Hierarchical Clustering, Principal Component Analysis, Q-Learning, Deep Q-Networks, and Generative Adversarial Networks (GANs). Each of the plurality of logic models is utilized for performing the task. In an embodiment, the receiving unit 225 is further configured to receive the dataset relevant to perform the given task. The received dataset is related to performing the task using the plurality of logic models which are received via the UE 110. The dataset is further received by accessing at least one of a Network Management System (NMS), Network File System (NFS), Network Data Analytics Function (NWDAF), Application Programming Interfaces (API), and the database 220. The dataset includes, but is not limited to, input data comprising numerical values, text, or images; labels; metadata comprising descriptive information, normalization values, data quality metrics, temporal information, and geospatial data; network performance logs, customer interaction data, security logs, bounding boxes, financial metrics, asset types, and target values. In one embodiment, the data received is used for executing the logic models for performing the given task. In an embodiment, the dataset includes, but is not limited to, real time data, historic data, simple data, and data treated as a training dataset for training models. The dataset is received from different data sources. The data sources include at least one of sources internal or external to the network 105.
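Purely as an illustrative, non-limiting sketch (the specification prescribes no particular data structure or programming language), the receiving step performed by the receiving unit 225 may be pictured in Python as bundling the candidate logic models and the task-relevant dataset into a single request object; the names ModelSelectionRequest, receive_request, and the record fields are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

# Hypothetical container for what the receiving unit 225 accepts:
# candidate logic models plus the dataset relevant to the task.
@dataclass
class ModelSelectionRequest:
    task: str                                    # e.g. "anomaly_detection"
    models: Dict[str, Callable[[Any], Any]]      # model name -> trained model/callable
    dataset: List[Dict[str, Any]] = field(default_factory=list)  # labelled records

def receive_request(models: Dict[str, Callable[[Any], Any]],
                    dataset: List[Dict[str, Any]],
                    task: str) -> ModelSelectionRequest:
    """Sketch of the receiving step: bundle models and data for the later
    assigning, executing, computing, comparing, and selecting stages."""
    return ModelSelectionRequest(task=task, models=dict(models), dataset=list(dataset))
```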
[0050] In an embodiment, the plurality of logic models is received via a Graphical User Interface (GUI), a Command Line Interface (CLI), a service, a microservice, a software component, or an application. The GUI is a visual user interface 215 where the user interacts with the system 120 using icons and buttons. The CLI is a command or text-based interface where the user interacts with the system 120 by typing commands. The service, microservice, software component, and application include at least one of software-based functionalities that perform a specific task or set of tasks over the network 105 on behalf of users or other systems. In an embodiment, the user interacts with the GUI or CLI to manually select the logic model information and the corresponding dataset which are to be received by the receiving unit 225. In another embodiment, the receiving unit 225 receives requests which include the logic model information and the corresponding dataset from at least one of a service, a microservice, a software component, and an application.
[0051]
[0052] Upon receiving the plurality of logic models and the related dataset, the assigning unit 230 is configured to assign each of the plurality of logic models a section of the received dataset. Each of the plurality of logic models is assigned a section of the received dataset either manually, using the GUI, or automatically, using a unit or component such as a handler. Manual assigning using the GUI involves the user interacting with the Graphical User Interface and manually designating which sections of the dataset are to be assigned to each logic model. The section of the dataset may be defined based on specific rows or columns of the dataset. Automatic assigning involves deploying software components, including at least a handler, which automatically detect which features align better with which given logic models. Each of the plurality of logic models is assigned on a dynamic basis. The dynamic basis in assigning the logic models is attributed to the distribution of the dataset according to the requirement of each of the plurality of logic models. For example, a logic model focusing on managing the traffic in the network 105 is to be assigned traffic-based data, while another logic model specialized in anomaly detection is assigned security-related or anomaly detection data. The dynamic character, in an embodiment, is further attributed to the capacity of the system to respond compatibly with the network load. For example, if the system 120 finds an anomaly due to the altering network load, consistency in network operations is maintained by assigning or reassigning the dataset accordingly to the logic models. Therefore, the input data relevant to the given task is assigned among the multiple processing units within the system 120, ensuring that each algorithm receives its own portion of the dataset for selecting at least one of the plurality of logic models.
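A minimal sketch of such an assigning step, assuming each dataset record is a dictionary and that an optional selector callable plays the role of the handler described above (all names are hypothetical), might be:

```python
from typing import Any, Callable, Dict, List, Optional

def assign_sections(
    models: Dict[str, Any],
    dataset: List[Dict[str, Any]],
    selector: Optional[Callable[[str, Dict[str, Any]], bool]] = None,
) -> Dict[str, List[Dict[str, Any]]]:
    """Assign each logic model a section of the received dataset.

    `selector(name, record)` decides whether a record belongs to a model's
    section (the 'handler' role); when omitted, the dataset is split evenly
    as a simple static fallback. A dynamic policy could swap the selector
    at run time in response to changing network load.
    """
    names = list(models)
    if selector is None:
        return {name: dataset[i::len(names)] for i, name in enumerate(names)}
    return {name: [rec for rec in dataset if selector(name, rec)] for name in names}
```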
[0053] Upon assigning each of the plurality of logic models, each of the plurality of logic models is executed in the execution unit 235. The execution process is done utilizing each of the plurality of logic models independently and concurrently. The independent and concurrent execution involves running each of the logic models on its assigned section of the dataset in the execution unit 235, in parallel. The parallel execution of logic models reduces the time consumed for comparison. The parallel execution further optimizes the resources allocated for computation at a given time.
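As a hedged illustration of the independent and concurrent execution described above, the sketch below runs each model on its own section using a thread pool (a process pool could equally be used when the models are picklable and CPU-bound); run_model, the 'x'/'y' record fields, and the return shape are assumptions carried over from the earlier sketches:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Callable, Dict, List, Tuple

def run_model(model: Callable[[Any], Any],
              section: List[Dict[str, Any]]) -> Tuple[List[Any], List[Any]]:
    """Execute one logic model on its assigned dataset section and return
    (predictions, labels) for the downstream metric computation."""
    preds = [model(rec["x"]) for rec in section]   # 'x' = input (hypothetical field)
    labels = [rec["y"] for rec in section]         # 'y' = label (hypothetical field)
    return preds, labels

def execute_all(models: Dict[str, Callable[[Any], Any]],
                sections: Dict[str, List[Dict[str, Any]]]
                ) -> Dict[str, Tuple[List[Any], List[Any]]]:
    """Run every logic model on its own section independently and concurrently."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(run_model, models[name], sections[name])
                   for name in models}
        return {name: fut.result() for name, fut in futures.items()}
```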
[0054] Upon concurrent execution of each of the plurality of logic models, the computing unit 240 is configured to compute evaluation metrics based on the performance and outputs of the execution of each of the plurality of logic models. The process of computing involves the systematic performance of a series of steps or operations by a computer or computational system. The computing process, in an embodiment, corresponds to calculating the evaluation metrics for the execution of the logic models. The evaluation metrics in the given invention are quantitative measures which assess the performance and outcome of each execution. The computing unit 240 considers at least one of accuracy, precision, and recall rate of each of the executed plurality of logic models for calculating the evaluation metrics. The evaluation metrics which are computed by the computing unit 240 are stored in the database 220.
[0055] The accuracy of the computation is determined by measuring how often a classification ML model is correct overall. The classification ML model refers to an ML model that categorizes or labels data into predefined categories or classes depending on the patterns that the model learns from the training dataset. Classification ML models are used in, but not limited to, network traffic classification, anomaly detection, user-device classification, subscriber churn prediction, service quality prediction, and fraud detection. Correctness in the computation of the classification model can enhance the quality of network operation.
[0056] The precision in the computation process assesses how often the positive predictions of an ML model are correct, that is, the proportion of predicted instances of the target class that actually belong to that class. The precision gives an understanding of the reliability of the logic models used in a given network operation. The recall in the computation is the process of determining whether a logic model can find all objects of the target class. Finding objects in the target class, in the present invention, includes, but is not limited to, distinguishing various types of network traffic such as video streaming, voice calls, browsing, and gaming; anomaly or threat detection against normal data in cybersecurity tasks; spotting different user device types such as smartphones, IoT devices, or other wearables; and detecting fraudulent against legitimate transactions. The parameters of accuracy, precision, and recall in the computation process are tabulated to compute the evaluation metrics, which are then stored in the database 220 for the further processes of selecting at least one of the plurality of logic models. The performance metrics computed by the computing unit 240 are further processed for selecting at least one of the plurality of logic models.
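Using the standard definitions just recited, a minimal metric computation for one executed logic model could be sketched as follows (binary classification is assumed; evaluation_metrics and the positive argument are hypothetical names, not part of the specification):

```python
from typing import Any, Dict, List

def evaluation_metrics(preds: List[Any], labels: List[Any],
                       positive: Any = 1) -> Dict[str, float]:
    """Compute accuracy, precision, and recall for one executed logic model.

    accuracy  = correct predictions / all predictions
    precision = true positives / predicted positives
    recall    = true positives / actual positives
    Zero denominators yield 0.0 instead of raising.
    """
    pairs = list(zip(preds, labels))
    tp = sum(1 for p, y in pairs if p == positive and y == positive)
    fp = sum(1 for p, y in pairs if p == positive and y != positive)
    fn = sum(1 for p, y in pairs if p != positive and y == positive)
    correct = sum(1 for p, y in pairs if p == y)
    n = len(pairs)
    return {
        "accuracy": correct / n if n else 0.0,
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
    }
```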
[0057] Upon computing the performance metrics based on the execution of the plurality of logic models, a comparison of the performance metrics is conducted. The comparison unit 245 is configured to compare the computed evaluation metrics corresponding to the execution of each of the plurality of logic models. The comparison unit 245 collects the evaluation metrics corresponding to the execution of each of the plurality of logic models. The comparison unit 245 contrasts the collected metrics to determine which logic model performed best in terms of the evaluation criteria, which include, but are not limited to, the accuracy, precision, and recall parameters of each execution of each of the plurality of logic models. The comparison based on the evaluation criteria is further utilized for selecting at least one of the plurality of logic models.
[0058] Thereafter, the selecting unit 250 is configured to select at least one logic model from the plurality of logic models based on the comparison by the comparison unit 245. The at least one selected logic model is associated with higher and better evaluation metrics in comparison to the other logic models of the plurality of logic models. In one of the embodiments, the logic model with the highest performance metrics, including but not limited to high accuracy, maximum precision, and optimum recall, is selected by the selecting unit 250.
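The specification leaves the exact comparison rule open; one plausible sketch, ranking the models by a weighted sum of the tabulated metrics (select_best and the default equal weights are assumptions, not the claimed method), is:

```python
from typing import Dict, Optional

def select_best(metrics_by_model: Dict[str, Dict[str, float]],
                weights: Optional[Dict[str, float]] = None) -> str:
    """Compare the computed evaluation metrics and select the best logic model.

    Ranks each model by a weighted sum of accuracy, precision, and recall
    (equal weights by default) and returns the highest-scoring model name.
    """
    weights = weights or {"accuracy": 1.0, "precision": 1.0, "recall": 1.0}
    return max(metrics_by_model,
               key=lambda name: sum(w * metrics_by_model[name][k]
                                    for k, w in weights.items()))
```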
[0059] The selection of each of the plurality of logic models is based on at least one of meta-learning and reinforcement learning over the received dataset, which helps in analyzing the dataset to select at least one of the plurality of logic models. Meta-learning is a branch of machine learning that is used to design models, algorithms, or mathematical frameworks capable of adapting quickly to new tasks or situations by training on various classification tasks. Meta-learning is often described as learning to learn. Meta-learning further enhances the ability of a model to easily adapt to any other new task with very little data. Reinforcement learning is also a type of machine learning wherein a model learns from interaction with the environment, performing actions based on a strategy and receiving feedback. The selected logic model is the most appropriate logic model to perform the given task or solve the problem at hand, as compared to the other logic models in the plurality of logic models.
[0060] Upon selection of at least one of the logic models from the plurality of logic models, the generating unit 255 generates an output. The generating unit 255 utilizes the at least one selected logic model to generate the output. The output generated by the generating unit 255 is delivered as the final result for the task or problem. Therefore, the present invention generates the best possible output for a given task or problem by utilizing the at least one optimally performing logic model as selected from the plurality of logic models.
[0061] FIG. 3 describes a preferred embodiment of the system 120 of FIG. 2, according to various embodiments of the present invention. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a and the system 120 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0062] As mentioned earlier in FIG. 1, each of the first UE 110a, the second UE 110b, and the third UE 110c may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 110a, without deviating from or limiting the scope of the present disclosure. The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120.
[0063] The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. The execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to receive the plurality of logic models and the corresponding dataset to execute each of the plurality of logic models.
[0064] As mentioned earlier in FIG. 2, the one or more processors 205 of the system 120 are configured for selecting at least one of a plurality of logic models. As per the illustrated embodiment, the system 120 includes the one or more processors 205, the memory 210, the user interface 215, and the database 220. The operations and functions of the one or more processors 205, the memory 210, the user interface 215, and the database 220 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0065] Further, the processor 205 includes the receiving unit 225, the assigning unit 230, the executing unit 235, the computing unit 240, the comparison unit 245, the selecting unit 250, and the generating unit 255. The operations and functions of the receiving unit 225, the assigning unit 230, the executing unit 235, the computing unit 240, the comparison unit 245, the selecting unit 250, and the generating unit 255 are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 3, should be read with the description provided for the system 120 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0066] FIG. 4 is an exemplary block diagram of an architecture 400 implemented in the system of FIG. 2, according to one or more embodiments of the present invention.
[0067] The architecture 400 includes the user interface 215 operated by a user, the database 220, an Artificial Intelligence/Machine Learning (AI/ML) system 405, and a Workflow Manager (WFM) 430. The AI/ML system 405 includes the parallel processing units 410, an algorithm metric result aggregator 415, an algorithm metric evaluation and best algorithm selector 420, and an output generator 425.
[0068] The user interface 215, through the UE 110 comprising one or more processors coupled with the memory, interacts with the AI/ML system 405. The plurality of logic models and the corresponding dataset is received via at least one of the user interface 215, a service, a microservice, a software component, or an application. The user interface 215 includes at least one of a GUI or a CLI. In the present embodiment, the GUI allows the user to interact with visual interfaces to select the plurality of logic models and the corresponding dataset. The AI/ML system 405 further receives the plurality of logic models and the corresponding dataset via a request from at least one of a service, a microservice, a software component, and an application. The memory 210 stores instructions. The instructions are processed by the primary processors, causing the UE 110 to transmit the plurality of logic models, with respect to the given task to be performed, to the AI/ML system 405. The dataset corresponding to the plurality of logic models, in order to perform the given task, is accessed from at least one of a Network Management System (NMS), Network File System (NFS), Network Data Analytics Function (NWDAF), Application Programming Interfaces (API), and the database 220. The task to be performed, the plurality of logic models, and the corresponding dataset are received by the parallel processing units 410 within the AI/ML system 405.
[0069] Upon receiving the task to be performed, the plurality of logic models, and the dataset from at least one of the UE 110 or the user interface 215, via the parallel processing units 410 in the AI/ML system 405, each of the plurality of logic models is assigned to the parallel processing units 410. The assigning of each of the logic models with each section of the dataset is done manually by the user using the GUI, or automatically by a component such as a handler. The logic models and the data are assigned to separate parallel processing units 410 within the AI/ML system 405. Thereafter, the sections of the dataset relevant to performing the given task are assigned to each of the plurality of logic models by the separate parallel processing units 410. Therefore, the assigning process is done dynamically. The dynamic assigning of the dataset for each of the plurality of logic models is attributed to distributing sections of data based on the nature of the logic model. This dynamic character further allows the AI/ML system 405 to respond in real time to any alterations in network conditions, which ensures efficient and consistent resource performance. The assigning of sections of the dataset ensures that each of the plurality of algorithms receives the section of the dataset relevant to the task.
[0070] Upon assigning, the parallel processing units 410 execute the logic models. The parallel processing units 410 run each of the plurality of logic models on the section of the dataset assigned to each logic model. Each of the logic models is executed with its related section of the data, separately but in parallel.
[0071] Upon execution of the plurality of logic models simultaneously, the results of the executed logic models are computed by the algorithm metric result aggregator 415. The algorithm metric result aggregator 415 computes the performance score of each of the plurality of executed logic models. The performance score is computed based on various performance parameters, including, but not limited to, accuracy, precision, and recall, and on the results from the execution of each of the plurality of logic models.
[0072] Upon computing the performance score, the algorithm metric evaluation and best algorithm selector 420 evaluates the performance metrics by comparing the performance metrics of the execution of each of the plurality of logic models. The evaluation deals with the comparison of the performance metrics and the quality of the output generated from the execution of each of the plurality of logic models. Based on the evaluation, the algorithm metric evaluation and best algorithm selector 420 selects at least one logic model which possesses appropriate performance metrics in comparison to the other logic models of the plurality of logic models. The appropriate performance metrics include, but are not limited to, high accuracy, maximum precision, optimum recall rate, and the best or most appropriate results out of all the executed logic models. The logic model with the appropriate performance metrics is selected as the appropriate logic model from the plurality of logic models to perform the given task.
[0073] Upon selecting at least one of the logic models from the plurality of logic models, the output generator 425 generates an output. The output is generated, by the one or more processors, for the task utilizing the at least one selected logic model.
[0074] Upon generation of the output of the task by the output generator 425, the output is transmitted to the WFM 430. The output is transmitted to the user interface 215 by the WFM 430. The output transmitted by the WFM 430 to the user interface 215 is delivered as the final result of the task to be performed, as requested by the UE 110.
[0075] The generated output is further stored in the database 220. The database 220 facilitates efficient data storage for fast accessing and updating of data. The WFM 430 retrieves the output from the database 220 and transmits it to the user interface 215. This output is transmitted to the user interface as the final result of the task requested to be performed. The WFM 430 interacts with the user interface 215 to deliver the final result of the task to be performed as requested by the UE 110.
[0076] FIG. 5 is a signal flow diagram for selecting at least one of the plurality of logic models, according to one or more embodiments of the present invention.
[0077] At step 505, the plurality of logic models and the related dataset to perform any given task are received. The plurality of logic models includes, but is not limited to, LLMs and ML models. The logic models and the related dataset are received by the AI/ML system 405 from at least one of the UE 110 or the user interface 215. The plurality of logic models and the corresponding dataset is received via a GUI, a CLI, a service, a microservice, a software component, or an application.
[0078] At step 510, the logic models and the dataset related to the logic models and relevant to the task to be performed are received from the database 220. Upon receiving, by the AI/ML system 405, the plurality of logic models and the dataset related to executing the task, each of the logic models is assigned to the parallel processing units 410, and the related dataset is also assigned to the parallel processing units 410, ensuring that each of the logic models is assigned its section of the related dataset. The assigning is done manually by the user using the GUI, or automatically using one or more units or components such as a handler. Upon dynamic assigning, the execution of each of the logic models on each section of the dataset is conducted in the parallel processing units 410. The outputs from the execution of each of the plurality of logic models are compared with respect to their performance parameters, and a performance score for each of them is created by the algorithm metric result aggregator 415. Thereafter, the performance metrics of each of the plurality of logic models are compared with each other. Upon comparison, at least one of the logic models from the plurality of logic models with the highest performance metrics is selected. The evaluation based on the comparison and the selection of the logic model is done by the algorithm metric evaluation and best algorithm selector 420. The output generator 425 generates the output that is produced by the execution of the selected logic model.
[0079] At step 515, the output from the AI/ML system 405, is fetched by the WFM 430, directly after the output generation by the output generator 425.
[0080] At step 520, the output is transmitted by the WFM 430 to the user interface 215. The output delivered to the user interface 215 is the final result of the task to be performed for the UE 110.
[0081] At step 525, the output generated by the AI/ML system 405, after processing the plurality of logic models on the dataset relevant to the task to be performed, is transmitted to and stored in the database 220. The components of the database 220 are already discussed in detail in FIG. 3.
[0082] At step 530, the output stored in the database 220 is retrieved by the WFM 430.
[0083] At step 535, the output retrieved by the WFM 430 is transmitted to the user interface 215. The output retrieved from the database 220 is the final result of the task to be performed.
[0084] FIG. 6 is a schematic representation of a method for selecting at least one of the logic models, according to one or more embodiments of the present invention. For the purpose of description, the method is described with the embodiments as illustrated in FIG. 2, and should nowhere be construed as limiting the scope of the present disclosure.
[0085] At step 605, the method 600 includes the step of receiving one or more logic models and the related dataset pertaining to one or more tasks. The plurality of logic models and the dataset is received from at least one of the UE 110 or the user interface 215. The one or more logic models include at least one of LLMs and ML models. The plurality of logic models is received via the user interface 215, a service, a microservice, one or more software components, or one or more applications. The user interface 215 includes at least one of a GUI or a CLI.
[0086] At step 610, the method 600 includes the step of dynamically assigning each of the received logic models for the given task to a separate parallel processing unit 410 for further processing. The dynamic assigning of the plurality of logic models and of each section of the dataset is done manually by the user or automatically using one or more components such as a handler. The received dataset is also assigned to each of the plurality of logic models, ensuring that each logic model receives a section of the dataset relevant to performing the task at hand.
[0087] At step 615, the method 600 includes the step of executing the assigned logic models on the assigned dataset, each on a separate parallel processing unit 410, concurrently.
[0088] At step 620, the method 600 includes the step of computing performance metrics corresponding to the execution of each of the plurality of logic models. The computing of the performance metrics is based on performance parameters including, but not limited to, the accuracy, precision, and recall of the execution of each logic model, and on the output of the execution of that particular logic model.
[0089] At step 625, the method 600 includes the step of comparing the performance metrics for the execution of each of the plurality of logic models based on the evaluation metrics.
[0090] At step 630, the method 600 includes the step of selecting the at least one logic model from the plurality of logic models based on the comparison of the performance metrics done in step 625. The criterion for selecting at least one of the logic models from the plurality of logic models is high performance metrics. The logic model that can deliver the appropriate output to perform the task is also selected.
[0091] At step 635, the method 600 includes the step of generating the output. The output is generated by the AI/ML system 405 from the at least one logic model selected from the plurality of logic models based on the highest performance metrics computed previously. The generated output is delivered as the required result of the task to be performed at the user interface 215.
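Tying the earlier sketches together, the overall flow of method 600 might be expressed as below; every helper is the hypothetical one introduced in the preceding paragraphs, not a prescribed implementation:

```python
def method_600(models, dataset, task, selector=None):
    """End-to-end sketch of method 600 using the hypothetical helpers above."""
    request = receive_request(models, dataset, task)                       # step 605
    sections = assign_sections(request.models, request.dataset, selector)  # step 610
    results = execute_all(request.models, sections)                        # step 615
    metrics = {name: evaluation_metrics(preds, labels)                     # step 620
               for name, (preds, labels) in results.items()}
    best = select_best(metrics)                                  # steps 625 and 630
    preds, _ = results[best]                                     # step 635: output
    return best, metrics[best], preds

# Example usage with two toy 'logic models' (hypothetical):
if __name__ == "__main__":
    models = {"threshold": lambda x: 1 if x > 0.5 else 0,
              "always_pos": lambda x: 1}
    data = [{"x": 0.9, "y": 1}, {"x": 0.2, "y": 0}, {"x": 0.7, "y": 1}]
    print(method_600(models, data, "toy_classification"))
```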
[0092] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to receive, from the user interface 215, one or more inputs pertaining to one or more logic models and one or more datasets related to the task to be performed utilizing the received logic models. The processor 205 is further configured to assign each of the plurality of logic models and the dataset to each of the parallel processing units 410. The assigning ensures that each logic model gets the section of the dataset relevant to performing the task at hand. The processor 205 is further configured to execute each of the logic models on the assigned dataset in the parallel processing units 410 separately, independently, and concurrently. The processor 205 is further configured to compute the performance metrics of the execution of the logic models based on various parameters, including, but not limited to, accuracy, precision, recall, and the output from each execution. The processor 205 is further configured to compare the performance metrics of the execution of each of the plurality of logic models in each of the parallel processing units within the processor. The processor 205 is further configured to select at least one of the logic models from the plurality of logic models by comparing the performance metrics, the selected logic model possessing higher and better evaluation metrics in comparison to the other logic models of the plurality of logic models. The processor 205 is further configured to generate the output utilizing the at least one selected logic model. The output generated by utilizing the selected logic model is the result of the task to be performed.
[0093] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0094] The present disclosure incorporates technical advancements aimed at improving operational efficiency. The present invention is designed to optimize resource and time usage and to minimize computational effort. By dynamically assigning the dataset and logic models and concurrently executing different logic models in different processing units, adaptable to real-time network conditions, the system reduces the time consumed, relative to conventional operations, in comparing the logic models to select at least one logic model from the plurality of logic models that best fits the task. The parallel processing of each of the plurality of logic models enhances the efficiency of computational resource utilization by decreasing idle time and maximizing the throughput of the available hardware, which can ultimately minimize overall costs. The present disclosure allows the system to automate the parallel processing of the plurality of logic models, the comparison of the performance of each of the logic models, the selection of at least one of the logic models from the plurality of logic models, and the generation of the most appropriate output to be delivered as the result of the task. The system enhances functionality by handling larger datasets and highly complex computational tasks with high accuracy and precision, which can be beneficial in contexts ranging from small-scale experiments to very large data analyses. This scalability gives the present disclosure significant industrial applications, such as in healthcare, finance, and autonomous systems, delivering technical and economic advancements to users.
[0095] The present invention offers diverse advantages, providing efficient computational processes that can enhance problem solving and the execution of various tasks, revolutionizing the traditional evaluation of logic models. The parallel processing of logic models on a dataset leads to faster, more accurate, and more precise output generation. The present invention is scalable, adaptable to network conditions, and supports the real-time performance of computational tasks utilizing the Workflow Manager. With enhanced data access utilizing the database and maximized automation using the AI/ML system, the present invention improves decision making and enables robust performance across a vast number of computational tasks.
[0096] The present invention offers multiple advantages over the prior art, and the above listed are a few examples emphasizing some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0097] Environment - 100
[0098] User Equipment (UE) - 110
[0099] Server - 115
[00100] System - 120
[00101] Processor - 205
[00102] Memory - 210
[00103] User interface - 215
[00104] Database - 220
[00105] Receiving unit - 225
[00106] Assigning unit - 230
[00107] Execution unit - 235
[00108] Computing unit - 240
[00109] Comparison unit - 245
[00110] Selecting unit - 250
[00111] Generating unit - 255
[00112] Primary processor - 305
[00113] Primary Memory - 310
[00114] Parallel Processing Unit - 410
[00115] Algorithm Metrics Result Aggregator - 415
[00116] Algorithm Metrics Evaluation and Best Algorithm Selector - 420
[00117] Output Generator - 425
[00118] Workflow Manager (WFM) - 430


CLAIMS
We Claim
1. A method (600) of selecting at least one of a plurality of logic models, the method comprising the steps of:
receiving (605), by the one or more processors, a plurality of logic models and corresponding dataset to execute each of the plurality of logic models;
assigning (610), by the one or more processors, each of the plurality of logic models with a section of the received dataset;
executing (615), by the one or more processors, each of the plurality of logic models utilizing the assigned section of the received dataset;
computing (620), by the one or more processors, evaluation metrics corresponding to the execution of each of the plurality of logic models;
comparing (625), by the one or more processors, the computed evaluation metrics corresponding to the execution of each of the plurality of logic models; and
selecting (630), by the one or more processors, at least one logic model from the plurality of logic models based on the comparison.

2. The method (600) as claimed in claim 1, wherein the plurality of logic models is received via at least one of a Graphical User Interface (GUI), a Command Line Interface (CLI), a service, a microservice, a software component, and an application.

3. The method (600) as claimed in claim 1, wherein the dataset corresponds to dataset received from at least one of a Network Management System (NMS), Network File System (NFS), Network Data Analytics Function (NWDAF), Application Programming Interfaces (API), and databases, and wherein the plurality of logic models is at least one of a Large Language Model (LLM) and Machine Learning (ML) model.

4. The method (600) as claimed in claim 1, wherein the plurality of logic models and the corresponding dataset is received from one of a User Equipment (UE) (110) and a user interface (215).

5. The method (600) as claimed in claim 1, wherein each of the plurality of logic models is assigned with the section of the received dataset on a dynamic basis.

6. The method (600) as claimed in claim 1, wherein each of the plurality of logic models is executed independently and concurrently.

7. The method (600) as claimed in claim 1, wherein the evaluation metrics corresponds to at least one of accuracy, precision, and recall rate of each of the executed plurality of logic models, and wherein the evaluation metrics is stored in a database (220).

8. The method (600) as claimed in claim 1, wherein the at least one selected logic model is associated with higher evaluation metrics in comparison to the other logic models of the plurality of logic models.

9. The method (600) as claimed in claim 1, wherein, on selection (630) of the at least one logic model, the method comprises the step of generating, by the one or more processors, an output for the task utilizing the at least one selected logic model.

10. A system (120) for selecting at least one of a plurality of logic models, the system comprising:
a receiving unit (225) configured to receive, a plurality of logic models and corresponding dataset to execute each of the plurality of logic models;
an assigning unit (230) configured to assign, each of the plurality of logic models with a section of the received dataset;
an executing unit (235) configured to execute, each of the plurality of logic models utilizing the assigned section of the received dataset;
a computing unit (240) configured to compute evaluation metrics corresponding to the execution of each of the plurality of logic models;
a comparison unit (245) configured to compare the computed evaluation metrics corresponding to the execution of each of the plurality of logic models; and
a selecting unit (250) configured to select at least one logic model from the plurality of logic models based on the comparison.

11. The system (120) as claimed in claim 10, wherein the plurality of logic models is received via at least one of a Graphical User Interface (GUI), a Command Line Interface (CLI), a service, a microservice, a software component, and an application.

12. The system (120) as claimed in claim 10, wherein the dataset corresponds to dataset received from at least one of a Network Management System (NMS), Network File System (NFS), Network Data Analytics Function (NWDAF), Application Programming Interfaces (API), and databases, and wherein the plurality of logic models is at least one of a Large Language Model (LLM) and Machine Learning (ML) model.

13. The system (120) as claimed in claim 10, wherein the plurality of logic models and the corresponding dataset is received from one of a User Equipment (UE) (110) and a user interface (215).

14. The system (120) as claimed in claim 10, wherein each of the plurality of logic models is assigned with the section of the received dataset on a dynamic basis.

15. The system (120) as claimed in claim 10, wherein each of the plurality of logic models is executed independently and concurrently.

16. The system (120) as claimed in claim 10, wherein the evaluation metrics correspond to at least one of accuracy, precision, and recall rate of each of the executed plurality of logic models, and wherein the evaluation metrics are stored in a database (220).

17. The system (120) as claimed in claim 10, wherein the at least one selected logic model is associated with higher evaluation metrics in comparison to the other logic models of the plurality of logic models.

18. The system (120) as claimed in claim 10, comprising a generating unit (255) configured to generate, on selection of the at least one logic model, an output for the task utilizing the at least one selected logic model.

19. A User Equipment (UE) (110), comprising:
one or more primary processors communicatively coupled to one or more processors (205), the one or more primary processors coupled with a memory (210), wherein said memory stores instructions which, when executed by the one or more primary processors, cause the UE (110) to:
receive a plurality of logic models and corresponding dataset to execute each of the plurality of logic models, and
wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321067278-STATEMENT OF UNDERTAKING (FORM 3) [06-10-2023(online)].pdf 2023-10-06
2 202321067278-PROVISIONAL SPECIFICATION [06-10-2023(online)].pdf 2023-10-06
3 202321067278-FORM 1 [06-10-2023(online)].pdf 2023-10-06
4 202321067278-FIGURE OF ABSTRACT [06-10-2023(online)].pdf 2023-10-06
5 202321067278-DRAWINGS [06-10-2023(online)].pdf 2023-10-06
6 202321067278-DECLARATION OF INVENTORSHIP (FORM 5) [06-10-2023(online)].pdf 2023-10-06
7 202321067278-FORM-26 [27-11-2023(online)].pdf 2023-11-27
8 202321067278-Proof of Right [12-02-2024(online)].pdf 2024-02-12
9 202321067278-DRAWING [07-10-2024(online)].pdf 2024-10-07
10 202321067278-COMPLETE SPECIFICATION [07-10-2024(online)].pdf 2024-10-07
11 Abstract.jpg 2024-12-30
12 202321067278-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
13 202321067278-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
14 202321067278-Covering Letter [24-01-2025(online)].pdf 2025-01-24
15 202321067278-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
16 202321067278-FORM 3 [31-01-2025(online)].pdf 2025-01-31