Abstract:
METHOD AND SYSTEM FOR TRAINING A MODEL BASED ON A SELECTED LOGIC
The present disclosure relates to a system (120) and a method (600) for training a model based on a selected logic. The method (600) includes the step of receiving one or more requests from a user for training the model. The method (600) further includes the step of retrieving from a database relevant data for training the model based on one or more instructions extracted from the one or more requests. The method (600) further includes the step of executing a plurality of logics utilizing the relevant data retrieved from the database. The method (600) further includes the step of comparing output generated by each logic with outputs generated by the rest of the plurality of executed logics. The method (600) further includes the step of selecting a preferred logic from the plurality of logics based on the comparison utilizing one or more evaluation metrics. Ref. FIG. 2
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR TRAINING A MODEL BASED ON A SELECTED LOGIC
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to the field of artificial intelligence/machine learning, and more particularly, to a method and a system for training a model based on a selected logic.
BACKGROUND OF THE INVENTION
[0002] Artificial intelligence and machine learning have revolutionized numerous industries by providing powerful tools for data analysis, pattern recognition, and decision-making. However, effectively harnessing the potential of AI/ML algorithms requires expertise in algorithm selection. Identifying the most suitable algorithm for a specific use case is a crucial and often challenging task that demands a deep understanding of machine learning principles, algorithms, and hyperparameter tuning.
[0003] Traditionally, users without specialized knowledge faced significant obstacles in selecting the appropriate algorithm for their data. This process involved manual evaluation, comparison, and fine-tuning of different algorithms, consuming valuable time and resources. Moreover, it necessitated a thorough understanding of the nuances of each algorithm, making it inaccessible to non-experts.
[0004] There is a need to select the most suitable AI/ML algorithms for a given dataset during the training process. This problem arises due to the complex nature of machine learning principles, the intricacies of different algorithms, and the difficulties associated with hyperparameter tuning.
[0005] Without a streamlined approach to algorithm selection, users without extensive knowledge of machine learning face significant barriers. They must invest substantial time and effort in evaluating various algorithms, understanding their characteristics, and fine-tuning hyperparameters to achieve optimal performance. This manual process restricts the accessibility and efficiency of utilizing AI/ML techniques, hindering users from leveraging the full potential of their data.
[0006] Furthermore, the lack of automated algorithm selection poses challenges for users who need to focus on their specific use cases or problems without delving into the technical intricacies of machine learning. They require a user-friendly solution that can automatically evaluate their data, compare it with a range of algorithms, and identify the algorithm that is most likely to yield the most relevant results.
[0007] Therefore, there is a need for an automated algorithm selection system that simplifies the process, reduces the time required for selecting algorithms for different use cases, and empowers users without extensive expertise to leverage the power of AI/ML effectively. Such a system should seamlessly analyze user data, compare it against various algorithms, and provide clear and informative insights to facilitate informed decision-making. By addressing these challenges, the present invention aims to provide a solution that ensures efficient and effective algorithm selection tailored to the user's data, promoting wider adoption and utilization of AI/ML in various domains.
SUMMARY OF THE INVENTION
[0008] One or more embodiments of the present invention provides a method and a system for training a model based on a selected logic.
[0009] In one aspect of the present invention, the method for training the model based on the selected logic is disclosed. The method includes the step of receiving one or more requests from a user for training the model. The method further includes the step of retrieving from a database relevant data for training the model based on one or more instructions extracted from the one or more requests. The method further includes the step of executing a plurality of logics utilizing the relevant data retrieved from the database. The method further includes the step of comparing output generated by each logic with outputs generated by the rest of the plurality of executed logics. The method further includes the step of selecting a preferred logic from at least one of the plurality of logics based on the comparison utilizing one or more evaluation metrics.
[0010] In one embodiment, the request is received in the form of a Hypertext Transfer Protocol (HTTP) request.
[0011] In one embodiment, the one or more instructions extracted from the one or more requests includes at least one of, information pertaining to a training period, a test period, one or more features, logical partitioning, and a logic name.
[0012] In an embodiment, the plurality of logics includes at least one of, two-factor regression, multitude decision tree, periodicity logic, scalar boost logic, and heuristic gain logic.
[0013] In one embodiment, the output generated by each of the plurality of logics is stored in the database.
[0014] In one embodiment, the method further includes the step of generating a visual representation pertaining to the performance of each of the plurality of logics based on analysis of the output generated by each of the plurality of logics. The method further includes displaying the visual representation on a User Interface (UI) of a User Equipment (UE).
[0015] In an embodiment, the visual representation pertaining to performance of each of the plurality of logics based on analysis of output generated by each of the plurality of logics include at least one of, graphs and tables.
[0016] In an embodiment, the one or more evaluation metrics include at least one of accuracy, precision, and recall.
[0017] In another aspect of the present invention, a system for training the model based on the selected logic is disclosed. The system includes a transceiver configured to receive one or more requests from a user for training the model. The system further includes a retrieving unit configured to retrieve from a database relevant data for training the model based on one or more instructions extracted from the one or more requests. The system further includes an execution unit configured to execute a plurality of logics utilizing the relevant data retrieved from the database. The system further includes a performance analyser configured to compare output generated by each logic with outputs generated by the rest of the plurality of executed logics. The system further includes a selection unit configured to select a preferred logic from at least one of the plurality of logics based on the comparison utilizing one or more evaluation metrics.
[0018] In another aspect of the present invention, a UE is disclosed. One or more primary processors of the UE is communicatively coupled to one or more processors. The one or more primary processors are coupled with a memory. The memory stores instructions which when executed by the one or more primary processors causes the UE to transmit one or more requests to the one or more processors via a user interface to train a model.
[0019] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, causes the processor to receive one or more requests from a user for training the model. The processor is further configured to retrieve, from a database, relevant data for training the model based on one or more instructions extracted from the one or more requests. The processor is further configured to execute a plurality of logics utilizing the relevant data retrieved from the database and compare output generated by each logic with outputs generated by the rest of the plurality of executed logics. The processor is further configured to select a preferred logic from the plurality of logics based on the comparison utilizing one or more evaluation metrics.
[0020] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0022] FIG. 1 is an exemplary block diagram of a communication system for training a model based on a selected logic, according to one or more embodiments of the present disclosure;
[0023] FIG. 2 is an exemplary block diagram of a system for training the model based on the selected logic, according to one or more embodiments of the present disclosure;
[0024] FIG. 3 is a schematic representation of a workflow of the system of FIG. 2 communicably coupled with a User Equipment (UE), according to one or more embodiments of the present disclosure;
[0025] FIG. 4 is an exemplary diagram of an architecture of the system of the FIG. 2, according to one or more embodiments of the present disclosure;
[0026] FIG. 5 is a signal flow diagram for training the model based on the selected logic, according to one or more embodiments of the present disclosure; and
[0027] FIG. 6 is a flow chart illustrating a method for training a model based on a selected logic, according to one or more embodiments of the present disclosure.
[0028] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0029] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0030] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0031] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0032] The present disclosure addresses the challenges faced by established technologies in selecting a preferred logic from a plurality of logics for training a model, such as but not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. The solution provides automatic selection of a relevant logic from the plurality of logics for training the AI/ML model, thereby saving time and reducing complexity. The end users can make appropriate decisions in selecting the preferred logic from the plurality of logics to train the model based on their requirements. The present invention further facilitates the users in focusing on their specific problem, without requiring deep knowledge of the technical aspects of AI/ML techniques, to select the preferred logic for the model training. The present invention provides visualization in the form of graphs and tables, which presents the results obtained from the analysis of each of the executed logics in a clear and visually appealing format. By presenting the graph visualization, users, such as but not limited to a network operator, can identify the preferred logic from the plurality of logics that performs best for training the model. The visualization further facilitates the user in comparing the performance of each of the plurality of logics applied to train the model based on metrics such as accuracy, precision, recall, or any other relevant evaluation criteria.
[0033] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of a communication system 100 for training the model based on the selected logic, according to one or more embodiments of the present disclosure. The communication system 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user in interacting with the system 120. In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as virtual reality (VR) devices, smartphones, augmented reality (AR) devices, laptops, general-purpose computers, desktops, personal digital assistants, tablet computers, mainframe computers, or any other computing device.
[0034] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105. In alternate embodiments, the UE 110 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 110a, the second UE 110b, and the third UE 110c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0035] The network 105 may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, or some combination thereof.
[0036] The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a Voice over Internet Protocol (VoIP) network, or some combination thereof.
[0037] The network 105 may include, but is not limited to, a Third Generation (3G) network, a Fourth Generation (4G) network, a Fifth Generation (5G) network, a Sixth Generation (6G) network, a New Radio (NR) network, a Narrow Band Internet of Things (NB-IoT) network, an Open Radio Access Network (O-RAN), and the like.
[0038] The communication system 100 includes the server 115 accessible via the network 105. The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 115 may be associated with an entity that may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides service.
[0039] The communication system 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is adapted to be embedded within the server 115 or to be deployed as an individual entity. However, for the purpose of description, the system 120 is illustrated as remotely coupled with the server 115, without deviating from the scope of the present disclosure.
[0040] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0041] FIG. 2 illustrates an exemplary block diagram of the system 120 for training a model based on a selected logic, according to one or more embodiments of the present disclosure.
[0042] As per the illustrated embodiment, the system 120 includes one or more processors 205, a memory 210, a user interface 215 and a database 220. For the purpose of description and explanation, the description will be explained with respect to one processor 205 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 120 may include more than one processor 205 as per the requirement of the network 105. The one or more processors 205, hereinafter referred to as the processor 205 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0043] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0044] In an embodiment, the user interface 215 includes a variety of interfaces, for example, interfaces for data input and output devices, referred to as Input/Output (I/O) devices, storage devices, and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120.
[0045] In an embodiment, the database 220 is one of, but not limited to, an Elastic search database. The Elastic search database refers to a distributed, scalable, and fault-tolerant database system designed for the storage, retrieval, and real-time analysis of extensive datasets. In an alternate embodiment, the database 220 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive; for example, the database 220 can be both commercial and cloud-based, or both relational and open-source.
[0046] In order for the system 120 to train the model based on the selected logic, the processor 205 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a transceiver 225, a retrieving unit 230, an execution unit 235, a performance analyser 240, a selection unit 245, and a depiction unit 250 communicably coupled to each other.
[0047] The transceiver 225, the retrieving unit 230, the execution unit 235, the performance analyser 240, the selection unit 245, and the depiction unit 250, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor 205 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0048] The transceiver 225 of the system 120 is configured to receive one or more requests from the user for training the model. In an exemplary embodiment, the model is at least one of, but not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. In an embodiment, the AI/ML model refers to computational logics designed to extract patterns and insights from the data provided by the user. In an embodiment, the one or more requests include input variables required to train the model. In an embodiment, the input variables refer to measurable attributes, such as but not limited to, numerical attributes, categorical attributes, ordinal attributes, and binary attributes, that are used as inputs to a model. The one or more requests is received via a communication protocol. The communication protocol is one of, but not limited to, a Hypertext Transfer Protocol (HTTP). The HTTP refers to a network protocol used for communication on the World Wide Web. The HTTP facilitates critical functions essential for web communication, such as but not limited to, enhanced content delivery, Application Programming Interface (API) integration, streaming and multimedia, and security and encryption.
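By way of a non-limiting illustration only, the following sketch shows one way the transceiver 225 might expose an HTTP endpoint for receiving such a training request. The endpoint path, the payload field names, and the use of the Flask framework are assumptions made for this sketch and are not prescribed by the present disclosure.

```python
# Illustrative sketch of an HTTP endpoint through which the transceiver 225
# could receive one or more training requests. Endpoint path and payload
# field names are assumptions for illustration only.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/train", methods=["POST"])
def receive_training_request():
    payload = request.get_json(force=True)  # one request from the user
    # Basic validation of the instructions expected in the request body.
    required = ("training_period", "test_period", "features")
    missing = [key for key in required if key not in payload]
    if missing:
        return jsonify({"error": f"missing fields: {missing}"}), 400
    # Hand the request over to the retrieving/execution pipeline (not shown).
    return jsonify({"status": "accepted"}), 202

if __name__ == "__main__":
    app.run(port=8080)
```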
[0049] On the receipt of the one or more requests from the transceiver 225, the retrieving unit 230 is configured to retrieve relevant data from the database 220. The relevant data is utilized for training the model based on one or more instructions extracted from the one or more requests. In one embodiment, the relevant data is at least one of, but not limited to, numerical attributes and categorical attributes specified in the user request for training the model. The one or more instructions extracted from the one or more requests includes at least one of, but not limited to, information pertaining to a training period of the model, a test period of the trained model, one or more features, logical partitioning of the model, and a logic name to train the model.
[0050] In one embodiment, the training period refers to a specific period, wherein the model is trained using relevant data present in the database 220 during the training period of the model. The test period refers to a distinct temporal phase designated for evaluating the performance and accuracy of the model after the training process. The one or more features of the model refers to the input variables that are used as inputs to train the model. The logical partitioning of the model refers to the structured organization of the input variables for training the model based on specific criteria or rules such as but not limited to domain knowledge, correlation and redundancy, feature importance, data type and scale, hierarchical or structural grouping.
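By way of a non-limiting illustration only, the one or more instructions extracted from a request may be represented as a structured object, as sketched below. The field names and types are assumptions for this sketch; the disclosure only requires that the training period, the test period, the one or more features, the logical partitioning, and the logic name be conveyed in some form.

```python
# Illustrative representation of the instructions extracted from a request.
# Field names and types are assumptions made for this sketch only.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class TrainingInstructions:
    training_period: Tuple[str, str]          # e.g. ("2024-01-01", "2024-03-31")
    test_period: Tuple[str, str]              # e.g. ("2024-04-01", "2024-04-30")
    features: List[str]                       # input variables used to train the model
    logical_partitioning: Dict[str, List[str]] = field(default_factory=dict)
    logic_name: Optional[str] = None          # user-preferred logic, if any

def extract_instructions(payload: dict) -> TrainingInstructions:
    """Map a raw request payload onto the structured instructions."""
    return TrainingInstructions(
        training_period=tuple(payload["training_period"]),
        test_period=tuple(payload["test_period"]),
        features=list(payload["features"]),
        logical_partitioning=payload.get("logical_partitioning", {}),
        logic_name=payload.get("logic_name"),
    )
```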
[0051] Upon retrieving the relevant data for training the model, the execution unit 235 is configured to execute a plurality of logics utilizing the relevant data retrieved from the database 220. More specifically, the execution unit 235 is configured to execute each of the plurality of logics utilizing the relevant data.
[0052] The plurality of logics includes at least one of, but is not limited to, two-factor regression, multitude decision tree, periodicity logic, scalar boost logic, and heuristic gain logic. In one embodiment, the two-factor regression refers to a statistical modeling technique used to analyze relationships between variables, such as but not limited to, network load, environmental conditions, and user density factors, that influence the performance, behavior, and/or operational conditions of the network 105. The multitude decision tree refers to an ensemble learning technique. The multitude decision tree utilizes multiple decision trees to solve complex problems related to management and optimization of the network 105. The periodicity logic refers to a set of techniques, such as but not limited to, time-series analysis, machine learning, and signal processing, to detect and analyze periodic patterns, such as but not limited to, traffic patterns, resource utilization, and performance monitoring, in network data. The scalar boost logic refers to a method for enhancing the performance metrics associated with scalar values in the network 105.
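By way of a non-limiting illustration only, the execution of the plurality of logics by the execution unit 235 may be sketched as follows. Since the named logics are not tied to any particular library, generic scikit-learn estimators are used purely as assumed stand-ins for the two-factor regression, multitude decision tree, scalar boost logic, and heuristic gain logic.

```python
# Illustrative sketch of the execution unit 235: every candidate logic is
# trained on the same relevant data and its output is collected. The mapping
# of the named logics to scikit-learn estimators is an assumption for this
# sketch only; the disclosure does not prescribe any particular library.
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier

CANDIDATE_LOGICS = {
    "two_factor_regression": LogisticRegression(max_iter=1000),    # assumed stand-in
    "multitude_decision_tree": RandomForestClassifier(),           # tree-ensemble stand-in
    "scalar_boost": GradientBoostingClassifier(),                  # boosting stand-in
    "heuristic_gain": DecisionTreeClassifier(criterion="entropy"), # information-gain stand-in
}

def execute_logics(X_train, y_train, X_test):
    """Fit each candidate logic and return its predictions for the test period."""
    outputs = {}
    for name, logic in CANDIDATE_LOGICS.items():
        logic.fit(X_train, y_train)
        outputs[name] = logic.predict(X_test)
    return outputs
```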
[0053] In one embodiment, the execution unit 235 is configured to apply each of the plurality of logics to the relevant data. Accordingly, upon execution of each of the plurality of logics by the execution unit 235, an output is generated. Thereafter, the output generated is stored in the database 220. Further, the output generated is utilized as foundational data for subsequent analysis and comparison.
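By way of a non-limiting illustration only, the output of each executed logic may be persisted in and fetched from the database 220 as sketched below, assuming an Elastic search database accessed through the official Elasticsearch Python client (version 8.x). The index name and document layout are assumptions for this sketch.

```python
# Illustrative sketch of persisting each logic's output in the database 220.
# Index name "logic_outputs" and the document layout are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def store_output(logic_name: str, request_id: str, predictions) -> None:
    """Persist one logic's test-period output for later comparison."""
    es.index(
        index="logic_outputs",
        document={
            "request_id": request_id,
            "logic": logic_name,
            "predictions": [float(p) for p in predictions],
        },
    )

def fetch_outputs(request_id: str) -> dict:
    """Fetch all stored outputs belonging to one training request."""
    hits = es.search(
        index="logic_outputs",
        # ".keyword" assumes default dynamic mapping for the string field.
        query={"term": {"request_id.keyword": request_id}},
        size=100,
    )["hits"]["hits"]
    return {h["_source"]["logic"]: h["_source"]["predictions"] for h in hits}
```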
[0054] The performance analyser 240 fetches the output generated by each of the plurality of logics stored in the database 220. Subsequent to fetching the output generated from the database 220, the performance analyser 240 is configured to compare the output generated by each logic of the plurality of logics with outputs generated by the rest of the plurality of executed logics. In this regard, the performance analyser 240 compares and analyses the outputs generated by each of the plurality of logics to evaluate the performance of each of the plurality of logics.
[0055] The performance analyser 240 assesses the performance of each of the plurality of logics based on one or more evaluation metrics. The one or more evaluation metrics include at least one of, but not limited to, accuracy, precision, and recall. In one embodiment, the accuracy facilitates a user, such as a network operator, in assessing classification tasks related to the performance of the network 105 and resource allocation. The precision assesses the effectiveness of the trained model, and further facilitates minimizing false positive outcomes during decision-making processes related to tasks, such as, but not limited to, traffic routing or anomaly detection in the network 105. In an embodiment, the recall evaluates the trained model's ability to accurately identify all pertinent instances of interest, such as but not limited to, detecting network failures and/or ensuring Quality of Service (QoS) commitments are achieved. Subsequent to the comparison by the performance analyser 240, the selection unit 245 of the system 120 is configured to select a preferred logic from the plurality of logics. Accordingly, the preferred logic is the selected logic. The selection unit 245 selects the preferred logic based on the comparison utilizing the one or more evaluation metrics by the performance analyser 240. Further, in one embodiment, the selection unit 245 sets the preferred logic from the plurality of logics as the default to train the model. Further, the preferred logic selected by the selection unit 245 is utilized to train the model.
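By way of a non-limiting illustration only, the comparison performed by the performance analyser 240 and the selection performed by the selection unit 245 may be sketched as follows. The equal weighting of accuracy, precision, and recall used to rank the logics is an assumed selection rule for this sketch; any suitable evaluation criterion may be substituted.

```python
# Illustrative sketch of the performance analyser 240 and selection unit 245:
# score every logic's stored output against the ground truth for the test
# period and pick the best one. Equal weighting of the metrics is an assumption.
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate_and_select(outputs: dict, y_true):
    """Return the preferred logic and the per-logic metric scores."""
    scores = {}
    for logic_name, y_pred in outputs.items():
        scores[logic_name] = {
            "accuracy": accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred, average="weighted", zero_division=0),
            "recall": recall_score(y_true, y_pred, average="weighted", zero_division=0),
        }
    # Preferred logic = highest mean of the three metrics (illustrative rule only).
    preferred = max(scores, key=lambda name: sum(scores[name].values()) / 3)
    return preferred, scores
```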
[0056] Further, the depiction unit 250 facilitates a visual representation of the preferred logic from the plurality of logics as selected by the selection unit 245. The depiction unit 250 is configured to generate the visual representation pertaining to the performance of each of the plurality of logics based on analysis of the output generated by each of the plurality of logics. The visual representation includes at least one of, but not limited to, graphs and tables. Further, the depiction unit 250 displays the visual representation on the user interface 215 of the UE 110.
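By way of a non-limiting illustration only, the depiction unit 250 may render the per-logic scores as a table and a bar chart, as sketched below. The use of pandas and matplotlib is an assumed rendering choice for this sketch and is not prescribed by the present disclosure.

```python
# Illustrative sketch of the depiction unit 250 turning per-logic scores into
# a table and a bar chart for display on the user interface 215.
import pandas as pd
import matplotlib.pyplot as plt

def visualize_scores(scores: dict, preferred: str) -> None:
    table = pd.DataFrame(scores).T          # rows = logics, columns = metrics
    print(table.round(3))                   # tabular view of the comparison
    table.plot(kind="bar", figsize=(8, 4), title=f"Preferred logic: {preferred}")
    plt.ylabel("score")
    plt.tight_layout()
    plt.savefig("logic_comparison.png")     # rendered to the UE in practice
```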
[0057] FIG. 3 is a schematic representation of a workflow of the system of FIG. 2 communicably coupled with the User Equipment (UE) 110, according to one or more embodiments of the present disclosure. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the first UE 110a and the system 120 for the purpose of description and illustration, and should nowhere be construed as limiting the scope of the present disclosure.
[0058] As mentioned earlier in FIG. 1, the UE 110a may include an external storage device, a bus, a main memory, a read-only memory, a mass storage device, communication port(s), and a processor. The exemplary embodiment as illustrated in FIG. 3 will be explained with respect to the first UE 110a without deviating from or limiting the scope of the present disclosure. The first UE 110a includes one or more primary processors 305 communicably coupled to the one or more processors 205 of the system 120. The one or more primary processors 305 are coupled with a memory 310 storing instructions which are executed by the one or more primary processors 305. Execution of the stored instructions by the one or more primary processors 305 enables the first UE 110a to transmit one or more requests to the one or more processors 205 via the user interface 215 to train the model. In an embodiment, the one or more requests include user data pertaining to the model to be trained using Artificial Intelligence/Machine Learning (AI/ML) techniques. The AI/ML techniques can include, but are not limited to, AI/ML algorithms. The one or more requests is received in the form of a Hypertext Transfer Protocol (HTTP) request.
[0059] As illustrated in FIG. 2, the system 120 includes the one or more processors 205, the memory 210, the user interface 215, and the database 220. The operations and functions of the one or more processors 205, the memory 210, the user interface 215, and the database 220 are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition. Further, the processor 205 includes the transceiver 225, the retrieving unit 230, the execution unit 235, the performance analyser 240, the selection unit 245 and the depiction unit 250. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0060] FIG. 4 is an exemplary architecture 400 which can be implemented in the system 120 of FIG. 2 for training the model based on the selected logic, according to one or more embodiments of the present invention. The exemplary embodiment as illustrated in FIG. 4 includes the user interface 215, a load balancer 405, a model creation module 410, an accuracy analyser 415, the database 220, a selection module 420, and a visualization module 425.
[0061] The user interface 215 serves as the front-end component, which facilitates the user, such as, but not limited to, the network operator, a subscriber, and a network admin, to interact with the architecture 400. The user sends the one or more requests to train the model to the architecture 400 via the user interface 215. In an embodiment, the model can be, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. Further, the user interface 215 inputs the received request to the load balancer 405.
[0062] On receipt of the one or more requests, the load balancer 405 of the architecture 400 facilitates a balanced distribution of the received requests from the user via the user interface 215 to the model creation module 410.
[0063] Thereafter, the model creation module 410 of the architecture 400 initiates the model creation process. The model creation process includes configuring the model based on user-provided information, such as but not limited to, the training period, test period, features, logical partitioning, and algorithm name.
[0064] Upon creation of the model, the model creation module 410 retrieves the relevant data from the database 220. Retrieving the relevant data ensures that the data required for the training and test periods is fetched. The model creation module 410 further executes the plurality of logics utilizing the relevant data retrieved from the database 220. In an embodiment, the plurality of logics includes at least one of, but not limited to, two-factor regression, multitude decision tree, periodicity algorithm, scalar boost algorithm, and heuristic gain algorithm, for training the model. Further, each of the plurality of logics is applied to, or more specifically executed utilizing, the relevant data.
[0065] On execution of each of the plurality of logics utilizing the relevant data, an output is generated. The output generated is thereafter stored in the database 220, such as, but not limited to, an Elastic search database, for facilitating data storage and retrieval.
[0066] In an embodiment, the accuracy analyser 415 of the architecture 400 fetches the output generated by each of the plurality of logics, which is stored in the database 220. Further, the accuracy analyser 415 compares and analyses the output generated by each of the plurality of logics to evaluate the performance of each of the plurality of logics. The evaluation is processed based on the one or more evaluation metrics, such as but not limited to, accuracy, precision, and recall. The analysis performed by the accuracy analyser 415 facilitates the architecture 400 in identifying the preferred logic from the plurality of logics, which is most relevant for the specific use case as specified by the user. Further, the output of the accuracy analyser 415 is transmitted to the selection module 420.
[0067] The selection module 420 of the architecture 400 is configured to select the preferred logic from the plurality of logics based on the results obtained from the accuracy analyser 415. Further, the selection module 420 sets the selected logic as the default logic to train the model.
[0068] Upon selecting the preferred logic from the plurality of logics, the visualization module 425 of the architecture 400 generates a visual representation of the selected preferred logic. In an embodiment, the visual representation pertaining to the performance of each of the plurality of logics, based on analysis of the output generated by each of the plurality of logics, includes at least one of, graphs and tables. The visualization module 425 displays the visual representation on the user interface 215. The visualization module 425 facilitates users, such as but not limited to a network operator, to compare and understand the results obtained from each of the plurality of logics applied on the relevant data retrieved from the database 220 to train the model. Further, the visualization module 425 facilitates the user in assessing the results effectively for informed decision-making.
[0069] FIG. 5 is a signal flow diagram for training the model based on the selected logic, according to one or more embodiments of the present invention. For the purpose of description, the signal flow diagram is described with the embodiments as illustrated in FIG. 2 and FIG. 4 and should nowhere be construed as limiting the scope of the present disclosure.
[0070] At step 505, the user sends the one or more requests to train the model via the user interface 215. In an embodiment, the one or more requests include input variables required to train the model. In an embodiment, the input variables refer to measurable attributes, such as but not limited to, numerical attributes, categorical attributes, ordinal attributes, and binary attributes, that are used as inputs to the model. The one or more requests is received via a communication protocol. The communication protocol is one of, but not limited to, a Hypertext Transfer Protocol (HTTP). At step 510, upon receiving the request from the user, the user interface 215 inputs the HTTP request to the load balancer 405. The request includes essential information, such as but not limited to, the training period, test period, features, logical partitioning, and algorithm name.
[0071] At step 515, the load balancer 405 facilitates balanced distribution of the received HTTP request to the model creation module 410 to configure the model.
[0072] At step 520, upon configuring the model based on the received request, the model creation module 410 retrieves the relevant data from the database 220. The data retrieval step ensures that the necessary data for the specified training and test periods is fetched. The model creation module 410 further executes the plurality of logics, such as but not limited to, AI/ML algorithms, on the retrieved data. In an embodiment, the plurality of logics includes, but is not limited to, two-factor regression, multitude decision tree, periodicity algorithm, scalar boost algorithm, and heuristic gain algorithm, for training the model. Further, the plurality of logics is applied to the retrieved data and outputs are generated for each of the AI/ML techniques applied. The resulting outputs are stored in the database 220. In an embodiment, the database 220 includes, but is not limited to, an Elastic search database facilitating data storage and retrieval.
[0073] At step 525, the accuracy analyser 415 fetches the generated output from the Elastic search database. Further, the accuracy analyser 415 compares and analyses the outputs generated by each of the plurality of logics by evaluating their performance. The evaluation is processed based on relevant evaluation metrics, such as but not limited to, accuracy, precision, and recall. The analysis facilitates the architecture 400 in identifying the preferred logic from the plurality of logics that is most relevant for the specific use case as specified by the user. Further, the outputs of the accuracy analyser 415 are transmitted to the selection module 420.
[0074] At step 530, the output of the accuracy analyser 415 is processed via the selection module 420 to select the preferred logic from the plurality of logics based on the comparison utilizing the one or more evaluation metrics. The selection module 420 determines the preferred logic from the plurality of logics, which is most relevant to return the optimal results as requested by the user. Further, the selection module 420 sets the selected most relevant logic as the default logic.
[0075] At step 535, in accordance with the user request to train the model, the visualization module 425 generates the visual representation of each of the executed plurality of logics. The visualization module 425 presents the data pertaining to the executed plurality of logics in the form of visually informative graphs and tables.
[0076] At step 540, the visualization module 425 displays the visual representation on the user interface 215. The user, such as, but not limited to, a network operator, can interpret and compare the performance of each of the plurality of logics based on the provided visualizations.
[0077] FIG. 6 is a flow diagram illustrating a method for training the model based on the selected logic, according to one or more embodiments of the present disclosure.
[0078] At step 605, the method 600 includes the step of receiving one or more requests from a user for training the model, such as, but not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model.
[0079] At step 610, the method 600 includes the step of retrieving from the database 220 relevant data for training the model based on one or more instructions extracted from the one or more requests. The one or more instructions extracted from the one or more requests include at least one of, but not limited to, information pertaining to the training period of the model, the test period of the trained model, one or more features, logical partitioning of the model, and the logic name to train the model.
[0080] At step 615, the method 600 includes the step of executing a plurality of logics utilizing the relevant data retrieved from the database 220. The plurality of logics includes at least one of, but is not limited to, the two-factor regression, the multitude decision tree, the periodicity logic, the scalar boost logic, and the heuristic gain logic.
[0081] At step 620, the method 600 includes the step of comparing output generated by each logic with outputs generated by the rest of the plurality of executed logics. The performance analyser 240 compares and analyses the outputs generated by each of the plurality of logics to evaluate the performance of each of the plurality of logics. The performance analyser 240 assesses the performance of each of the plurality of logics based on the one or more evaluation metrics. The output of the performance analyser 240 is transmitted to the selection unit 245.
[0082] At step 625, the method 600 includes the step of selecting the preferred logic from the plurality of logics based on the comparison utilizing one or more evaluation metrics. The selection unit 245 sets the selected logic from the plurality of logics as the default to train the model. The depiction unit 250 facilitates a visual representation of the preferred logic selected by the selection unit 245 from the plurality of logics. The depiction unit 250 of the system 120 is configured to generate the visual representation pertaining to the performance of each of the plurality of logics based on analysis of the output generated by each of the plurality of logics, which includes at least one of, graphs and tables. Further, the depiction unit 250 displays the visual representation on the user interface 215 of the UE 110.
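By way of a non-limiting illustration only, the steps 605 to 625 may be tied together as sketched below, reusing the illustrative helpers introduced earlier in this description. The function names, the wiring, and the assumed helper retrieve_relevant_data() are illustrative only and do not limit the method 600.

```python
# Non-limiting end-to-end sketch of steps 605-625, reusing the illustrative
# helpers defined in the earlier sketches (extract_instructions, execute_logics,
# evaluate_and_select, visualize_scores). retrieve_relevant_data is an assumed
# helper standing in for retrieval from the database 220.
def train_with_selected_logic(payload, retrieve_relevant_data):
    instructions = extract_instructions(payload)                              # step 605: parse request
    X_train, y_train, X_test, y_true = retrieve_relevant_data(instructions)   # step 610: fetch relevant data
    outputs = execute_logics(X_train, y_train, X_test)                        # step 615: run all logics
    preferred, scores = evaluate_and_select(outputs, y_true)                  # steps 620-625: compare, select
    visualize_scores(scores, preferred)                                       # optional visual representation
    return preferred
```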
[0083] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0084] The present disclosure incorporates a technical advancement that facilitates end users in making decisions for selecting the preferred logic from the plurality of logics to train the model, without requiring in-depth knowledge of Machine Learning (ML) principles. By automating the logic selection from the plurality of logics based on the user requirements, the present invention facilitates reduced time and complexity in the selection process. The solution further facilitates automatic evaluation of the user-provided data for model training. The solution compares the user-provided data, such as input variables to train the model, against a range of the plurality of logics and identifies the preferred logic from the plurality of logics that is most relevant to yield the appropriate results as per the user requirements. The present invention further facilitates the users in focusing on their specific problem, without requiring deep knowledge of the technical aspects of AI/ML techniques, to select the logic for the model training.
[0085] The present invention provides various advantages, including optimal resource utilization and reduced execution time. By eliminating the time and complexity required for selecting the preferred logic from the plurality of logics to train a model, the solution efficiently utilizes AI/ML techniques for applications such as, but not limited to, network optimization and management, Self-Organizing Networks (SON), and predictive maintenance, for providing an optimal solution. Further, the solution facilitates effective logic selection from the plurality of logics, where the selected logic is tailored to the user data requirements. The solution provides comprehensive graph visualization to illustrate the performance metrics of the plurality of logics applied to train the model. The visualization presents the results obtained from each of the executed logics in a clear and visually appealing manner. The solution selects a preferred logic from the plurality of logics based on the comparison and analysis performed by the performance analyser utilizing one or more evaluation metrics, such as but not limited to, accuracy, precision, and recall. Further, the solution sets the selected preferred logic from the plurality of logics as the default logic to train the model. The solution empowers decision-makers to make informed choices and select the most effective logic to train the model based on their specific requirements, ensuring optimal performance and accurate anomaly detection.
[0086] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0087] Communication system – 100
[0088] Network – 105
[0089] User Equipment – 110
[0090] Server – 115
[0091] System – 120
[0092] Processor – 205
[0093] Memory – 210
[0094] User Interface – 215
[0095] Database – 220
[0096] Transceiver – 225
[0097] Retrieving unit – 230
[0098] Execution unit – 235
[0099] Performance analyser – 240
[00100] Selection unit – 245
[00101] Depiction unit – 250
[00102] Load balancer – 405
[00103] Model creation module – 410
[00104] Accuracy analyser – 415
[00105] Selection module – 420
[00106] Visualization module – 425
CLAIMS
We Claim
1. A method (600) for training a model based on a selected logic, the method comprising the steps of:
receiving (605) by one or more processors (205), one or more requests from a user for training the model;
retrieving (610), by the one or more processors (205), from a database, relevant data for training the model based on one or more instructions extracted from the one or more requests;
executing (615), by the one or more processors (205), a plurality of logics utilizing the relevant data retrieved from the database;
comparing (620), by the one or more processors (205), output generated by each logic with outputs generated by rest of the plurality of executed logics; and
selecting (625), by the one or more processors (205), a preferred logic from at least one of the plurality of logics based on the comparison utilizing one or more evaluation metrics.
2. The method (600) as claimed in claim 1, wherein the request is received in the form of a Hypertext Transfer Protocol (HTTP).
3. The method (600) as claimed in claim 1, wherein the one or more instructions extracted from the one or more requests includes at least one of, information pertaining to a training period, a test period, one or more features, logical partitioning, and a logic name.
4. The method (600) as claimed in claim 1, wherein the plurality of logics include at least one of, two-factor regression, multitude decision tree, periodicity logic, scalar boost logic, and heuristic gain logic.
5. The method (600) as claimed in claim 1, wherein the output generated by each of the plurality of logics is stored in the database.
6. The method (600) as claimed in claim 1, wherein the method further comprises the steps of:
generating, by the one or more processors, a visual representation pertaining to performance of each of the plurality of logics based on analysis of output generated by each of the plurality of logics; and
displaying, by the one or more processors, the visual representation on a User Interface (UI) of a User Equipment (UE).
7. The method (600) as claimed in claim 6, wherein the visual representation pertaining to performance of each of the plurality of logics based on analysis of output generated by each of the plurality of logics include at least one of, graphs and tables.
8. The method (600) as claimed in claim 1, wherein the one or more evaluation metrics include at least one of accuracy, precision, and recall.
9. A system (120) for training a model based on a selected logic, the system comprising:
a transceiver (225), configured to, receive, one or more requests from a user for training the model;
a retrieving unit (230), configured to, retrieve, from a database, relevant data for training the model based on one or more instructions extracted from the one or more requests;
an execution unit (235), configured to, execute, a plurality of logics utilizing the relevant data retrieved from the database;
a performance analyser (240), configured to compare, output generated by each logic with outputs generated by the rest of the plurality of executed logics; and
a selection unit (245), configured to select, a preferred logic from the plurality of logics based on the comparison utilizing one or more evaluation metrics.
10. The system (120) as claimed in claim 9, wherein the request is received in the form of a Hypertext Transfer Protocol (HTTP).
11. The system (120) as claimed in claim 9, wherein the one or more instructions extracted from the one or more requests includes at least one of, information pertaining to a training period, a test period, one or more features, logical partitioning, and a logic name.
12. The system (120) as claimed in claim 9, wherein the plurality of logics include at least one of, two-factor regression, multitude decision tree, periodicity logic, scalar boost logic, and heuristic gain logic.
13. The system (120) as claimed in claim 9, wherein the output generated by each of the plurality of logics is stored in the database.
14. The system (120) as claimed in claim 9, wherein a visual depiction unit (250) of the system is configured to:
generate, a visual representation pertaining to performance of each of the plurality of logics based on analysis of output generated by each of the plurality of logics; and
display, the visual representation on a user interface of a User Equipment (UE) (110).
15. The system (120) as claimed in claim 14, wherein the visual representation pertaining to performance of each of the plurality of logics based on analysis of output generated by each of the plurality of logics include at least one of, graphs and tables.
16. The system (120) as claimed in claim 9, wherein the one or more evaluation metrics include at least one of accuracy, precision, and recall.
17. A User Equipment (UE) (110), comprising:
one or more primary processors (305) communicatively coupled to one or more processors (205), the one or more primary processors (305) coupled with a memory (310), wherein said memory stores instructions which when executed by the one or more primary processors causes the UE (110) to:
transmit, one or more requests to the one or more processors via a user interface to train a model; and
wherein the one or more processors (205) is configured to perform the steps as claimed in claim 1.