Abstract: SYSTEM AND METHOD FOR MANAGING SELECTION AND EXECUTION SEQUENCE OF ONE OR MORE ARTIFICIAL INTELLIGENCE (AI) MODELS The present invention relates to a system (108) and a method (600) for managing the selection and execution sequence of one or more Artificial Intelligence (AI) models (220). The method (600) includes the step of analysing a request received from a user to identify at least a type of task to be performed. Thereafter, generating a list comprising the one or more AI models (220) to perform the task based on the analysis of the request. Furthermore, receiving an input from the user corresponding to the selection of the one or more AI models (220) from the generated list and an execution sequence of the selected one or more AI models (220). The method (600) includes the step of providing feedback corresponding to the selection of the one or more AI models (220) and the execution sequence of the one or more AI models (220) so as to modify the selection and the execution sequence of the one or more AI models (220). Ref. Fig. 2
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MANAGING SELECTION AND EXECUTION SEQUENCE OF ONE OR MORE ARTIFICIAL INTELLIGENCE (AI) MODELS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication systems, and more particularly, to a method and a system for managing the selection and execution sequence of one or more Artificial Intelligence (AI) models.
BACKGROUND OF THE INVENTION
[0002] In general, with the increase in the number of users, network service providers have been implementing upgrades to enhance service quality so as to keep pace with such high demand. With the advancement of technology, there is a demand for telecommunication services to induct up-to-date features into the scope of provision so as to enhance the user experience. For this purpose, integrating Artificial Intelligence (AI) and Machine Learning (ML) into various network practices, such as estimating network performance, tracking the health of a network, enhancing user-interactive features, and monitoring security, has become essential. Incorporating advanced AI/ML methodologies has become a priority to keep up with the rapidly evolving telecom sector. AI/ML incorporation is usually performed by training models with specific datasets to enable them to recognize patterns and trends and, based on these, to predict the required output. ML training on data extracted from a data source is performed by a specifically constructed system.
[0003] However, there may be some specialized tasks or industries requiring unique training methodologies or approaches that are not catered to by existing systems. Existing systems may use a uniform set of training methodologies for all users and tasks, regardless of individual needs or preferences. This can lead to less effective outcomes for specific use cases.
[0004] Presently, there is no mechanism available whereby a user can customize the methodology for training an AI/ML model for efficient, rapid, and accurate task execution. Users face limitations in customizing methodology sequences, which hinders their ability to tailor machine learning workflows to specific tasks or datasets, and users may not have insight into which methodologies are being used in existing systems. This lack of transparency can make it difficult to understand or trust the results.
[0005] Then again, the contemporary approach has limited adaptability to evolving needs, as technology and user requirements are constantly evolving, and an inflexible system may struggle to adapt to new challenges or opportunities. There is a need for a mechanism that would allow a user to customize the training methodology and its sequence as per the requirement.
[0006] There is a need for a system, and a method thereof, that allows a user to have insight into which methodologies are being used in an existing system and to customize the training methodology and its sequence as per the requirement.
SUMMARY OF THE INVENTION
[0007] One or more embodiments of the present disclosure provide a method and a system for managing the selection and execution sequence of one or more AI models.
[0008] In one aspect of the present invention, the method for managing selection and execution sequence of one or more Artificial Intelligence (AI) models is disclosed. The method includes the step of analysing, by one or more processors, a request received from a user to identify at least a type of task to be performed. The method further includes the step of generating, by the one or more processors, a list comprising the one or more AI models to perform the task based on the analysis of the request. The method further includes the step of receiving, by the one or more processors, an input from the user corresponding to selection of the one or more AI models from the generated list and an execution sequence of the selected one or more AI models. The method further includes the step of providing, by the one or more processors, feedback corresponding to the selection of the one or more AI models and the execution sequence of the one or more AI models so as to modify the selection and the execution sequence of the AI models.
[0009] In another embodiment, the request comprises a dataset, and one or more characteristics corresponding to the dataset are identified by the one or more processors based on the analysis.
[0010] In yet another embodiment, the type of task is identified based on the analysis of the one or more characteristics corresponding to each dataset of the request, and wherein the one or more characteristics of the dataset comprise size, dimensionality, and datatypes.
[0011] In yet another embodiment, the type of the task is at least one of classification, regression, and clustering.
[0012] In yet another embodiment, the generated list comprises the one or more AI models, wherein the generated list is transmitted to the UE.
[0013] In yet another embodiment, on selection of the one or more AI models, the method comprises the step of generating, by the one or more processors, a visual representation of the execution sequence of one or more selected AI models on at least one of the UE and the UI.
[0014] In yet another embodiment, modifying the selection and the execution sequence of the one or more AI models comprises the step of receiving, by the one or more processors, a modification input from the user based on the feedback.
[0015] In yet another embodiment, the modification input corresponds to modification of at least one of the execution sequence and one or more parameters of each of the one or more selected AI models, wherein the one or more parameters are at least one of a learning rate, a regularization strength, and a batch size of each of the one or more AI models.
[0016] In yet another embodiment, the method comprises the step of storing, by the one or more processors, logs related to performance metrics corresponding to training of the one or more selected AI models utilizing the dataset, and wherein the performance metrics are at least one of an accuracy, a loss, and a convergence rate.
[0017] In another aspect of the present invention, the system for managing selection and execution sequence of one or more AI models is disclosed. The system includes an analysing unit configured to analyse a request received from a user to identify at least a type of task to be performed. The system further includes a generating unit configured to generate a list comprising the one or more AI models to perform the task based on the analysis of the request. The system further includes a receiving unit configured to receive an input from the user corresponding to selection of one or more AI models from the generated list and an execution sequence of the selected one or more AI models. The system further includes a feedback unit configured to provide feedback corresponding to the selection of the one or more AI models and the execution sequence of the one or more AI models so as to modify the selection and the execution sequence of the one or more AI models.
[0018] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed, wherein the instructions, when executed by a processor, cause the processor to analyse a request received from a user to identify at least a type of task to be performed. The processor is further configured to generate a list comprising the one or more AI models to perform the task based on the analysis of the request. The processor is further configured to receive an input from the user corresponding to selection of one or more AI models from the generated list and an execution sequence of the selected one or more AI models. The processor is further configured to provide feedback corresponding to the selection of the one or more AI models and the execution sequence of the one or more AI models so as to modify the selection and the execution sequence of the AI models.
[0019] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0021] FIG. 1 is an exemplary block diagram of an environment for managing selection and execution sequence of one or more AI models, according to one or more embodiments of the present invention;
[0022] FIG. 2 is an exemplary block diagram of a system for managing the selection and the execution sequence of one or more AI models, according to one or more embodiments of the present invention;
[0023] FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention;
[0024] FIG. 4 is an exemplary architecture for managing the selection and the execution sequence of one or more AI models, according to one or more embodiments of the present disclosure;
[0025] FIG. 5 is an exemplary signal flow diagram illustrating the flow for managing the selection and the execution sequence of one or more AI models, according to one or more embodiments of the present disclosure; and
[0026] FIG. 6 is a flow diagram of a method for managing the selection and the execution sequence of one or more AI models, according to one or more embodiments of the present invention.
[0027] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0028] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0029] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0030] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0031] Various embodiments of the present invention provide a system and a method for managing a selection and execution sequence of one or more Artificial Intelligence (AI) models. The present invention includes an interface which provides users with a transparent and intuitive way to select the one or more AI models and arrange the selected one or more AI models in an optimal execution sequence, leading to better decision-making and potentially higher performance of the one or more AI models. The present invention enhances the system's ability by enabling the user to adjust or fine-tune the execution sequence of the selected one or more AI models as per the requirements.
[0032] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing a selection and execution sequence of one or more Artificial Intelligence (AI) models 220, according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, and a system 108. In an embodiment, the one or more AI models refer to different frameworks or paradigms for solving problems or performing tasks using logic. The one or more AI models 220 represent various approaches to logic design and problem-solving, each suited to different types of tasks. The disclosed system 108 enables a user to select the one or more AI models 220 among a plurality of the AI models within the system 108 for the given task. Herein, the system 108 enables the user to arrange the selected one or more AI models 220 in the optimal execution sequence for chaining the one or more AI models 220 together in order to execute the task.
[0033] For the purpose of description and explanation, the description will be explained with respect to one or more User Equipments (UEs) 102, or to be more specific, with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0034] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as smartphones, Virtual Reality (VR) devices, Augmented Reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0035] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0036] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0037] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the server 104 may be operated by an entity, wherein the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0038] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is adapted to be embedded within the server 104 or to exist as an individual entity.
[0039] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0040] FIG. 2 is an exemplary block diagram of the system 108 for managing the selection and the execution sequence of one or more AI models 220, according to one or more embodiments of the present invention.
[0041] As per the illustrated and preferred embodiment, the system 108 for managing the selection and the execution sequence of one or more AI models 220 includes one or more processors 202, a memory 204, a storage unit 206, a plurality of AI models 220, and a User Interface (UI) 222. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0042] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for managing the selection and the execution sequence of one or more AI models 220. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0043] As per the illustrated embodiment, the storage unit 206 is configured to store data associated with the plurality of AI models 220. The storage unit 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not-only-Structured-Query-Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of the storage unit 206 types are non-limiting and may not be mutually exclusive, e.g., the database can be both commercial and cloud-based, or both relational and open-source, etc.
[0044] As per the illustrated embodiment, the system 108 includes the plurality of AI models 220. Herein, the plurality of AI models 220 are systematic procedures or formulas for solving problems or performing tasks, which are used to process data, make decisions, and perform various operations. In an alternate embodiment, the plurality of AI models 220, which select suitable logics for particular tasks, are generally Artificial Intelligence/Machine Learning (AI/ML) models. Herein, the tasks are related to machine learning tasks. For example, the model 220 facilitates solving real-world problems without extensive manual intervention. Herein, the plurality of AI models 220 are also referred to as the one or more AI models 220, and the terms can be used interchangeably without limiting the scope of the invention.
[0045] As per the illustrated embodiment, the system 108 includes the UI 222. In an alternate embodiment, the UI 222 is included in the UE 102. In an embodiment, the UI 222 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The UI 222 allows a user to transmit a request to the system 108 for performing the task. Herein, the user acts as the data source. In one embodiment, the user may be at least one of, but not limited to, a network operator. In one embodiment, the UI 222 allows the users to select the one or more AI models 220 and arrange the selected one or more AI models 220 in the optimal execution sequence. Further, the UI 222 allows the users to quickly adjust or fine-tune the execution sequence of the selected one or more AI models 220 as per the requirements of the user, which enhances the adaptability of the system 108. In one embodiment, the UI 222 is embedded within the system 108 or within the UE 102. Herein, the UI 222 of the system 108 and the UI 222 of the UE 102 can be used interchangeably without limiting the scope of the invention.
[0046] As per the illustrated embodiment, the system 108 includes the processor 202 for managing the selection and the execution sequence of one or more AI models 220. The processor 202 includes a receiving unit 208, an analysing unit 210, a generating unit 212, a transmitting unit 214, a processing unit 216, an executing unit 218, a feedback unit 224, and a logging unit 226. The processor 202 is communicably coupled to the one or more components of the system 108 such as the memory 204, the storage unit 206, the plurality of AI models 220 and the UI 222. In an embodiment, operations and functionalities of the receiving unit 208, the analysing unit 210, the generating unit 212, the transmitting unit 214, the processing unit 216, the executing unit 218, the feedback unit 224, the logging unit 226, and the one or more components of the system 108 can be used in combination or interchangeably.
[0047] In one embodiment, initially, the receiving unit 208 of the processor 202 is configured to receive a request from a user via the UE 102 for performing the task. In particular, the user transmits the request via the UI 222 of the UE 102 for performing the task. In one embodiment, the request includes datasets and one or more characteristics corresponding to each of the datasets. Herein, the one or more characteristics of the dataset comprise at least one of, but not limited to, size, dimensionality, and datatypes. In one embodiment, the task is at least one of, but not limited to, a classification, a regression, and a clustering of the datasets received in the request.
[0048] In one embodiment, the receiving unit 208 receives the request from the users through the UE 102 via an interface specifically constructed for the purpose of connectivity between the system 108 and the UE 102. The interface includes at least one of, but not limited to, an Application Programming Interface (API). APIs are a set of rules and protocols that allow different software applications to communicate with each other. In particular, APIs are essential for integrating different systems, accessing services, and extending functionality.
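The request intake described above can be pictured with a minimal sketch. The payload shape, the `TaskRequest` container, and the `receive_request` handler below are illustrative assumptions for explanation only, not the claimed implementation of the receiving unit 208:

```python
from dataclasses import dataclass, field

@dataclass
class TaskRequest:
    """Hypothetical container for a user request: a dataset plus its characteristics."""
    dataset: list                                   # rows of the user-supplied dataset
    size: int = 0                                   # number of records (a characteristic)
    dimensionality: int = 0                         # features per record (a characteristic)
    datatypes: dict = field(default_factory=dict)   # feature name -> datatype (a characteristic)

def receive_request(payload: dict) -> TaskRequest:
    """Wrap an incoming API payload and derive the dataset characteristics."""
    data = payload.get("dataset", [])
    return TaskRequest(
        dataset=data,
        size=len(data),
        dimensionality=len(data[0]) if data else 0,
        datatypes=payload.get("datatypes", {}),
    )
```

In this sketch the characteristics (size, dimensionality, datatypes) travel with the dataset, so downstream units can analyse the request without re-reading the raw payload.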
[0049] Upon receiving the request from the UE 102, more particularly from the UI 222, the analysing unit 210 of the processor 202 is configured to analyse the request received from the UE 102 to identify at least a type of task to be performed. Herein, the type of the task is at least one of, but not limited to, classification, regression, and clustering of the dataset received in the request. The type of task is identified based on the analysis of the one or more characteristics corresponding to each dataset of the request. In one embodiment, the analysing unit 210 assesses the user's input, such as the dataset included in the received request, to understand the nature of the task.
[0050] In one embodiment, by analysing the dataset's structure, particularly the nature of the target variable and the task or problem to be solved, the analysing unit 210 identifies whether at least one of the classification, the regression, or the clustering task is to be performed. The nature of the target variable refers to the type of value the processor 202 is trying to predict or estimate in a machine learning task. The nature of the target variable plays a crucial role in determining the type of problem the system 108 is dealing with. Let us consider an example of a problem associated with image processing. Further, let us assume the dataset provided by the user is associated with images, and the images include different objects like cats, dogs, and cars, with labels. Herein, the objective is to determine what objects are included in the images. In particular, the analysing unit 210 checks the target variables, and if each image in the dataset is labeled with a category (e.g., "cat," "dog," "car"), and there is a requirement of predicting the category of new images, then the analysing unit 210 identifies that the classification task needs to be performed.
[0051] In an alternate embodiment, for example, if the dataset provided by the user is numerical, or the target variables in the dataset are numerical, and the objective is to predict a value, then the analysing unit 210 identifies that the regression task needs to be performed. In yet another embodiment, for example, if the dataset provided by the user includes similar data and the objective is to group the similar data together, then the analysing unit 210 identifies that the clustering task needs to be performed.
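The task-type inference described in the preceding paragraphs can be sketched as a simple rule over the target variable. The function name and the decision rules below are illustrative assumptions, not the claimed logic of the analysing unit 210:

```python
def identify_task_type(target_labels):
    """Infer classification / regression / clustering from the target variable.

    A minimal sketch: no labels -> only grouping is possible (clustering);
    numeric labels -> predict a value (regression); otherwise, categorical
    labels such as "cat"/"dog"/"car" -> classification.
    """
    if target_labels is None:
        return "clustering"
    if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in target_labels):
        return "regression"
    return "classification"
```

For example, labeled image categories such as `["cat", "dog", "car"]` would be routed to classification, while an unlabeled dataset would be routed to clustering.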
[0052] Upon identifying the type of task to be performed, the generating unit 212 of the processor 202 is configured to generate a list which includes the one or more AI models 220 to perform the identified task based on the analysis of the request. In one embodiment, if the classification task is to be performed, then the one or more AI models 220, such as at least one of, but not limited to, a neural network and a decision tree AI model 220, will be included in the list by the generating unit 212. In another embodiment, if the regression task is to be performed, then the one or more AI models 220, such as at least one of, but not limited to, a linear regression and a polynomial regression AI model 220, will be included in the list by the generating unit 212. In yet another embodiment, if the clustering task is to be performed, then the one or more AI models 220, such as at least one of, but not limited to, a K-means clustering and a hierarchical clustering AI model 220, will be included in the list by the generating unit 212.
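The mapping from identified task type to candidate AI models 220 described above can be sketched as a lookup table. The table contents mirror the examples in this paragraph; the names `CANDIDATE_MODELS` and `generate_model_list` are assumptions for illustration:

```python
# Task type -> candidate models, following the examples in the description.
CANDIDATE_MODELS = {
    "classification": ["neural network", "decision tree"],
    "regression": ["linear regression", "polynomial regression"],
    "clustering": ["k-means clustering", "hierarchical clustering"],
}

def generate_model_list(task_type):
    """Return the list of AI models suited to the identified task (empty if unknown)."""
    return CANDIDATE_MODELS.get(task_type, [])
```

The generated list would then be transmitted to the UE for rendering on the UI, where the user makes the selection.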
[0053] Upon generating the list including the one or more AI models 220, the transmitting unit 214 of the processor 202 is configured to transmit the generated list to at least one of, the UE 102. Thereafter, the UI 222 of the UE 102 generates a visual representation of the list including the one or more AI models 220 utilizing the generating unit 212.
[0054] Upon generating the visual representation of the generated list, the user selects the preferred one or more AI models 220 from the generated list represented on the UI 222 of the UE 102. Upon selection of the one or more AI models 220, the generating unit 212 generates the visual representation of the execution sequence of the one or more selected AI models 220 on the UI 222 of the UE 102. Further, the user arranges the execution sequence of the selected one or more AI models 220 on at least one of, the UE 102 and the UI 222.
[0055] In one embodiment, the user fine-tunes one or more parameters related to each of the selected one or more AI models 220 via the UE 102. In particular, the UI 222 of the UE 102 includes a plurality of intuitive controls which are designed to make user interactions seamless and effortless. Herein, the intuitive controls are used by the user to fine-tune one or more parameters of each of the selected one or more AI models 220. In one embodiment, the one or more parameters is at least one of, but not limited to, learning rates, regularization strengths, and batch sizes of each of the selected one or more AI models 220.
[0056] In one embodiment, the learning rate of the selected one or more AI models 220 is a crucial hyperparameter that influences how quickly or slowly the one or more AI models 220 learn during training. In one embodiment, the regularization strength is a hyperparameter that controls the amount of regularization applied to each of the selected one or more AI models 220 to prevent overfitting. Overfitting is a common problem in machine learning where the one or more AI models 220 learn to perform very well on the training dataset but fail to generalize effectively to new, unseen datasets. In one embodiment, the batch size is a crucial hyperparameter which refers to the number of training examples utilized in one iteration of the training process of the selected one or more AI models 220. Instead of processing the entire dataset at once, the selected one or more AI models 220 process smaller subsets (batches) of the dataset.
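The fine-tuning of the learning rate, regularization strength, and batch size described above can be sketched as applying per-model user overrides on top of defaults. The default values and function name below are illustrative assumptions, not values prescribed by the invention:

```python
# Assumed defaults for the three hyperparameters named in the description.
DEFAULT_PARAMS = {
    "learning_rate": 0.01,           # how fast the model learns during training
    "regularization_strength": 0.001, # penalty applied to prevent overfitting
    "batch_size": 32,                # training examples per iteration
}

def fine_tune(selected_models, overrides):
    """Build a per-model configuration: defaults updated with the user's overrides."""
    configs = {}
    for model in selected_models:
        params = dict(DEFAULT_PARAMS)
        params.update(overrides.get(model, {}))
        configs[model] = params
    return configs
```

For instance, a user adjusting only the batch size of one model through the UI controls would leave the other hyperparameters of that model at their defaults.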
[0057] Upon fine tuning the one or more parameters of each of the selected one or more AI models 220, the receiving unit 208 of the processor 202 is configured to receive an input from the user corresponding to selection of the one or more AI models 220 from the generated list and the execution sequence of the selected one or more AI models 220 arranged by the user.
[0058] Upon receiving the input from the user corresponding to the selection of the one or more AI models 220 and the execution sequence of the selected one or more AI models 220, the processing unit 216 of the processor 202 is configured to preprocess the dataset received with the request. In one embodiment, the processing unit 216 is configured to perform at least one of, but not limited to, data scaling, encoding, feature selection, and normalization of the dataset to ensure data consistency and quality within the system 108.
[0059] The data normalization is the process of at least one of, but not limited to, reorganizing the data within the dataset, removing the redundant data within the dataset, formatting the data within the dataset, removing null values from the dataset, and handling missing values within the dataset. The main goal of the processing unit 216 is to achieve a standardized data format across the system 108. The processing unit 216 eliminates duplicate data and inconsistencies, which reduces manual effort. The processing unit 216 ensures that the preprocessed dataset is stored appropriately in the storage unit 206 for subsequent retrieval and analysis.
[0060] In one embodiment, the data scaling refers to the process of normalizing or standardizing the range of independent variables (features) in the dataset. When the features are on a similar scale, the selected one or more AI models 220 perform better or converge faster. In one embodiment, encoding is the process of converting variables into a numerical format that can be used by the selected one or more AI models 220. In one embodiment, the feature selection involves choosing a subset of relevant variables for building the one or more AI models 220, which facilitates improving the performance of the selected one or more AI models 220, reducing overfitting, and decreasing computational cost.
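A minimal sketch of the preprocessing steps described above, assuming a simple record layout with one numeric feature and one categorical label (both field names are illustrative, not part of the specification):

```python
def preprocess(rows):
    """Minimal preprocessing sketch: drop rows with missing values,
    encode string labels as integers, and min-max scale the numeric feature."""
    # Handle missing values by removing incomplete rows.
    clean = [r for r in rows if r["value"] is not None and r["label"] is not None]
    # Encoding: map each distinct label to a numeric code.
    codes = {label: i for i, label in enumerate(sorted({r["label"] for r in clean}))}
    values = [r["value"] for r in clean]
    lo, hi = min(values), max(values)
    # Data scaling: normalize each numeric value into the [0, 1] range.
    return [{"value": (r["value"] - lo) / (hi - lo), "label": codes[r["label"]]} for r in clean]

rows = [
    {"value": 10.0, "label": "cat"},
    {"value": 30.0, "label": "dog"},
    {"value": None, "label": "cat"},   # dropped: missing value
    {"value": 20.0, "label": "dog"},
]
result = preprocess(rows)
```

After preprocessing, every surviving row carries a scaled value in [0, 1] and an integer label code, giving the downstream models a consistent numerical input.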
[0061] Upon preprocessing the received dataset, the executing unit 218 of the processor 202 is configured to chain the one or more selected AI models 220 together, which ensures that the dataset flows seamlessly from one AI model 220 to the next AI model 220 in the specified execution sequence. In one embodiment, the executing unit 218 includes an algorithmic framework that manages the sequencing of the selected one or more AI models 220. The algorithmic framework is a structured approach that provides a set of guidelines and tools for designing, implementing, and managing the selected one or more AI models 220 within a specific context, such as machine learning, optimization, or data processing. For example, the executing unit 218 links the one or more selected AI models 220 in a sequential manner to form a pipeline of the one or more selected AI models 220.
[0062] In one embodiment, the executing unit 218 of the processor 202 is configured to execute each of the one or more selected AI models 220 in the determined execution sequence utilizing the dataset. In particular, the dataset is fed to a first AI model in the execution sequence of the one or more selected AI models 220. Further, the first AI model processes the dataset and produces an output. Thereafter, the executing unit 218 provides the produced output to the next AI model present in the execution sequence of the one or more selected AI models 220. In particular, the output of each of the one or more selected AI models 220 is an input for a subsequent AI model of the one or more selected AI models 220. This process continues iteratively until the last AI model in the execution sequence is reached.
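The chaining described above can be sketched as a simple pipeline in which each model's output becomes the next model's input. The stand-in models below are hypothetical callables used only to illustrate the data flow, not the claimed AI models 220:

```python
def run_pipeline(models, dataset):
    """Chain models so the output of each one is the input of the next;
    the output of the last model in the sequence is the final output."""
    data = dataset
    for model in models:          # models are ordered by the execution sequence
        data = model(data)        # each model's output feeds the next model
    return data

# Hypothetical stand-in models: each is just a callable transformation.
normalize = lambda xs: [x / max(xs) for x in xs]
threshold = lambda xs: [1 if x > 0.5 else 0 for x in xs]

final_output = run_pipeline([normalize, threshold], [2, 8, 10, 3])
```

Reordering the list passed to `run_pipeline` changes the execution sequence, which is exactly the degree of freedom the user controls through the UI 222.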
[0063] For example, while executing, the one or more selected AI models 220 processes the dataset. In one embodiment, the one or more selected AI models 220 are trained on historical data associated with the previously executed task. Based on training, the one or more selected AI models 220 processes the dataset.
[0064] In an embodiment, the executing unit 218 executes each of the one or more selected AI models 220 iteratively. While iteratively processing the one or more selected AI models 220, when the last AI model in the execution sequence of the one or more selected AI models 220 produces output, the output produced by the last AI model in the execution sequence is inferred as a final output. In particular, the final output is used for further analysis or application within the network 106.
[0065] Upon execution of each of the selected one or more AI models 220 iteratively, the feedback unit 224 of the processor 202 is configured to provide feedback to the user via the UE 102. In one embodiment, the feedback includes information related to the performance of each of the selected one or more AI models 220. In particular, the final output is provided in the feedback to the user.
[0066] Upon receiving the feedback from the feedback unit 224, the user analyses the performance of each of the selected one or more AI models 220 on the UI 222. In one embodiment, the user analyses the performance of each of the selected one or more AI models 220 by comparing the performance metrics of each of the selected one or more AI models 220 with a predefined set of performance metrics. Herein, the predefined set of performance metrics are at least one of, but not limited to, an accuracy, a loss, and a convergence rate. In one embodiment, the predefined set of performance metrics are defined by the user based on the historical data related to the tasks.
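The comparison against the predefined set of performance metrics can be sketched as a tolerance check. The metric names, values, and the tolerance below are assumptions for illustration only:

```python
def within_range(metrics, targets, tolerance=0.05):
    """Check whether a model's metrics fall within a tolerance of the
    predefined target metrics (accuracy, loss, convergence rate)."""
    return all(abs(metrics[name] - targets[name]) <= tolerance for name in targets)

# Hypothetical predefined performance metrics supplied by the user.
targets = {"accuracy": 0.90, "loss": 0.10, "convergence_rate": 0.80}
model_metrics = {"accuracy": 0.92, "loss": 0.08, "convergence_rate": 0.83}

suitable = within_range(model_metrics, targets)
```

When `within_range` returns true for every selected model, the selection and execution sequence can be stored for reuse; otherwise a modification input is warranted.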
[0067] In one embodiment, when, based on the comparison, the user determines that the performance metrics of each of the selected one or more AI models 220 are similar to or within a range of the predefined set of performance metrics, the user infers that the performance of the one or more selected AI models 220 is suitable to perform the task utilizing the dataset. Thereafter, the user stores the one or more selected AI models 220 along with the execution sequence in the storage unit 206.
[0068] In an alternate embodiment, upon receiving the feedback from the feedback unit 224, the user analyses the performance of each of the selected one or more AI models 220. In one embodiment, when, based on the comparison, the user determines that the performance metrics of each of the selected one or more AI models 220 are not similar to or within a range of the predefined set of performance metrics, the user infers that the performance of the one or more selected AI models 220 is not suitable to perform the task utilizing the dataset. Then the user transmits a modification input to the processor 202 via one of the UE 102 and the UI 222. In particular, the receiving unit 208 of the processor 202 is configured to receive the modification input. Herein, the modification input corresponds to modification of one of the execution sequence and one or more parameters of each of the one or more selected AI models 220 via one of the UE 102 and the UI 222.
[0069] For example, the user transmits the modification input to the system 108 in order to at least one of, but not limited to, select different one or more AI models 220 and modify the execution sequence of the one or more AI models 220, so that the selected one or more AI models 220 are suitable for performing the task utilizing the dataset provided by the user.
[0070] In one embodiment, the logging unit 226 of the processor 202 is configured to store logs pertaining to at least one of, but not limited to, the selection of the one or more AI models 220, the output produced by each of the one or more selected AI models 220, and the performance metrics of the one or more selected AI models 220 in the storage unit 206. The logs facilitate at least one of, but not limited to, monitoring and analysing system behaviour and performance of the system over time. In an alternate embodiment, the logs pertaining to at least one of, but not limited to, the selection of the one or more AI models 220, the output produced by each of the one or more selected AI models 220, and the performance metrics of the one or more selected AI models 220 are notified to the user in real time. Advantageously, due to the automatic selection of the one or more AI models 220 and the sequencing of the one or more selected AI models 220, the accuracy and efficiency of complex tasks involving a plurality of AI models 220 are increased, thereby increasing the overall performance of the system 108.
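The logging described above can be sketched as a simple in-memory record. The record fields and the `log_store` list are hypothetical stand-ins for the storage unit 206:

```python
import time

# Hypothetical in-memory stand-in for the storage unit 206.
log_store = []

def store_log(selected_models, outputs, metrics):
    """Append a log record covering model selection, per-model outputs,
    and performance metrics, for later monitoring and analysis."""
    entry = {
        "timestamp": time.time(),
        "selected_models": selected_models,
        "outputs": outputs,
        "metrics": metrics,
    }
    log_store.append(entry)
    return entry

entry = store_log(["model_a", "model_b"], [0.42, 1], {"accuracy": 0.91, "loss": 0.09})
```

Keeping the selection, outputs, and metrics in one timestamped record is what lets later analysis correlate a given execution sequence with the performance it produced.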
[0071] The receiving unit 208, the analysing unit 210, the generating unit 212, the transmitting unit 214, the processing unit 216, the executing unit 218, the feedback unit 224, and the logging unit 226 in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0072] FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for managing the selection and the execution sequence of one or more AI models 220. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the UE 102 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0073] FIG. 3 shows communication between the UE 102 and the system 108. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the UE 102 uses a network protocol connection to communicate with the system 108. In an embodiment, the network protocol connection is the establishment and management of communication between the UE 102 and the system 108 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols. The network protocol connection includes, but is not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS) and Terminal Network (TELNET).
[0074] In an embodiment, the UE 102 includes a primary processor 302, and a memory 304 and the UI 222. In alternate embodiments, the UE 102 may include more than one primary processor 302 as per the requirement of the network 106. The primary processor 302, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0075] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for managing the selection and the execution sequence of the one or more AI models 220. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0076] In an embodiment, the UI 222 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The UI 222 of the UE 102 allows the user to select the one or more AI models 220 and arrange the one or more selected AI models 220 in the optimal execution sequence. Herein, the UI 222 is included in the UE 102. The UI 222 is further configured to provide the visual representation of the list including the one or more AI models 220 to the user. The UI 222 also provides the visual representation of the execution sequence of the one or more selected AI models 220 to the user.
[0077] As mentioned earlier in FIG.2, the system 108 includes the processors 202, and the memory 204, for managing the selection and the execution sequence of the one or more AI models 220, which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0078] Further, as mentioned earlier the processor 202 includes the receiving unit 208, the analysing unit 210, the generating unit 212, the transmitting unit 214, the processing unit 216, the executing unit 218, the feedback unit 224, and the logging unit 226 which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0079] FIG. 4 is an exemplary architecture 400 of the system 108 for managing the selection and the execution sequence of one or more AI models 220, according to one or more embodiments of the present disclosure.
[0080] The architecture 400 includes the UI 222, an Integrated Performance Management (IPM) 402, the processor 202, the storage unit 206, and a workflow manager 404 communicably coupled to each other via the network 106.
[0081] In one embodiment, the UI 222 enables the user to transmit the dataset to perform the task. Further, the UI 222 enables the user to select the one or more AI models 220 from the list and arrange the selected one or more AI models 220 in the execution sequence within the UI 222. The UI 222 provides visual representations of the selected one or more AI models 220 along with the execution sequence of the selected one or more AI models 220 in order for the user to review and adjust the execution sequence. Utilizing the UI 222, the user fine tunes the one or more parameters of each of the selected one or more AI models 220 using the intuitive controls in the UI 222.
[0082] In one embodiment, the Integrated Performance Management (IPM) 402 refers to a systematic approach to managing and enhancing performance using the one or more AI models 220 and various methodologies. This integration helps organizations align their strategies with operational execution and improve decision-making. The IPM 402, when linked to the one or more AI models 220 and tasks, provides a comprehensive framework for enhancing the performance of the system 108.
[0083] In one embodiment, upon selection of the one or more AI models 220 and the execution sequence, the system 108 is configured to feed the dataset to the first AI model in the execution sequence of the selected one or more AI models 220; the first AI model then produces the output, which is fed to the next AI model as the input. The system 108 continues the process iteratively until the last AI model in the execution sequence is reached. When the last AI model processes the received data, the final output is produced, which is represented as the result or prediction of the system 108.
[0084] Further, the logs related to the final output and the execution sequence of the one or more AI models 220 are stored in the storage unit 206. The workflow manager 404 extracts the information related to the final output and the execution sequence of the one or more AI models 220 and provides the information as the feedback to the user via the UI 222. The workflow manager 404 is a tool or system designed to streamline, coordinate, and automate tasks and processes within an organization. The workflow manager 404 facilitates managing complex workflows by defining, monitoring, and optimizing the flow of work from one step to another.
[0085] Thereafter, based on the feedback, the user provides the modification input to the system 108 utilizing the UI 222, whereby the user fine-tunes the one or more parameters of each of the selected one or more AI models 220 and changes the execution sequence of the selected one or more AI models 220.
[0086] FIG. 5 is a signal flow diagram illustrating the flow for managing the selection and the execution sequence of one or more AI models 220, according to one or more embodiments of the present disclosure.
[0087] At step 502, the system 108 receives the request from the user for executing the tasks. In particular, the request includes the dataset.
[0088] At step 504, the system 108 identifies the task to be performed based on identifying the characteristics of the dataset included in the request. In one embodiment, the task is at least one of, but not limited to, the classification, the regression, and the clustering of the datasets received in the request. Identifying tasks based on the characteristics of a dataset involves analyzing the dataset to determine the task to be performed.
[0089] At step 506, the system 108 generates the list of the one or more AI models based on the identified task to be performed. For example, if the classification task is to be performed, then the system 108 generates the list including at least one of, but not limited to, neural network and decision tree AI models 220. Further, the system 108 displays the generated list of the one or more AI models 220 on the UI 222 of the UE 102.
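Assuming a simple catalogue keyed by task type, step 506 can be sketched as a lookup. The catalogue contents and model names below are illustrative assumptions, not an exhaustive list from the specification:

```python
# Hypothetical catalogue mapping each task type to candidate AI models.
MODEL_CATALOGUE = {
    "classification": ["neural_network", "decision_tree"],
    "regression": ["linear_regression", "polynomial_regression"],
    "clustering": ["k_means", "dbscan"],
}

def generate_model_list(task_type):
    """Return the list of candidate models for the identified task type."""
    return MODEL_CATALOGUE.get(task_type, [])

candidates = generate_model_list("classification")
```

The returned list is what would be rendered on the UI 222 for the user to select from and arrange into an execution sequence.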
[0090] At step 508, the user selects the one or more AI models 220 from the generated list which is displayed on the UI 222 and the user arranges the execution sequence of the one or more AI models 220 via the UI 222.
[0091] At step 510, the system 108 executes the selected one or more AI models 220 in the arranged execution sequence utilizing the dataset. For example, the received data is fed to the first AI model which is processed, and the output is produced. The output generated by the first AI model is fed to the next AI model. The execution continues iteratively until the last AI model in the sequence is reached. When the last AI model produces the output, that output is considered as the final output of the system 108.
[0092] Herein, the final output of the one or more AI models 220 and information related to the performance of each of the selected one or more AI models 220 is provided as the feedback to the user. Further, the user analyses the performance of each of the selected one or more AI models 220 and the user transmits the modification input so as to modify the selection and the execution sequence of the one or more AI models.
[0093] FIG. 6 is a flow diagram of a method 600 for managing the selection and the execution sequence of one or more AI models 220, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0094] At step 602, the method 600 includes the step of analysing the request received from the user to identify at least the type of task to be performed. In one embodiment, the receiving unit 208 is configured to receive the request from the user for performing the task. Herein, the request includes the dataset provided by the user. For example, the request is a Hypertext Transfer Protocol version 2 (HTTP2) request. Further, the analysing unit 210 is configured to analyse the request received from the user to identify at least a type of task to be performed. In particular, the analysing unit 210 identifies the characteristics of the dataset and, based on the identified characteristics, the analysing unit 210 identifies the type of task to be performed. For example, the analysing unit 210 checks the size and the target variables in the dataset; if the dataset includes numerical values, then the analysing unit 210 identifies that the regression task needs to be performed.
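The task identification described above can be sketched with simple heuristics over the target variable. The field names and decision rules here are hypothetical assumptions for illustration, not the claimed analysis logic:

```python
def identify_task(dataset):
    """Infer the task type from simple dataset characteristics:
    a numerical target suggests regression, a categorical target suggests
    classification, and the absence of a target suggests clustering."""
    targets = [row.get("target") for row in dataset]
    if all(t is None for t in targets):
        return "clustering"                 # unlabeled data
    if all(isinstance(t, (int, float)) for t in targets):
        return "regression"                 # numerical target values
    return "classification"                 # categorical target values

task = identify_task([{"x": 1.0, "target": 2.5}, {"x": 2.0, "target": 4.9}])
```

A production analysing unit would also weigh size, dimensionality, and datatypes, as the claims recite, but the single-rule sketch shows how a characteristic maps to a task type.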
[0095] At step 604, the method 600 includes the step of generating the list comprising the one or more AI models 220 to perform the task based on the analysis of the request. In one embodiment, the generating unit 212 is configured to generate the list which includes the one or more AI models 220 to perform the identified task based on the analysis of the request. For example, based on analysis of the request, if the analysing unit 210 had identified that the regression task needs to be performed, then the generating unit 212 generates the list including the one or more AI models 220 such as at least one of, but not limited to, a linear regression and polynomial regression AI models 220 which are able to perform the regression task.
[0096] At step 606, the method 600 includes the step of receiving, an input from the user corresponding to selection of the one or more AI models 220 from the generated list and the execution sequence of the selected one or more AI models 220. In one embodiment, the receiving unit 208 is configured to receive the input from the user corresponding to the selection of the one or more AI models 220 from the generated list and the execution sequence of the selected one or more AI models 220. Upon generation of the list, the generated list is displayed on the UI 222 of the UE 102. Thereafter the user selects the preferred one or more AI models 220 from the generated list which is displayed on the UI 222 and then the user arranges the execution sequence of one or more selected AI models 220. For example, let us consider there are 10 AI models which are displayed on the UI 222 as the generated list. Based on the user preference, the user selects at least 5 AI models to perform the task and then the user arranges the execution sequence of the 5 AI models.
[0097] At step 608, the method 600 includes the step of providing feedback corresponding to the selection of the one or more AI models 220 and the execution sequence of the one or more AI models 220 so as to modify the selection and the execution sequence of the AI models 220. In one embodiment, the feedback unit 224 is configured to provide the feedback corresponding to the selection of the one or more AI models 220 and the execution sequence of the one or more AI models 220.
[0098] Based on the selection of the one or more AI models 220 from the generated list and the execution sequence of the selected one or more AI models 220, the processing unit 216 preprocesses the dataset received with the request and then the executing unit 218 chains the one or more selected AI models 220 together. Further, the executing unit 218 executes each of the one or more selected AI models 220 in the determined execution sequence using the dataset. For example, let us assume that there are 5 AI models selected for the task with the determined execution sequence such as an AI model 1, an AI model 2,…, an AI model 5. The dataset is fed to the AI model 1, which generates the output based on the fed dataset. Thereafter, the output generated by the AI model 1 is fed to the AI model 2 as the input. Based on the fed input, the AI model 2 generates the output. This process continues iteratively until the AI model 5 in the sequence is reached. The output generated by the last AI model, which is the AI model 5, is considered as the final output.
[0099] Depending on the execution of each of the selected one or more AI models 220, the feedback unit 224 is configured to provide feedback to the user. Herein, the feedback includes the selection of the one or more AI models 220, the execution sequence of the one or more AI models 220, and the final output. Based on the performance metrics of each of the selected one or more AI models 220, the user transmits the modification input to modify at least one of, the selection of the one or more AI models 220 and the execution sequence of the one or more AI models 220. Advantageously, the UI 222 provides the user a transparent and intuitive way for the selection of the one or more AI models 220 and the sequencing of the one or more selected AI models 220, which leads to better decision making and potentially higher performance of the one or more AI models 220.
[00100] In yet another aspect of the present invention, a non-transitory computer-readable medium has stored thereon computer-readable instructions that, when executed by a processor 202, cause the processor 202 to perform the following operations. The processor 202 is configured to analyse the request received from the user to identify at least the type of task to be performed. The processor 202 is further configured to generate the list comprising the one or more AI models 220 to perform the task based on the analysis of the request. The processor 202 is further configured to receive an input from the user corresponding to selection of one or more AI models 220 from the generated list and an execution sequence of the selected one or more AI models 220. The processor 202 is further configured to provide feedback corresponding to the selection of the one or more AI models 220 and the execution sequence of the one or more AI models 220 so as to modify the selection and the execution sequence of the one or more AI models 220.
[00101] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[00102] The present disclosure provides the technical advancements of customization and flexibility, allowing users to tailor machine learning workflows based on the users' specific needs and datasets, allowing for more precise control over model training and data processing. The invention improves decision making by providing users with an interactive interface offering a transparent and intuitive way to make informed choices about the sequence of models, leading to better decision-making and potentially higher model performance. The present invention enables quick adjustments to model sequences in response to changing data or evolving project requirements, enhancing the system's adaptability. The invention empowers users with varying levels of machine learning expertise to actively participate in the model development process, democratizing machine learning capabilities.
[00103] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[00104] Environment - 100;
[00105] User Equipment (UE) - 102;
[00106] Server - 104;
[00107] Network- 106;
[00108] System -108;
[00109] Processor - 202;
[00110] Memory - 204;
[00111] Storage unit – 206;
[00112] Receiving unit – 208;
[00113] Analysing unit – 210;
[00114] Generating unit – 212;
[00115] Transmitting unit – 214;
[00116] Processing unit – 216;
[00117] Executing unit -218;
[00118] Plurality of AI models – 220;
[00119] User Interface – 222;
[00120] Feedback unit – 224;
[00121] Logging unit – 226;
[00122] Primary Processor – 302;
[00123] Memory – 304;
[00124] IPM – 402;
[00125] Workflow manager – 404;
CLAIMS
We Claim:
1. A method (600) of managing selection and execution sequence of one or more Artificial Intelligence (AI) models (220), the method (600) comprising the steps of:
analysing, by one or more processors (202), a request received from a user to identify at least a type of task to be performed;
generating, by the one or more processors (202), a list comprising the one or more AI models (220) to perform the task based on the analysis of the request;
receiving, by the one or more processors (202), an input from the user corresponding to selection of the one or more AI models (220) from the generated list and an execution sequence of the selected one or more AI models (220); and
providing, by the one or more processors (202), feedback corresponding to the selection of the one or more AI models (220) and the execution sequence of the one or more AI models (220) so as to modify the selection and the execution sequence of the one or more AI models (220).
2. The method (600) as claimed in claim 1, wherein the request comprises a dataset and wherein one or more characteristics corresponding to each of the dataset are identified by the one or more processors (202) based on analysis.
3. The method (600) as claimed in claim 1, wherein the type of task is identified based on the analysis of the one or more characteristics corresponding to the dataset of the request and wherein the one or more characteristics of the dataset comprise size, dimensionality, and datatypes.
4. The method (600) as claimed in claim 1, wherein the type of the task is at least one of, classification, regression, and clustering.
5. The method (600) as claimed in claim 1, wherein the list of one or more AI models (220) comprises the one or more AI models (220), wherein the generated list is transmitted to the UE (102).
6. The method (600) as claimed in claim 1, wherein on selection of the one or more AI models (220), the method comprises the step of generating, by the one or more processors (202), a visual representation of the sequence of one or more selected AI models (220) on UE (102).
7. The method (600) as claimed in claim 1, wherein modifying the selection and the execution sequence of the one or more AI models (220) comprises the step of receiving, by the one or more processors (202), a modification input from the user based on the feedback.
8. The method (600) as claimed in claim 7, wherein the modification input corresponds to modification of one of the sequence and one or more parameters of each of the one or more selected AI models (220), wherein the one or more parameters are at least a learning rate, a regularization strength, and a batch size of each of the one or more AI models (220).
9. The method (600) as claimed in claim 1, wherein the method (600) comprises of storing, by the one or more processors (202), logs related to performance metrics corresponding to training of the one or more selected AI models (220) utilizing the dataset and wherein the performance metrics are at least one of an accuracy, a loss, and a convergence rate.
10. A system (108) for managing selection and execution sequence of one or more AI models (220), the system (108) comprising:
an analysing unit (210) configured to analyse, a request received from a user to identify at least a type of task to be performed;
a generating unit (212) configured to generate, a list comprising the one or more AI models (220) to perform the task based on the analysis of the request;
a receiving unit (208) configured to receive, an input from the user corresponding to selection of one or more AI models (220) from the generated list and an execution sequence of the selected one or more AI models (220); and
a feedback unit (224) configured to provide, feedback corresponding to the selection of the one or more AI models (220) and the execution sequence of the one or more AI models (220) so as to modify the selection and the execution sequence of the one or more AI models (220).
11. The system (108) as claimed in claim 10, wherein the request comprises a dataset, and wherein one or more characteristics corresponding to the dataset are identified by the analysing unit (210) based on the analysis.
12. The system (108) as claimed in claim 10, wherein the type of task is identified based on the analysis of the one or more characteristics corresponding to the dataset of the request, and wherein the one or more characteristics of the dataset comprise size, dimensionality, and datatypes.
13. The system (108) as claimed in claim 10, wherein the type of the task is at least one of classification, regression, and clustering.
14. The system (108) as claimed in claim 10, wherein the list of the one or more AI models (220) comprises the one or more AI models (220), and wherein the generated list is transmitted to the UE (102).
15. The system (108) as claimed in claim 10, wherein on selection of the one or more AI models (220), the generating unit (212) is configured to generate, a visual representation of the sequence of one or more selected AI models (220) on the UE (102).
16. The system (108) as claimed in claim 10, wherein modifying the selection and the execution sequence of the one or more AI models (220) is performed by the receiving unit (208) which is configured to receive, a modification input based on the feedback.
17. The system (108) as claimed in claim 16, wherein the modification input corresponds to modification of one of the sequence and one or more parameters of each of the one or more selected AI models (220), and wherein the one or more parameters are at least a learning rate, a regularization strength, and a batch size of each of the one or more AI models (220).
18. The system (108) as claimed in claim 10, wherein a logging unit (226) is configured to store, logs related to performance metrics corresponding to training of the one or more selected AI models (220) utilizing the dataset, and wherein the performance metrics are at least one of an accuracy, a loss, and a convergence rate.
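The system claims above can likewise be sketched in outline. This is an illustrative assumption, not the claimed implementation: the `ModelConfig` parameters mirror the per-model parameters of claims 8 and 17 (learning rate, regularization strength, batch size), and the `TrainingLog` mirrors the logging unit (226) of claims 9 and 18; the training itself is stubbed out.

```python
# Illustrative sketch of claims 16-18: executing selected models in a
# user-specified sequence, with per-model parameters and a logging unit.
from dataclasses import dataclass, field

@dataclass
class ModelConfig:
    """Per-model parameters a modification input could adjust (claim 17)."""
    name: str
    learning_rate: float = 0.01
    regularization_strength: float = 0.001
    batch_size: int = 32

@dataclass
class TrainingLog:
    """Stores performance metrics per trained model (claim 18)."""
    entries: list = field(default_factory=list)

    def store(self, model, accuracy, loss, convergence_rate):
        self.entries.append({"model": model, "accuracy": accuracy,
                             "loss": loss, "convergence_rate": convergence_rate})

def execute_sequence(sequence, log):
    """Run the selected models in the user-specified order (training stubbed)."""
    for cfg in sequence:
        # A real system would train cfg on the dataset here and record
        # the actual metrics; placeholder values are logged instead.
        log.store(cfg.name, accuracy=0.0, loss=0.0, convergence_rate=0.0)

log = TrainingLog()
execute_sequence([ModelConfig("random_forest"),
                  ModelConfig("svm", batch_size=64)], log)
print([entry["model"] for entry in log.entries])  # ['random_forest', 'svm']
```

A modification input per claim 16 would then amount to reordering the `sequence` list or replacing a `ModelConfig` with adjusted parameters before re-execution.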
| # | Name | Date |
|---|---|---|
| 1 | 202321067390-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2023(online)].pdf | 2023-10-07 |
| 2 | 202321067390-PROVISIONAL SPECIFICATION [07-10-2023(online)].pdf | 2023-10-07 |
| 3 | 202321067390-FORM 1 [07-10-2023(online)].pdf | 2023-10-07 |
| 4 | 202321067390-FIGURE OF ABSTRACT [07-10-2023(online)].pdf | 2023-10-07 |
| 5 | 202321067390-DRAWINGS [07-10-2023(online)].pdf | 2023-10-07 |
| 6 | 202321067390-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2023(online)].pdf | 2023-10-07 |
| 7 | 202321067390-FORM-26 [27-11-2023(online)].pdf | 2023-11-27 |
| 8 | 202321067390-Proof of Right [12-02-2024(online)].pdf | 2024-02-12 |
| 9 | 202321067390-DRAWING [06-10-2024(online)].pdf | 2024-10-06 |
| 10 | 202321067390-COMPLETE SPECIFICATION [06-10-2024(online)].pdf | 2024-10-06 |
| 11 | Abstract.jpg | 2024-12-07 |
| 12 | 202321067390-Power of Attorney [24-01-2025(online)].pdf | 2025-01-24 |
| 13 | 202321067390-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf | 2025-01-24 |
| 14 | 202321067390-Covering Letter [24-01-2025(online)].pdf | 2025-01-24 |
| 15 | 202321067390-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf | 2025-01-24 |
| 16 | 202321067390-FORM 3 [27-01-2025(online)].pdf | 2025-01-27 |