Abstract: A method (300) for generating and executing action sequence codes is disclosed. The method (300) includes receiving (302), via a user interface (110), a user query (224); validating (304) the user query (224) based on a complexity criterion; upon successful validation, inputting (306) the user query (224) to a first AI model (220); generating (308), via the first AI model (220), a first response (226) to the user query (224); validating (310) the first response (226) based on a scoring criterion; upon successful validation, rendering (312) the first response (226) on the user interface (110); upon unsuccessful validation of the user query (224) or the first response (226), inputting (314) the user query (224) to a second AI model (222); generating (316), via the second AI model (222), a second response (228) to the user query (224); and rendering (318) the second response (228) on the user interface (110). [To be published with FIG. 2]
DESCRIPTION
Technical Field
[0001] This disclosure relates generally to Artificial Intelligence (AI), and more particularly to a method and system for generating and executing action sequence codes.
Background
[0002] Traditional Artificial Intelligence (AI) models are fast and efficient tools to provide curated responses to user queries. However, the curated responses may not always be aligned with the user requirements. On the other hand, Generative AI (GenAI) models are recent advancements over traditional AI models that can generate human-like responses to the user queries. However, while the GenAI models are suitable for generating (creative) responses, the GenAI models are not well suited to making decisions by themselves. Currently, most Information Technology (IT) infrastructures are configured with the traditional AI models. Replacing the traditional AI models in well-established IT infrastructures with the GenAI models is cost-intensive and, in some cases, impractical.
[0003] There is, therefore, a need in the present state of the art for techniques for efficient and cost-effective integration of GenAI models in traditional AI-driven IT infrastructures. The present invention is directed to overcoming one or more limitations stated above or any other limitations associated with the known arts.
SUMMARY
[0004] In one embodiment, a method for generating and executing action sequence codes is disclosed. In one example, the method may include receiving, via a user interface, a user query. It should be noted that the user query may include a user requirement for an action sequence. It should also be noted that the action sequence may include one or more sequentially linked tasks. The method may further include validating the user query based on a predefined complexity criterion. Upon successful validation of the user query, the method may further include inputting the user query to a first Artificial Intelligence (AI) model. It should be noted that the first AI model may be a non-generative AI model. The method may further include generating, via the first AI model, a first response to the user query. The method may further include validating the first response based on a scoring criterion. Upon successful validation of the first response, the method may further include rendering the first response on the user interface. Upon unsuccessful validation of one of the user query or the first response, the method may further include inputting the user query to a second AI model. It should be noted that the second AI model may be a generative AI model. The method may further include generating, via the second AI model, a second response to the user query. The method may further include rendering the second response on the user interface.
[0005] In another embodiment, a system for generating and executing action sequence codes is disclosed. In one example, the system may include a processor and a computer-readable medium communicatively coupled to the processor. The computer-readable medium may store processor-executable instructions, which, on execution, may cause the processor to receive, via a user interface, a user query. It should be noted that the user query may include a user requirement for an action sequence. It should also be noted that the action sequence may include one or more sequentially linked tasks. The processor-executable instructions, on execution, may further cause the processor to validate the user query based on a predefined complexity criterion. Upon successful validation of the user query, the processor-executable instructions, on execution, may further cause the processor to input the user query to a first Artificial Intelligence (AI) model. It should be noted that the first AI model may be a non-generative AI model. The processor-executable instructions, on execution, may further cause the processor to generate, via the first AI model, a first response to the user query. The processor-executable instructions, on execution, may further cause the processor to validate the first response based on a scoring criterion. Upon successful validation of the first response, the processor-executable instructions, on execution, may further cause the processor to render the first response on the user interface. Upon unsuccessful validation of one of the user query or the first response, the processor-executable instructions, on execution, may further cause the processor to input the user query to a second AI model. It should be noted that the second AI model may be a generative AI model. The processor-executable instructions, on execution, may further cause the processor to generate, via the second AI model, a second response to the user query. 
The processor-executable instructions, on execution, may further cause the processor to render the second response on the user interface.
[0006] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
[0008] FIG. 1 is a block diagram of an exemplary system for generating and executing action code sequence, in accordance with some embodiments of the present disclosure.
[0009] FIG. 2 illustrates a functional block diagram of various modules within a memory of a computing device configured for generating and executing action code sequence, in accordance with some embodiments of the present disclosure.
[0010] FIG. 3 illustrates a flow diagram of an exemplary process for generating and executing action code sequence, in accordance with some embodiments of the present disclosure.
[0011] FIG. 4 illustrates a flow diagram of an exemplary process for training each of the first AI model and the second AI model, in accordance with some embodiments of the present disclosure.
[0012] FIG. 5 illustrates a detailed exemplary process for generating and executing action code sequence, in accordance with some embodiments of the present disclosure.
[0013] FIG. 6 illustrates a detailed exemplary process for GenAI-based response generation, in accordance with some embodiments of the present disclosure.
[0014] FIG. 7 is a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
DETAILED DESCRIPTION
[0015] Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
[0016] Referring now to FIG. 1, an exemplary system 100 for generating and executing action code sequence is illustrated, in accordance with some embodiments of the present disclosure. The system 100 may include a computing device 102. The computing device 102 may be, for example, but is not limited to, a server, a desktop, a laptop, a notebook, a netbook, a tablet, a smartphone, a mobile phone, or any other computing device. The computing device 102 may generate and execute action sequence codes via a traditional AI model and a generative AI model.
[0017] As will be described in greater detail in conjunction with FIGS. 2 – 6, the computing device 102 may receive, via a user interface, a user query. It should be noted that the user query may include a user requirement for an action sequence. It should also be noted that the action sequence may include one or more sequentially linked tasks. The computing device 102 may further validate the user query based on a pre-defined complexity criterion. Upon successful validation of the user query, the computing device 102 may further input the user query to a first Artificial Intelligence (AI) model. It should also be noted that the first AI model may be a non-generative AI model. The computing device 102 may further generate, via the first AI model, a first response to the user query. The computing device 102 may further validate the first response based on a scoring criterion. Upon successful validation of the first response, the computing device 102 may further render the first response on the user interface. Upon unsuccessful validation of one of the user query or the first response, the computing device 102 may further input the user query to a second AI model. It should be noted that the second AI model may be a generative AI model. The computing device 102 may further generate, via the second AI model, a second response to the user query. The computing device 102 may further render the second response on the user interface.
[0018] In some embodiments, the computing device 102 may include one or more processors 104 and a memory 106. Further, the memory 106 may store instructions that, when executed by the one or more processors 104, may cause the one or more processors 104 to generate and execute action sequence codes, in accordance with aspects of the present disclosure. The memory 106 may also store various data (for example, a user query, a user feedback, a historical data, first AI model parameters, second AI model parameters, a first response, and a second response, and the like) that may be captured, processed, and/or required by the system 100. The memory 106 may be a non-volatile memory (e.g., flash memory, Read Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically EPROM (EEPROM) memory, etc.) or a volatile memory (e.g., Dynamic Random Access Memory (DRAM), Static Random-Access memory (SRAM), etc.).
[0019] The system 100 may further include a display 108. A user may interact with the system 100 via a user interface 110 accessible via the display 108. The system 100 may also include one or more external devices 112. In some embodiments, the computing device 102 may interact with the one or more external devices 112 over a communication network 114 for sending or receiving various data. The communication network 114 may include, for example, but is not limited to, a wireless fidelity (Wi-Fi) network, a light fidelity (Li-Fi) network, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a satellite network, the internet, a fiber optic network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, or a combination thereof. The one or more external devices 112 may include, but are not limited to, a user device, a remote server, a laptop, a netbook, a notebook, a smartphone, a mobile phone, a tablet, or any other computing device.
[0020] In an embodiment, the user interface 110 may be a web interface (cloud-based). In the web interface, the user may access the system 100 through the cloud-based web interface, providing flexibility and remote accessibility. Further, the web interface may be accessed from any location using a standard web browser to interact with the system features seamlessly. In another embodiment, the user interface 110 may be embedded in a web-based IDE. When embedded in a web-based IDE, the user interface 110 may enable a unified development environment. In another example, the user interface 110 may be deployed as an extension linked to standalone IDEs (such as VS Code). When deployed as the extension of a standalone IDE, the user interface 110 may enhance local development environments via seamlessly integrated system extensions.
[0021] Referring now to FIG. 2, a functional block diagram of a system 200 for generating and executing action sequence codes is illustrated, in accordance with some embodiments of the present disclosure. FIG. 2 is explained in conjunction with FIG. 1. The system 200 may be analogous to the system 100. The system 200 may implement the computing device 102. The system 200 may include, within a memory (such as the memory 106), a query receiving module 202, a query validation module 204, a first AI module 206, a second AI module 208, a response validation module 210, a code execution module 212, a rendering module 214, a training module 216, and a database 218. The first AI module 206 may include a first AI model 220. The first AI model 220 may be a non-generative AI model (i.e., a traditional AI model that is not a generative AI model). The second AI module 208 may include a second AI model 222. The second AI model 222 may be a generative AI model (such as a Large Language Model (LLM)).
[0022] The query receiving module 202 may receive a user query 224 via a user interface (such as the user interface 110). The user query 224 may include a user requirement for an action sequence. The action sequence may include one or more sequentially linked tasks. In other words, the action sequence may include a series of tasks to be executed in a given order. For example, an action sequence for stock analysis may include a task for retrieving the fundamental data of an input stock, a task for calculating a set of derived parameters based on the fundamental data, and a task for determining a buy recommendation for the input stock. Thus, an input of a task in an action sequence may be a user input or an output of a previous sequentially linked task in the action sequence. In an embodiment, the user query 224 may be provided in a natural language. Further, the query receiving module 202 may send the user query 224 to the query validation module 204. Additionally, the query receiving module 202 may store the user query 224 in the database 218. The database 218 may include historical user data. The historical user data may include contextual data corresponding to the user query 224.
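By way of a non-limiting illustration, the chaining of sequentially linked tasks described above may be sketched as follows, where each task consumes either the user input or the output of the previous task. All task names, values, and the decision rule below are hypothetical examples, not part of the claimed method:

```python
# Illustrative sketch of an action sequence: one or more sequentially
# linked tasks, where each task's input is the user input or the output
# of the previous sequentially linked task. All names are hypothetical.

def fetch_fundamentals(ticker):
    # Placeholder for a task that retrieves fundamental data of a stock.
    return {"ticker": ticker, "pe_ratio": 12.5, "eps": 4.2}

def derive_parameters(fundamentals):
    # Placeholder for a task that calculates derived parameters.
    return {**fundamentals, "earnings_yield": 1.0 / fundamentals["pe_ratio"]}

def recommend(parameters):
    # Placeholder for a task that determines a buy recommendation.
    return "BUY" if parameters["earnings_yield"] > 0.05 else "HOLD"

# The action sequence is an ordered list of tasks.
action_sequence = [fetch_fundamentals, derive_parameters, recommend]

def run_sequence(tasks, user_input):
    value = user_input
    outputs = []
    for task in tasks:
        value = task(value)   # the output of one task feeds the next
        outputs.append(value)
    return outputs            # one output per task, in order of generation

print(run_sequence(action_sequence, "ACME")[-1])  # -> BUY
```

This sketch mirrors the stock-analysis example above: retrieval, derivation, and recommendation execute in a fixed order, with each output forwarded to the next task.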
[0023] Further, the query validation module 204 may validate the user query 224 based on a pre-defined complexity criterion. In an embodiment, the pre-defined complexity criterion may be based on at least one of complexity, ambiguity, or the uniqueness of the user query 224. When the validation is successful, the query validation module 204 may transmit the user query 224 to the first AI module 206. When the validation is unsuccessful, the query validation module 204 may transmit the user query 224 to the second AI module 208. In other words, if the user query 224 fails to meet the pre-defined complexity criterion, the user query 224 is determined as too complex to be appropriately addressed by the first AI model 220. Thus, the user query 224 is transmitted to the second AI module 208.
[0024] In a scenario of successful validation of the user query 224, the first AI module 206 may input the user query 224 to the first AI model 220. The first AI module 206 may retrieve the contextual data from the historical user data stored in the database 218. Further, the first AI module 206 may generate, via the first AI model 220, a first response 226 to the user query 224 using the contextual data. Further, the first AI module 206 may send the first response 226 to the response validation module 210.
[0025] The response validation module 210 may validate the first response 226 based on a scoring criterion. Upon successful validation of the first response 226, the response validation module 210 may post-process the first response 226 through a data abstraction technique (for example, mermaidJS) based on the user requirement. The data abstraction technique may be an intermediary layer that may format the first response 226 (i.e., unprocessed data) into an intended outcome, which may include a variety of forms such as visual representations, diagrams, or programming code. Further, the response validation module 210 may send the first response 226 to the rendering module 214 and the code execution module 212. The rendering module 214 may render the post-processed first response 226 on the user interface. Upon unsuccessful validation of the first response 226, the response validation module 210 may transmit the user query 224 to the second AI module 208.
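The data abstraction layer mentioned above may, as one non-limiting example, format a validated response into a MermaidJS flowchart definition for rendering. The following sketch assumes a hypothetical response structure (an ordered list of task names); the actual response format is implementation-dependent:

```python
# Hedged sketch of a "data abstraction" post-processing step: an
# intermediary layer that formats an unprocessed response (here, an
# ordered list of task names) into a MermaidJS flowchart definition
# suitable for rendering on the user interface.

def to_mermaid(task_names):
    lines = ["graph LR"]
    # Declare one node per task in the action sequence.
    for i, name in enumerate(task_names):
        lines.append(f'    T{i}["{name}"]')
    # Link consecutive tasks to show the sequential order.
    for i in range(len(task_names) - 1):
        lines.append(f"    T{i} --> T{i + 1}")
    return "\n".join(lines)

raw_response = ["Retrieve fundamentals", "Derive parameters", "Recommend"]
print(to_mermaid(raw_response))
```

The same intermediary layer could instead emit programming code or another visual representation, depending on the user requirement.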
[0026] Thus, in a scenario of unsuccessful validation of one of the user query 224 or the first response 226, the second AI module 208 may input the user query 224 to the second AI model 222. The second AI module 208 may retrieve the contextual data from the historical user data stored in the database 218. Further, the second AI module 208 may generate, via the second AI model 222, a second response 228 to the user query 224 using the contextual data. Further, the second AI module 208 may send the second response 228 to the response validation module 210.
[0027] The response validation module 210 may validate the second response 228 based on the scoring criterion. Upon successful validation of the second response 228, the response validation module 210 may post-process the second response 228 through the data abstraction technique based on the user requirement (similar to the post-processing of the first response 226). Further, the response validation module 210 may send the second response 228 to the rendering module 214 and the code execution module 212. The rendering module 214 may render the post-processed second response 228 on the user interface. Upon unsuccessful validation of the second response 228, the response validation module 210 may notify an administrator of the failed response generation through a notification.
[0028] It may be noted that the first response 226 and the second response 228 may include an executable code for the action sequence provided in the user query 224. Upon receiving one of the first response 226 or the second response 228, the code execution module 212 may sequentially execute the executable code corresponding to each of the one or more sequentially linked tasks in the action sequence to obtain an output for each of the one or more sequentially linked tasks. Further, the code execution module 212 may send a combined output 230 to the rendering module 214. The combined output 230 may include the output for each of the one or more sequentially linked tasks. The rendering module 214 may then render the combined output 230 on the user interface in an order of generation.
[0029] Upon rendering the combined output 230, the query receiving module 202 may receive a user feedback 232 corresponding to the combined output 230 via the user interface. In an embodiment, the user feedback 232 may be in natural language. Further, the query receiving module 202 may send the user feedback 232 to the training module 216. The training module 216 may then iteratively train each of the first AI model 220 and the second AI model 222 based on the user feedback 232 through a reinforcement learning technique.
[0030] It should be noted that all such aforementioned modules 202 – 216 may be represented as a single module or a combination of different modules. Further, as will be appreciated by those skilled in the art, each of the modules 202 – 216 may reside, in whole or in parts, on one device or multiple devices in communication with each other. In some embodiments, each of the modules 202 – 216 may be implemented as dedicated hardware circuit comprising custom application-specific integrated circuit (ASIC) or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. Each of the modules 202 – 216 may also be implemented in a programmable hardware device such as a field programmable gate array (FPGA), programmable array logic, programmable logic device, and so forth. Alternatively, each of the modules 202 – 216 may be implemented in software for execution by various types of processors (e.g., processor 104). An identified module of executable code may, for instance, include one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, function, or other construct. Nevertheless, the executables of an identified module or component need not be physically located together but may include disparate instructions stored in different locations which, when joined logically together, include the module and achieve the stated purpose of the module. Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different applications, and across several memory devices.
[0031] As will be appreciated by one skilled in the art, a variety of processes may be employed for generating and executing action sequence codes. For example, the exemplary system 100 and the associated computing device 102 may generate and execute action code sequence by the processes discussed herein. In particular, as will be appreciated by those of ordinary skill in the art, control logic and/or automated routines for performing the techniques and steps described herein may be implemented by the system 100 and the associated computing device 102 either by hardware, software, or combinations of hardware and software. For example, suitable code may be accessed and executed by the one or more processors on the system 100 to perform some or all of the techniques described herein. Similarly, application specific integrated circuits (ASICs) configured to perform some, or all of the processes described herein may be included in the one or more processors on the system 100.
[0032] Referring now to FIG. 3, an exemplary process 300 for generating and executing an action code sequence is illustrated via a flowchart, in accordance with some embodiments of the present disclosure. FIG. 3 is explained in conjunction with FIGS. 1 and 2. The process 300 may be implemented by the computing device 102 of the system 100. The process 300 may include receiving, by a query receiving module (for example, the query receiving module 202), via a user interface (for example, the user interface 110), a user query (for example, the user query 224), at step 302. The user query may include a user requirement for an action sequence. The action sequence may include one or more sequentially linked tasks.
[0033] Further, at step 304 of the process 300, a check may be performed by a query validation module (for example, the query validation module 204) to validate the user query based on a pre-defined complexity criterion. When the user query is successfully validated (i.e., ‘successful’ path), the process 300 proceeds to step 306, and if the user query is unsuccessfully validated (i.e., ‘unsuccessful’ path) the process 300 may proceed to step 314. The pre-defined complexity criterion may be based on one or more of complexity, ambiguity, or uniqueness criteria. In an embodiment, a weighted score may be calculated corresponding to one or more of the complexity, ambiguity, or uniqueness of the user query. The weighted score may then be compared to a predefined threshold score to validate the user query based on the pre-defined complexity criterion. It should be noted that a successful validation may correspond to the user query successfully meeting the pre-defined complexity criterion (for example, when the weighted score of the user query is less than the predefined threshold or, in other words, when the user query is less complex). On the other hand, an unsuccessful validation may correspond to the user query failing to meet the pre-defined complexity criterion (for example, when the weighted score of the user query is more than the predefined threshold or, in other words, when the user query is more complex).
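The weighted-score validation of step 304 may be sketched as follows. The specific weights, sub-scores, and threshold below are hypothetical examples; in practice, the sub-scores would be produced by analyzing the user query:

```python
# Hedged sketch of the pre-defined complexity criterion: a weighted score
# over complexity, ambiguity, and uniqueness sub-scores, compared against
# a predefined threshold. Weights and threshold are hypothetical.

WEIGHTS = {"complexity": 0.5, "ambiguity": 0.3, "uniqueness": 0.2}
THRESHOLD = 0.6

def validate_query(sub_scores, weights=WEIGHTS, threshold=THRESHOLD):
    weighted = sum(weights[k] * sub_scores[k] for k in weights)
    # Successful validation: the weighted score is BELOW the threshold,
    # i.e., the query is simple enough for the first (non-generative)
    # AI model.
    return weighted < threshold

simple_query  = {"complexity": 0.2, "ambiguity": 0.3, "uniqueness": 0.1}
complex_query = {"complexity": 0.9, "ambiguity": 0.7, "uniqueness": 0.8}

print(validate_query(simple_query))   # True  -> route to first AI model
print(validate_query(complex_query))  # False -> route to second AI model
```

Note the direction of the comparison: a lower weighted score indicates a less complex query, which is what the first AI model is suited to handle.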
[0034] Thus, upon successful validation of the user query, the process 300 may include inputting, by a first AI module (for example, the first AI module 206), the user query to a first AI model (for example, the first AI model 220), at step 306. The first AI model may be a non-generative AI model (such as an Artificial Neural Network (ANN) model, a Convolutional Neural Network (CNN) model, a Recurrent Neural Network (RNN) model, and the like). Further, the process 300 may include generating, by the first AI module via the first AI model, a first response (for example, the first response 226) to the user query, at step 308. It may be noted that the first response may be based on contextual data for the user query retrieved by the first AI module from historical user data stored in a database (such as the database 218).
[0035] At step 310 of the process 300, a check may be performed by a response validation module (for example, the response validation module 210) to validate the first response based on a scoring criterion. When the first response is successfully validated (i.e., ‘successful’ path), the process 300 proceeds to step 312, and when the first response is unsuccessfully validated (i.e., ‘unsuccessful’ path), the process 300 proceeds to step 314. The scoring criterion may be based on one or more of context, relevance, or consistency criteria of a response. In an embodiment, a weighted score may be calculated corresponding to one or more of the context, relevance, or consistency of the first response. The weighted score may then be compared to a predefined threshold score to validate the first response based on the scoring criterion. It should be noted that a successful validation may correspond to the first response successfully meeting the scoring criterion (for example, when the weighted score of the first response is more than the predefined threshold or, in other words, when the first response satisfactorily addresses the user query). On the other hand, an unsuccessful validation may correspond to the first response failing to meet the scoring criterion (for example, when the weighted score of the first response is less than the predefined threshold or, in other words, when the first response fails to satisfactorily address the user query).
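The response-scoring check of step 310 may be sketched similarly. Again, the weights, sub-scores, and threshold are hypothetical examples only:

```python
# Hedged sketch of the scoring criterion for a generated response: a
# weighted score over context, relevance, and consistency sub-scores,
# compared against a predefined threshold. Weights and threshold are
# hypothetical.

WEIGHTS = {"context": 0.4, "relevance": 0.4, "consistency": 0.2}
THRESHOLD = 0.7

def validate_response(sub_scores, weights=WEIGHTS, threshold=THRESHOLD):
    weighted = sum(weights[k] * sub_scores[k] for k in weights)
    # Unlike query validation, a HIGHER weighted score indicates a better
    # response, so success means exceeding the threshold.
    return weighted > threshold

good_response = {"context": 0.9, "relevance": 0.8, "consistency": 0.9}
poor_response = {"context": 0.4, "relevance": 0.3, "consistency": 0.5}

print(validate_response(good_response))  # True  -> render first response
print(validate_response(poor_response))  # False -> fall back to second AI model
```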
[0036] Thus, upon successful validation of the first response, the process 300 may include rendering, by a rendering module (for example, the rendering module 214), the first response on the user interface, at step 312. In some embodiments, prior to rendering, the rendering module may post-process the first response through a data abstraction technique based on the user requirement.
[0037] Upon unsuccessful validation of the user query or the first response, the process 300 may include inputting by a second AI module (for example, the second AI module 208) the user query to a second AI model (for example, the second AI model 222), at step 314. The second AI model may be a generative AI model (for example, a Large Language Model (such as GPT-4 (by OpenAI®), LLaMA (by Meta®), Gemini (by Google® DeepMind®), Claude (by Anthropic®), Mistral, and the like), an image generation model (such as DALL·E (by OpenAI®)), an audio generation model (such as Whisper (by OpenAI®), VALL-E (by Microsoft®), MusicLM (by Google®)), or a Large Multimodal Model (LMM)). Further, the process 300 may include generating, by the second AI module via the second AI model, a second response (for example, the second response 228) to the user query, at step 316. It may be noted that the second response may be based on contextual data for the user query retrieved by the second AI module from historical user data stored in the database.
[0038] In some embodiments, the process 300 may include validating, by the response validation module, the second response generated via the second AI model based on the scoring criterion. Upon successful validation, the process 300 may include rendering, by the rendering module, the second response on the user interface, at step 318. In some embodiments, prior to rendering, the rendering module may post-process the second response through the data abstraction technique based on the user requirement. Upon unsuccessful validation, the process 300 may include notifying, by the rendering module, the administrator of the failed response generation.
[0039] Each of the first response and the second response may include an executable code for the action sequence provided in the user query. Upon generating one of the first response or the second response, the process 300 may include sequentially executing, by a code execution module (for example, the code execution module 212), the executable code corresponding to each of the one or more sequentially linked tasks in the action sequence to obtain the output for each of the one or more sequentially linked tasks, at step 320. Further, the process 300 may include rendering, by the rendering module, a combined output (for example, the combined output 230) on the user interface in the order of generation. The combined output may include the output for each of the one or more sequentially linked tasks.
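The sequential execution of step 320 may be sketched as follows. The code strings below are hypothetical stand-ins for the executable code carried in a response; a production system would sandbox such execution rather than run it directly:

```python
# Hedged sketch of the code execution module: sequentially executing the
# executable code for each task in the action sequence, feeding each
# task's output to the next, and collecting a combined output with one
# entry per task. Task code strings are hypothetical examples; real
# deployments would sandbox this execution.

task_codes = [
    "result = data * 2",   # task 1: input is the user input
    "result = data + 3",   # task 2: input is task 1's output
    "result = data ** 2",  # task 3: input is task 2's output
]

def execute_sequence(codes, user_input):
    scope = {"data": user_input}
    combined_output = []
    for code in codes:
        exec(code, {}, scope)            # run this task's executable code
        scope["data"] = scope["result"]  # forward output to the next task
        combined_output.append(scope["result"])
    return combined_output               # outputs in order of generation

print(execute_sequence(task_codes, 2))   # -> [4, 7, 49]
```

The combined output preserves the order of generation, matching the rendering behavior described above.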
[0040] Referring now to FIG. 4, an exemplary process 400 for training each of the first AI model and the second AI model is illustrated via a flowchart, in accordance with some embodiments of the present disclosure. FIG. 4 is explained in conjunction with FIGS. 1, 2, and 3. The process 400 may include receiving, by a query receiving module (for example, the query receiving module 202), a user feedback (for example, the user feedback 232) corresponding to a combined output (for example, the combined output 230), via a user interface (for example, the user interface 110), at step 402. The combined output may include the output for each of the one or more sequentially linked tasks provided in the user query. The user feedback may be in natural language. The process 400 may further include iteratively training, by a training module (for example, the training module 216), each of a first AI model (i.e., a non-generative AI model, for example, the first AI model 220) and a second AI model (i.e., a generative AI model, for example, the second AI model 222) based on the user feedback through a reinforcement learning technique, at step 404.
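One simplified way to ground such feedback-driven training may be sketched as follows: natural-language feedback is mapped to a scalar reward, and (query, output, reward) triples are buffered for the reinforcement learning update. The keyword heuristic and the buffer below are hypothetical simplifications; a real system could use a learned reward model:

```python
# Hedged sketch: converting natural-language user feedback into a scalar
# reward and accumulating (query, combined_output, reward) triples for
# iterative reinforcement-style training of both AI models. The keyword
# lists and reward values are hypothetical.

POSITIVE = {"good", "great", "correct", "helpful"}
NEGATIVE = {"wrong", "bad", "incorrect", "unhelpful"}

def feedback_to_reward(feedback_text):
    words = set(feedback_text.lower().split())
    if words & POSITIVE:
        return 1.0
    if words & NEGATIVE:
        return -1.0
    return 0.0  # neutral or uninformative feedback

training_buffer = []  # consumed by a (not shown) model-update step

def record_feedback(query, combined_output, feedback_text):
    reward = feedback_to_reward(feedback_text)
    training_buffer.append((query, combined_output, reward))
    return reward

print(record_feedback("analyze ACME", ["BUY"], "Great, that was helpful"))
```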
[0041] Referring now to FIG. 5, a detailed exemplary process 500 for generating and executing action code sequence is depicted via a flowchart, in accordance with some embodiments of the present disclosure. FIG. 5 is explained in conjunction with FIGS. 1, 2, 3, and 4. The process 500 may include user input reception by the query receiving module 202, at step 502. The user input may be analogous to the user query 224. In an embodiment, the user input may be provided through a chat window of a user interface (such as the user interface 110). The user input may initiate a discussion and may include requests for pertinent information or help. Further, the process 500 may include traditional AI processing of the user input by the first AI module 206, at step 504. The traditional AI processing may include generating a traditional AI response 506 to the user input through a conventional AI model (such as the first AI model 220). The conventional AI model may be built to manage regular, rule-based queries and interactions, and simple user queries that can be addressed by curated responses. The conventional AI model may act as the primary (or default) responder to such uncomplicated user queries, delivering quick and precise answers. The traditional AI response 506 may include an executable code for the action sequence provided in the user input. In an embodiment, the user input may be priorly evaluated based on a predefined complexity criterion to determine the complexity of the user input. If the user input is within a predefined range of complexity, then the user input may be sent for traditional AI processing.
[0042] Further, a check may be performed at step 508 of the process 500 by the response validation module 210 to validate the traditional AI response 506. When the conventional AI model delivers a satisfactory reply, a generative AI model (such as the second AI model 222) may not be engaged. Use of the conventional AI model may ensure efficiency and rapidity in addressing user inputs that pertain to regular exchanges and may be addressed by curated responses. In some embodiments, the traditional AI response may be validated based on a scoring criterion. The scoring criterion may include calculation of a confidence score to determine whether the response is satisfactory or not. When the response is satisfactory (i.e., ‘satisfactory result’ path), the process 500 may proceed to step 512. When the response is unsatisfactory (i.e., ‘unsatisfactory result’ path), the process 500 may proceed to step 510.
[0043] Thus, when the traditional AI response 506 fails the validation based on the scoring criterion or when the user input fails the validation based on the predefined complexity criterion, the process 500 may include GenAI response generation by the second AI module 208, at step 510. In other words, when the user input is not successfully addressed by the conventional AI model, the user input is sent to the generative AI model (such as the second AI model 222) for response generation. This is explained in greater detail in conjunction with FIG. 6.
[0044] Further, the process 500 may include showing, by the rendering module 214, the generated response (i.e., the traditional AI response 506 or the GenAI-generated response) on the chat window of the user interface, at step 512.
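By way of a non-limiting illustration, the routing of the process 500 (steps 502 through 512) may be sketched as follows. The threshold values and the complexity and confidence scoring functions are hypothetical placeholders; the disclosure specifies only a predefined complexity criterion and a confidence-score-based validation.

```python
COMPLEXITY_LIMIT = 0.5      # assumed predefined range of complexity
CONFIDENCE_THRESHOLD = 0.8  # assumed 'satisfactory result' cutoff


def handle_user_input(user_input, first_ai_model, second_ai_model,
                      complexity_of, confidence_of):
    """Route a query: traditional AI first (step 504), GenAI fallback (step 510)."""
    if complexity_of(user_input) <= COMPLEXITY_LIMIT:
        # Traditional AI processing yields the traditional AI response 506.
        response = first_ai_model.respond(user_input)
        if confidence_of(response) >= CONFIDENCE_THRESHOLD:  # step 508 check
            return response  # 'satisfactory result' path: rendered at step 512
    # Unsuccessful complexity or scoring validation: GenAI response generation.
    return second_ai_model.respond(user_input)
```

The returned response, whichever model produced it, would then be shown on the chat window of the user interface at step 512.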
[0045] Referring now to FIG. 6, a detailed exemplary process 600 for GenAI-based response generation is depicted via a flowchart, in accordance with some embodiments of the present disclosure. FIG. 6 is explained in conjunction with FIG. 5. The process 600 may include GenAI response generation corresponding to the user input 602 by the second AI module 208, at the step 510. It should be noted that the user input 602 may have failed validation based on the predefined complexity criterion. Alternatively, the traditional AI response 506 generated by the traditional AI processing of the user input 602 may have failed validation based on the scoring criterion. The step 510 may include steps 604, 606, 608, and 610. For the GenAI response generation, the process 600 may include advanced natural language processing of the user input 602 by the second AI module 208 through a GenAI model (such as the second AI model 222), at step 604. Through advanced natural language processing, the GenAI model may comprehend and produce replies in natural language that resemble a human-generated response. The GenAI model may surpass pre-established rules by using machine learning algorithms to comprehend the context, sentiment, and subtleties of the user input 602. When one of the sequentially linked tasks in the user input 602 includes document retrieval, re-rankers may be used to find relevant documents for generating the response through the GenAI model.
[0046] Further, the process 600 may include context retention and understanding by the second AI module 208, at step 606. The GenAI model is capable of retaining contextual information during the current user session. The GenAI model takes into account the entire conversation history, ensuring consistent and contextually appropriate answers to the user input 602. This may improve the overall fluidity of the session and may increase user involvement.
[0047] Further, the process 600 may include generating, by the second AI module 208, a code response, at step 608. It may be noted that the code response may be analogous to the second response 228. The code response may include an executable code for the action sequence provided in the user input 602. The GenAI model may produce a response that may be contextually comprehensive and customized to meet the user requirement after analysing the user input 602. The GenAI model may be integrated with external data sources to provide real-time and up-to-date information. As explained previously, the GenAI model may use the re-rankers to find the most relevant sources for generating the required result. Additionally, the GenAI model may be integrated with use case-specific (or domain-specific) modules and databases for retrieving more relevant information for response generation.
[0048] Further, a check may be performed by the response validation module 210 at step 610 of the process 600 to validate the code response. The validation may include assessing the pertinence and logical consistency of the GenAI-generated code response to ascertain its effectiveness in addressing the user input 602. In some embodiments, when the GenAI model fails to generate (or find) a satisfactory result, the response validation module 210 may route the user input 602 to a human agent for providing the required output.
[0049] Upon successful validation of the code response, the process 600 may include invoking the code response in the embedded environment by the code execution module 212, at step 612. The code response (i.e., the executable code) may be executed to obtain a combined output (such as the combined output 230). At step 614, a check may be performed to validate the combined output based on a user feedback. The user feedback may be indicative of the user satisfaction level with respect to the combined output. When the user feedback is satisfactory (i.e., ‘satisfied’ path), the process 600 may proceed to step 616. Alternatively, when the user feedback is unsatisfactory (i.e., ‘dissatisfied’ path), the process 600 may move back to the step 606. At step 616, the process 600 may include displaying, by the rendering module 214, the code and the combined output on the user interface. The selected answer (i.e., the combined output), generated either through the conventional AI model or through the GenAI model, may be incorporated into the current conversation (i.e., the user session). This may guarantee a seamless and organic progression while preserving the conversational context and user involvement.
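By way of a non-limiting illustration, the execute-and-validate loop of steps 612 through 616 may be sketched as follows. The helper names and the retry limit are assumptions introduced only for illustration.

```python
def execute_with_feedback(code_response, regenerate, get_feedback, max_retries=3):
    """Run the code response; regenerate from retained context when dissatisfied.

    code_response: object exposing execute() (the validated executable code)
    regenerate:    callable returning a new code response (back to step 606)
    get_feedback:  callable mapping a combined output to 'satisfied'/'dissatisfied'
    """
    combined_output = None
    for _ in range(max_retries):
        combined_output = code_response.execute()          # step 612
        if get_feedback(combined_output) == "satisfied":   # step 614 check
            return combined_output                         # step 616: display
        code_response = regenerate()                       # 'dissatisfied' path
    # Fall back to the last output; an implementation might escalate to a human agent.
    return combined_output
```

The retry bound prevents an unbounded loop; the disclosure itself simply loops back to step 606 on dissatisfied feedback.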
[0050] The GenAI model may engage with users, constantly assimilating knowledge from the inputs and comments (i.e., the user feedback) through reinforcement learning, and therefore, enhancing the conventional AI model and the GenAI model. The iterative learning process may provide an enhanced capacity to effectively manage a diverse array of customer inquiries.
[0051] As will be also appreciated, the above-described techniques may take the form of computer or controller implemented processes and apparatuses for practicing those processes. The disclosure can also be embodied in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, solid state drives, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer or controller, the computer becomes an apparatus for practicing the invention. The disclosure may also be embodied in the form of computer program code or signal, for example, whether stored in a storage medium, loaded into and/or executed by a computer or controller, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
[0052] The disclosed methods and systems may be implemented on a conventional or a general-purpose computer system, such as a personal computer (PC) or server computer. Referring now to FIG. 7, an exemplary computing system 700 that may be employed to implement processing functionality for various embodiments (e.g., as a SIMD device, client device, server device, one or more processors, or the like) is illustrated. Those skilled in the relevant art will also recognize how to implement the invention using other computer systems or architectures. The computing system 700 may represent, for example, a user device such as a desktop, a laptop, a mobile phone, personal entertainment device, DVR, and so on, or any other type of special or general-purpose computing device as may be desirable or appropriate for a given application or environment. The computing system 700 may include one or more processors, such as a processor 702 that may be implemented using a general or special purpose processing engine such as, for example, a microprocessor, microcontroller or other control logic. In this example, the processor 702 is connected to a bus 704 or other communication medium. In some embodiments, the processor 702 may be an Artificial Intelligence (AI) processor, which may be implemented as a Tensor Processing Unit (TPU), a Graphics Processing Unit (GPU), or a custom programmable solution, such as a Field-Programmable Gate Array (FPGA).
[0053] The computing system 700 may also include a memory 706 (main memory), for example, Random Access Memory (RAM) or other dynamic memory, for storing information and instructions to be executed by the processor 702. The memory 706 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor 702. The computing system 700 may likewise include a read only memory (“ROM”) or other static storage device coupled to bus 704 for storing static information and instructions for the processor 702.
[0054] The computing system 700 may also include storage devices 708, which may include, for example, a media drive 710 and a removable storage interface. The media drive 710 may include a drive or other mechanism to support fixed or removable storage media, such as a hard disk drive, a floppy disk drive, a magnetic tape drive, an SD card port, a USB port, a micro USB, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive. A storage media 712 may include, for example, a hard disk, magnetic tape, flash drive, or other fixed or removable medium that is read by and written to by the media drive 710. As these examples illustrate, the storage media 712 may include a computer-readable storage medium having stored therein particular computer software or data.
[0055] In alternative embodiments, the storage devices 708 may include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into the computing system 700. Such instrumentalities may include, for example, a removable storage unit 714 and a storage unit interface 716, such as a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, and other removable storage units and interfaces that allow software and data to be transferred from the removable storage unit 714 to the computing system 700.
[0056] The computing system 700 may also include a communications interface 718. The communications interface 718 may be used to allow software and data to be transferred between the computing system 700 and external devices. Examples of the communications interface 718 may include a network interface (such as an Ethernet or other NIC card), a communications port (such as, for example, a USB port, a micro USB port), Near Field Communication (NFC), etc. Software and data transferred via the communications interface 718 are in the form of signals which may be electronic, electromagnetic, optical, or other signals capable of being received by the communications interface 718. These signals are provided to the communications interface 718 via a channel 720. The channel 720 may carry signals and may be implemented using a wireless medium, wire or cable, fiber optics, or other communications medium. Some examples of the channel 720 may include a phone line, a cellular phone link, an RF link, a Bluetooth link, a network interface, a local or wide area network, and other communications channels.
[0057] The computing system 700 may further include Input/Output (I/O) devices 722. Examples may include, but are not limited to a display, keypad, microphone, audio speakers, vibrating motor, LED lights, etc. The I/O devices 722 may receive input from a user and also display an output of the computation performed by the processor 702. In this document, the terms “computer program product” and “computer-readable medium” may be used generally to refer to media such as, for example, the memory 706, the storage devices 708, the removable storage unit 714, or signal(s) on the channel 720. These and other forms of computer-readable media may be involved in providing one or more sequences of one or more instructions to the processor 702 for execution. Such instructions, generally referred to as “computer program code” (which may be grouped in the form of computer programs or other groupings), when executed, enable the computing system 700 to perform features or functions of embodiments of the present invention.
[0058] In an embodiment where the elements are implemented using software, the software may be stored in a computer-readable medium and loaded into the computing system 700 using, for example, the removable storage unit 714, the media drive 710 or the communications interface 718. The control logic (in this example, software instructions or computer program code), when executed by the processor 702, causes the processor 702 to perform the functions of the invention as described herein.
[0059] Various embodiments provide method and system for generating and executing action sequence codes. The disclosed method and system may receive, via a user interface, a user query. The user query may include a user requirement for an action sequence. Further, the action sequence may include one or more sequentially linked tasks. Further, the disclosed method and system may validate the user query based on a pre-defined complexity criterion. Upon successful validation of the user query, the disclosed method and system may further input the user query to a first Artificial Intelligence (AI) model. The first AI model may be a non-generative AI model. Further, the disclosed method and system may generate, via the first AI model, a first response to the user query. Further, the disclosed method and system may validate the first response based on a scoring criterion. Upon successful validation of the first response, the disclosed method and system may further render the first response on the user interface. Upon unsuccessful validation of one of the user query or the first response, the disclosed method and system may further input the user query to a second AI model. The second AI model may be a generative AI model. Further, the disclosed method and system may generate, via the second AI model, a second response to the user query. Moreover, the disclosed method and system may render the second response on the user interface.
[0060] Thus, the disclosed method and system try to overcome the technical problem of generating and executing action sequence codes. The method and system may use a GenAI module that is an advanced platform which enables users to create customized and interactive conversational experiences. Further, the method and system may allow users to carefully customize a sequence of chains, enabling them to control the type and direction of conversations in a chat window. Further, the customization flexibility of the GenAI module is one of its main features. The user may possess the capability to precisely adjust various aspects of the chat window's responses to correspond with their requirements and preferences. Further, the method and system may allow users to customize the length of replies, allowing them to regulate the extent and intricacy of the information offered. Further, the method and system may enable users to choose the length of the produced material, whether they prefer brief and direct interactions or comprehensive, extensive replies. With this implementation, the method and system may give users a smooth and easy-to-use no-code/low-code experience.
[0061] The method and system may allow the Natural Language Processing (NLP) execution layer to function in an environment and switch between customizable chains dynamically as needed. With the help of these features, users may easily alter the system's behaviour without extensive coding knowledge. The method and system may stand out due to the extensive range of capabilities. Further, the method and system may use a no-code approach to empower users to generate customized outputs, all inside the chat window. The method and system may be helpful for users irrespective of their level of expertise in coding, allowing them to easily customize the system to meet their needs and to adjust the chatbot's replies and functionality to correspond with specific use cases or preferences.
[0062] Further, the method and system may facilitate the widespread access and utilization of Natural Language Processing (NLP) capabilities, thus enabling a wider range of users to effectively employ the capabilities for various purposes, such as content generation (casual conversations, educational content, creative writing, coding help, language translation, etc.) and task automation (customer support chatbots, idea generation, automated social media engagement, etc.).
[0063] The method and system may use a flexible system that can easily adjust to the nuances of real language while keeping the accuracy necessary for producing precise visualizations, diagrams, or code snippets by fusing fuzzy logic transformation with powerful language processing. This method and system may not only improve the GenAI model's overall performance but also mark an advancement in intelligent and flexible information processing.
[0064] Utilizing the key features of the latest AI models, the method and system may be exceptional at producing customized outputs in response to user commands. When the system gets a prompt from a user, the GenAI model and the conventional AI model may first exchange the contextual data in the prompt. The method and system may ensure a comprehensive exploration of available knowledge and resources.
[0065] The method and system may include a strategic decision-making process to maximize efficiency. The method and system may determine if a standard/traditional AI model can sufficiently answer the user's question and provide the intended result before starting the GenAI module. If the traditional AI module can provide the requested information, the method and system may automatically switch to traditional AI capabilities, streamlining the response generation.
[0066] When the traditional AI module is deemed inadequate, the method and system may dynamically reroute the user query via the GenAI module, acknowledging the inherent capabilities of the GenAI module in handling challenging queries. In addition to maximizing the use of already-existing knowledge bases, the dynamic combination of the classical AI model and the GenAI model ensures that the method and system may adjust to the subtleties and complexity of user queries, providing a thorough and efficient response mechanism. The method and system may perfectly balance the capabilities of both conventional and advanced AI models.
[0067] The GenAI module may offer users a full platform to effortlessly construct and personalize task-oriented sequences for the AI models. The method and system may enable users to create complex sequences of actions customized according to the needs of the user. The main feature of the method and system is the inclusion of queries into a sequence, each accompanied by a range of adjustable choices, allowing users to coordinate exact activities for the artificial intelligence module.
[0068] The method and system may manage the execution of the chains. Users may have the ability to provide search variants designed to start certain chains when a chain is triggered. Further, the method and system may also include counter variants, which serve as barriers to stop unwanted chains from starting. The method and system may enable users to easily include tasks into each chain. In this sense, tasks are instructions that specify what has to be done.
[0069] Through OTB (Out of the Box) connections, the user may choose a predefined action code, which expedites the procedure. Moreover, users may be free to choose from pre-existing action codes or create unique ones for every activity, which further increases the orchestration process's flexibility and customization. With the strong structure, chain orchestration may be guaranteed to be dynamic and user-focused, offering a complete solution for a range of operational requirements. The method and system may have the capacity to effortlessly obtain Python packages and autonomously run the complete solution on a laptop.
[0070] In addition, the method and system may provide the flexibility to establish connections with other integrations, enabling the generation of tickets and the execution of customizable healing scripts to solve user-specific issues. The built-in functionality of the system may enable users to easily retrieve Python packages, guaranteeing the solution's self-sufficiency for execution on personal laptops. Moreover, the method and system may integrate capabilities that enable the system to interact with external systems, granting users the ability to generate requests and execute personalized healing scripts with ease. The integration of automation and adaptability may enhance the efficiency and effectiveness of problem resolution for users.
[0071] The method and system may include an advanced transformation module that is designed to enable smooth data retrieval by generating SQL queries and executing code that can be modified. The transformation module may demonstrate proficiency in creating SQL queries that are customized to meet particular needs, and may then generate flexible code segments to execute these queries, guaranteeing the effective retrieval and distribution of the intended outcomes. The transformation module may be a crucial component of the system, since it abstracts and automates the complexities of building SQL queries. Through the use of the transformation module, users may establish the criteria for retrieving data either via a user-friendly interface or programmatically, thus enabling a highly adaptable and intuitive experience.
[0072] The transformation module may create SQL queries that contain the required data parameters based on the specified retrieval criteria. In addition, the module may take an additional step by automatically generating code snippets that may execute these queries, thereby simplifying the process of retrieving and delivering the necessary results.
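By way of a non-limiting illustration, the query generation of the transformation module may be sketched as follows. The criteria schema and the emitted snippet format are hypothetical assumptions; a production implementation would additionally use parameterized execution against a real database driver to guard against SQL injection.

```python
def build_sql_query(table: str, columns: list, filters: dict) -> str:
    """Compose a SELECT statement from user-specified retrieval criteria."""
    # Named placeholders (:col) keep the query safe for parameterized execution.
    where = " AND ".join(f"{col} = :{col}" for col in filters)
    query = f"SELECT {', '.join(columns)} FROM {table}"
    return f"{query} WHERE {where}" if filters else query


def generate_execution_snippet(query: str) -> str:
    """Emit a code snippet that executes the generated query (assumed DB-API style)."""
    return f"cursor.execute({query!r}, params)\nrows = cursor.fetchall()"
```

For example, criteria naming a `balance_sheet` table (a hypothetical dataset from paragraph [0084]) would yield both the SQL statement and a snippet that retrieves and delivers the results.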
[0073] This method and system may improve the effectiveness of retrieving data and equip users with the adaptability to effortlessly incorporate the produced code into the current systems or applications. The system is adaptable, allowing the system to be used in various situations and handle various data retrieval circumstances. Further, the method and system may be a flexible and powerful tool for optimizing data access and manipulation.
[0074] Further, the method and system may be overseen and strengthened using a two-tiered strategy that incorporates a Large Language Model (LLM) and the cloud service being used. The LLM may act as a vigilant model, implementing rigorous security protocols throughout the code creation. Further, the system may not only detect and correct any weaknesses but also include the most effective security methods, enhancing the overall durability of the produced code. The selected cloud service, with a strong security architecture, simultaneously plays a crucial role in protecting the installed apps. The method and system may include different security measures, from network security and encryption technologies to identity and access management. The cloud service, supported by top-notch security standards, acts as an extra stronghold, guaranteeing that the code functions in a safe and compliant setting.
[0075] When writing instructions (prompts) and creating chains and tasks within the GenAI model, the user may participate in an iterative process. During the iterative process, users may continually improve the instructions, evaluating whether the produced output matches the expected results for the given queries. The user may customize and optimize the instructions inside the model interface to get the intended outcomes. The GenAI model may adjust the prompts to guarantee that the predefined sequences are activated across different scenarios.
[0076] The GenAI model may ensure that the process of linking is carried out without any errors, in accordance with the user's precise criteria for completing the desired activity. With the systematic approach, the user may actively engage in the process of generating content, giving them exact authority over the model's replies. To efficiently optimize the interaction with the models, the user may refine instructions and evaluate the execution of chains to satisfy the user's queries.
[0077] Further, the method and system may use model customization. In model customization, an essential aspect of a low/no-code setup may be the capacity to modify and personalize models without the need for intricate coding. Users may easily choose and optimize pre-existing models to align with particular use cases.
[0078] Further, the method and system provide a plug-and-play integration of GenAI models within traditional AI-driven IT infrastructures. The low/no code environment may enable a seamless and effortless method for integrating models using a plug-and-play approach. The users may easily integrate pre-trained models into the processes, therefore lowering the obstacles for those without considerable coding proficiency. Thus, the method and system may enable rapid testing with many models to identify the optimal match for the user queries.
[0079] The GenAI model may operate on the premise of model chaining, which is the core concept that ensures smooth task execution. The architectural technique may entail the progressive linkage of many models, each specifically tailored to carry out separate functions. Model chaining may be characterized by the continuous transfer of information, where the result of one job may be used as the input for the next tasks in the sequence. Further, the model may guarantee a thorough and logical handling of user questions or requests.
[0080] The GenAI model may act as a mediator, enabling users to create complex sequences of activities customized to their requirements. The commencement of this chain may be triggered by a user-defined input or query, which initiates the process. The first step in the sequence, often entailing a specialized model, analyses the initial input and produces an output. Importantly, the result may not be only a conclusive answer but also functions as an active input for future tasks in the sequence.
[0081] With each step in the chain, every task may add a level of processing, improvement, or alteration to the output of the previous query. The systematic transmission of information across multiple models may ensure a sophisticated and thoroughly improved understanding of the user query. After all assigned tasks are completed in the given order, the final result may be a comprehensive and organized output, displayed in the chat window of an interface (i.e., the user interface).
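By way of a non-limiting illustration, the model chaining described above may be sketched as follows, where each task's output becomes the next task's input. The `run_chain` helper and the task callables are hypothetical abstractions, not part of the disclosed embodiments.

```python
from typing import Any, Callable, List


def run_chain(initial_input: Any, tasks: List[Callable[[Any], Any]]) -> Any:
    """Pass each task's result forward as the next task's input."""
    result = initial_input
    for task in tasks:
        # Each model in the chain adds processing, improvement, or alteration.
        result = task(result)
    return result


# Usage with hypothetical stages, each backed by a specialized model:
# final_output = run_chain(user_query, [analyse_intent, retrieve_context, generate_answer])
```

The final result of the chain would then be rendered as the comprehensive and organized output in the chat window.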
[0082] The GenAI model may utilize model chaining to leverage the unique strengths of several models, with each model contributing specialized skills to the overall processing pipeline. The architectural paradigm may not only improve the system's capacity to adapt to different user circumstances but also enable a modular and extensible approach to integrate new functions. The ultimate objective may be to provide users with a system that effortlessly incorporates many models, handles intricate information, and furnishes nuanced and contextually appropriate replies within the interactive chat setting.
[0083] The AI modules may have the ability to meet a wide range of analytical requirements, as shown by the complex combination of jobs/tasks, including SQL query development, data visualization, and predictive modelling. The platform may enable users to customize task sequences and output formats, allowing them to extract valuable insights, make educated choices, and fully explore the possibilities of AI-driven data analysis in their unique fields. Within the GenAI studio interface, users can store "chains" for later use. The chains may be activated by users, prompting the model to generate replies. The chat window may display the results of the exchanges, allowing users to obtain the model's responses to their specific inquiries. Further, the feature may enable users to systematically save, activate, and analyse pre-established conversation sequences, thereby enhancing the interactive and iterative nature of the interaction with the current models. Therefore, the user may see the changes taking place in the chat window when the system makes changes in the GenAI module.
[0084] The AI model aims to derive significant insights and context from the documentation, allowing the model to provide replies that specifically answer the user's requests. The GenAI model may enable a smooth and user-friendly experience, enabling users to use the capabilities of AI to improve the apps. The system may facilitate efficient utilization of AI capabilities by offering a well-organized setting for creating tasks, referencing documents, and generating responses. The system is particularly beneficial in situations where achieving desired outcomes depends on nuanced and specialized details. Within the GenAI model, users may create task-oriented sequences to get precise analyses or summaries from organized datasets, such as balance sheets, scorecards, or comprehensive logs. The user may use the functionality to leverage the potential of AI for executing intricate tasks such as producing SQL queries, visualizing data, and making predictions.
[0085] In light of the above-mentioned advantages and the technical advancements provided by the disclosed method and system, the claimed steps as discussed above are not routine, conventional, or well understood in the art, as the claimed steps enable the following solutions to the existing problems in conventional technologies. Further, the claimed steps clearly bring an improvement in the functioning of the device itself as the claimed steps provide a technical solution to a technical problem.
[0086] The specification has described method and system for generating and executing action sequence codes. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0087] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[0088] It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.

CLAIMS
I/We Claim:
1. A method (300) for generating and executing action sequence codes, the method (300) comprising:
receiving (302), by a processor (104), via a user interface (110), a user query (224), wherein the user query (224) comprises a user requirement for an action sequence, and wherein the action sequence comprises one or more sequentially linked tasks;
validating (304), by the processor (104), the user query (224) based on a predefined complexity criterion;
upon successful validation of the user query (224),
inputting (306), by the processor (104), the user query (224) to a first Artificial Intelligence (AI) model (220), wherein the first AI model (220) is a non-generative AI model;
generating (308), by the processor (104) via the first AI model (220), a first response (226) to the user query (224); and
validating (310), by the processor (104), the first response (226) based on a scoring criterion;
upon successful validation of the first response (226), rendering (312), by the processor (104), the first response (226) on the user interface (110);
upon unsuccessful validation of one of the user query (224) or the first response (226),
inputting (314), by the processor (104), the user query (224) to a second AI model (222), wherein the second AI model (222) is a generative AI model;
generating (316), by the processor (104) via the second AI model (222), a second response (228) to the user query (224); and
rendering (318), by the processor (104), the second response (228) on the user interface (110).
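The two-tier routing recited in claim 1 may be sketched as follows. This is a minimal illustration under stated assumptions: the complexity criterion is modeled as a task-count limit, the scoring criterion as a confidence threshold, and both model calls are hypothetical stand-ins rather than the disclosed AI models.

```python
# Hedged sketch of claim 1: route a query to a non-generative model
# first, and fall back to a generative model on failed validation.
# COMPLEXITY_LIMIT, SCORE_THRESHOLD, and both model stubs are assumptions.

COMPLEXITY_LIMIT = 3   # assumed complexity criterion: max linked tasks
SCORE_THRESHOLD = 0.8  # assumed scoring criterion: min response score

def validate_complexity(query: dict) -> bool:
    """Step 304: predefined complexity criterion, sketched as a task count."""
    return len(query["tasks"]) <= COMPLEXITY_LIMIT

def first_ai_model(query: dict):
    """First AI model (220), non-generative: curated response plus a score."""
    templates = {"summarize": "SELECT * FROM balance_sheet;"}
    code = templates.get(query["tasks"][0])
    return (code, 0.9) if code else ("", 0.0)

def second_ai_model(query: dict) -> str:
    """Second AI model (222), generative: stand-in for a GenAI call."""
    return "-- generated code for tasks: " + ", ".join(query["tasks"])

def handle_query(query: dict) -> str:
    # Steps 304-310: try the non-generative model for simple queries.
    if validate_complexity(query):
        response, score = first_ai_model(query)
        if score >= SCORE_THRESHOLD:
            return response           # step 312: render first response
    # Steps 314-318: unsuccessful validation falls back to the GenAI model.
    return second_ai_model(query)     # step 318: render second response

print(handle_query({"tasks": ["summarize"]}))
print(handle_query({"tasks": ["a", "b", "c", "d"]}))
```

The design keeps the fast, curated path as the default and spends generative-model capacity only on queries the first tier cannot validate.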
2. The method (300) as claimed in claim 1, wherein each of the first response (226) and the second response (228) comprises an executable code for the action sequence.
3. The method (300) as claimed in claim 2, comprising:
sequentially executing (320) the executable code corresponding to each of the one or more sequentially linked tasks in the action sequence to obtain an output for each of the one or more sequentially linked tasks; and
rendering (322) a combined output (230) on the user interface (110) in an order of generation, wherein the combined output (230) comprises the output for each of the one or more sequentially linked tasks.
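Claims 2-3 may be illustrated as follows: each response carries executable code for the linked tasks, the codes are executed in sequence, and the per-task outputs are combined in order of generation. The code snippets and the shared-scope convention are illustrative assumptions; a production system would sandbox any such execution.

```python
# Hedged sketch of claims 2-3: execute each task's code in order and
# collect a combined output. The task codes below are hypothetical.

task_codes = [
    "output = 2 + 3",        # task 1: executable code producing an output
    "output = output * 10",  # task 2: reuses the prior task's output
]

def execute_sequence(codes):
    scope, combined = {}, []
    for code in codes:
        # Step 320: sequentially execute each task's code. Note: exec on
        # untrusted code is unsafe; a real system would sandbox this.
        exec(code, {}, scope)
        combined.append(scope["output"])
    return combined  # step 322: combined output, in order of generation

print(execute_sequence(task_codes))  # [5, 50]
```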
4. The method (300) as claimed in claim 3, comprising:
receiving (402), via the user interface (110), a user feedback (232) corresponding to the combined output (230), wherein the user feedback (232) is in natural language; and
iteratively training (404) each of the first AI model (220) and the second AI model (222) based on the user feedback (232) through a reinforcement learning technique.
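The feedback loop of claim 4 may be sketched by mapping natural-language feedback to a scalar reward and applying a simple incremental update. The keyword heuristic, the `ModelStub` class, and the learning rate are all illustrative assumptions, not the disclosed reinforcement learning technique.

```python
# Hedged sketch of claim 4: natural-language user feedback (232) mapped
# to a reward that drives an incremental model update. All names and
# the keyword heuristic are hypothetical.

POSITIVE = {"good", "correct", "helpful"}
NEGATIVE = {"wrong", "bad", "incorrect"}

def feedback_to_reward(feedback: str) -> int:
    """Assumed heuristic: keyword overlap decides the reward sign."""
    words = set(feedback.lower().split())
    if words & POSITIVE:
        return 1
    if words & NEGATIVE:
        return -1
    return 0

class ModelStub:
    """Stand-in for either AI model; tracks a preference weight per task."""
    def __init__(self):
        self.weights = {}

    def update(self, task: str, reward: int, lr: float = 0.1):
        # Step 404 (sketched): reward-weighted incremental adjustment.
        self.weights[task] = self.weights.get(task, 0.0) + lr * reward

model = ModelStub()
model.update("summarize", feedback_to_reward("the summary was correct"))
print(model.weights["summarize"])  # 0.1
```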
5. The method (300) as claimed in claim 1, comprising:
validating the second response (228) generated via the second AI model (222) based on the scoring criterion;
upon successful validation, rendering the second response (228) on the user interface (110); and
upon unsuccessful validation, notifying an administrator of a failed response generation.
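Claim 5 closes the loop on the generative path: the second response is validated against the same scoring criterion, and a failure escalates to an administrator. The sketch below uses a hypothetical in-memory notification list in place of a real alerting channel.

```python
# Hedged sketch of claim 5: validate the second response (228) and
# notify an administrator on failure. The notifier is a stand-in.

notifications = []

def validate_and_render(second_response, score, threshold=0.8):
    if score >= threshold:
        return second_response  # successful validation: render on the UI
    # Unsuccessful validation: notify an administrator (assumed channel).
    notifications.append("failed response generation")
    return None

assert validate_and_render("generated code", 0.9) == "generated code"
validate_and_render("generated code", 0.1)
print(len(notifications))  # one failure recorded
```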
6. The method (300) as claimed in claim 1, comprising prior to rendering, post-processing one of the first response (226) or the second response (228) through a data abstraction technique based on the user requirement.
7. The method (300) as claimed in claim 1, comprising:
storing historical user data in a database (218), wherein the historical user data comprises contextual data corresponding to the user query (224); and
prior to generating each of the first response (226) and the second response (228), retrieving the contextual data from the historical user data.
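The context retrieval of claim 7 may be sketched with an in-memory dictionary standing in for the database (218). The keying scheme, record fields, and keyword-overlap retrieval below are illustrative assumptions, not the disclosed storage or retrieval technique.

```python
# Hedged sketch of claim 7: historical user data held in a database
# stand-in, with contextual entries retrieved before generation.

history_db = {
    "user_1": [
        {"query": "summarize balance sheet", "dataset": "balance_sheet_q1"},
        {"query": "plot revenue", "dataset": "revenue_2024"},
    ],
}

def retrieve_context(user_id, query, limit=2):
    """Return prior entries sharing a keyword with the current query."""
    words = set(query.lower().split())
    entries = history_db.get(user_id, [])
    return [e for e in entries if words & set(e["query"].split())][:limit]

context = retrieve_context("user_1", "summarize the new balance sheet")
print(context)
```

Feeding such contextual data to either model before generation grounds the response in the user's prior interactions rather than the current query alone.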
8. A system (100) for generating and executing action sequence codes, the system (100) comprising:
a processor (104); and
a memory (106) communicatively coupled to the processor (104), wherein the memory (106) stores processor instructions, which when executed by the processor (104), cause the processor (104) to:
receive (302), via a user interface (110), a user query (224), wherein the user query (224) comprises a user requirement for an action sequence, and wherein the action sequence comprises one or more sequentially linked tasks;
validate (304) the user query (224) based on a predefined complexity criterion;
upon successful validation of the user query (224),
input (306) the user query (224) to a first Artificial Intelligence (AI) model (220), wherein the first AI model (220) is a non-generative AI model;
generate (308), via the first AI model (220), a first response (226) to the user query (224); and
validate (310) the first response (226) based on a scoring criterion;
upon successful validation of the first response (226), render (312) the first response (226) on the user interface (110);
upon unsuccessful validation of one of the user query (224) or the first response (226),
input (314) the user query (224) to a second AI model (222), wherein the second AI model (222) is a generative AI model;
generate (316), via the second AI model (222), a second response (228) to the user query (224); and
render (318) the second response (228) on the user interface (110).
9. The system (100) as claimed in claim 8, wherein each of the first response (226) and the second response (228) comprises an executable code for the action sequence.
10. The system (100) as claimed in claim 9, wherein the processor instructions, on execution, cause the processor (104) to:
sequentially execute (320) the executable code corresponding to each of the one or more sequentially linked tasks in the action sequence to obtain an output for each of the one or more sequentially linked tasks; and
render (322) a combined output (230) on the user interface (110) in an order of generation, wherein the combined output (230) comprises the output for each of the one or more sequentially linked tasks.
11. The system (100) as claimed in claim 10, wherein the processor instructions, on execution, cause the processor (104) to:
receive (402), via the user interface (110), a user feedback (232) corresponding to the combined output (230), wherein the user feedback (232) is in natural language; and
iteratively train (404) each of the first AI model (220) and the second AI model (222) based on the user feedback (232) through a reinforcement learning technique.
12. The system (100) as claimed in claim 8, wherein the processor instructions, on execution, cause the processor (104) to:
validate the second response (228) generated via the second AI model (222) based on the scoring criterion;
upon successful validation, render the second response (228) on the user interface (110); and
upon unsuccessful validation, notify an administrator of a failed response generation.
13. The system (100) as claimed in claim 8, wherein the processor instructions, on execution, cause the processor (104) to: prior to rendering, post-process one of the first response (226) or the second response (228) through a data abstraction technique based on the user requirement.
14. The system (100) as claimed in claim 8, wherein the processor instructions, on execution, cause the processor (104) to:
store historical user data in a database (218), wherein the historical user data comprises contextual data corresponding to the user query (224); and
prior to generating each of the first response (226) and the second response (228), retrieve the contextual data from the historical user data.
| # | Name | Date |
|---|---|---|
| 1 | 202511085763-STATEMENT OF UNDERTAKING (FORM 3) [09-09-2025(online)].pdf | 2025-09-09 |
| 2 | 202511085763-REQUEST FOR EXAMINATION (FORM-18) [09-09-2025(online)].pdf | 2025-09-09 |
| 3 | 202511085763-REQUEST FOR EARLY PUBLICATION(FORM-9) [09-09-2025(online)].pdf | 2025-09-09 |
| 4 | 202511085763-PROOF OF RIGHT [09-09-2025(online)].pdf | 2025-09-09 |
| 5 | 202511085763-POWER OF AUTHORITY [09-09-2025(online)].pdf | 2025-09-09 |
| 6 | 202511085763-FORM-9 [09-09-2025(online)].pdf | 2025-09-09 |
| 7 | 202511085763-FORM 18 [09-09-2025(online)].pdf | 2025-09-09 |
| 8 | 202511085763-FORM 1 [09-09-2025(online)].pdf | 2025-09-09 |
| 9 | 202511085763-FIGURE OF ABSTRACT [09-09-2025(online)].pdf | 2025-09-09 |
| 10 | 202511085763-DRAWINGS [09-09-2025(online)].pdf | 2025-09-09 |
| 11 | 202511085763-DECLARATION OF INVENTORSHIP (FORM 5) [09-09-2025(online)].pdf | 2025-09-09 |
| 12 | 202511085763-COMPLETE SPECIFICATION [09-09-2025(online)].pdf | 2025-09-09 |