Abstract: A MULTI-AGENT AI SYSTEM TO HANDLE NATURAL LANGUAGE QUERIES OF A USER AND METHODS THEREOF ABSTRACT A multi-agent artificial intelligence (AI) system 100 to handle natural language queries of a user, the system 100 comprising a server 106 connected to a computing device 102 via a network 104. The server 106 is configured to: receive a query generated by a user via the computing device 102; classify the user query via a routing agent 114; select a processing path by the routing agent, wherein the routing agent assigns a QUIN agent 116 for structured queries, an ERYL agent 118 for unstructured queries, or a hybrid agent; evaluate the answer from the QUIN agent 116, the ERYL agent 118, or the hybrid agent for relevance, completeness, and groundedness; and deliver the final answer to the computing device 102 via the network.
Description: A MULTI-AGENT AI SYSTEM TO HANDLE NATURAL LANGUAGE QUERIES OF A USER AND METHODS THEREOF
TECHNOLOGICAL FIELD OF THE INVENTION
[0001] The present invention generally pertains to the field of resolving user queries via an agentic flow and, in particular, relates to a multi-agent artificial intelligence (AI) system to handle natural language queries of a user and methods thereof.
BACKGROUND OF THE INVENTION
[0002] Modern enterprises are confronted with the challenge of managing and extracting value from vast volumes of data stored across diverse platforms and formats. This data landscape includes structured systems such as SQL databases, SAP, and Salesforce, alongside unstructured repositories like documents, knowledge bases, emails, and support logs. Traditional question-answering (QA) systems and search interfaces, however, are limited by their design to operate effectively on only a single data modality, whether structured or unstructured. This limitation leads to fragmented information access and hinders decision-making capabilities. Existing technologies suffer from several drawbacks, including siloed data access that forces users to manually integrate data across platforms, lack of unified query interpretation resulting in incomplete responses, and inadequate contextual understanding where structured data lacks narrative depth and unstructured data may lack precision. Moreover, legacy systems rely on static logic with no dynamic routing, failing to evolve alongside changing business environments due to their lack of self-learning and adaptability. They also struggle with scalability, resulting in performance bottlenecks when deployed on a large scale.
[0003] To overcome these challenges, the proposed invention presents a Unified View Approach: a GenAI- and Agentic AI-powered solution capable of processing queries across structured, unstructured, and hybrid data sources. The introduced innovations include Agentic AI-powered multi-agent orchestration for distributed task handling, a smart routing agent leveraging schema metadata and NLP, and QUIN and ERYL agents for SQL and semantic processing, respectively. Support for multi-modal inputs and real-time data access, coupled with continuous learning and adaptation mechanisms, enables this architecture to deliver unified, accurate, and context-aware insights, significantly improving decision quality, response time, and operational efficiency compared to existing QA technologies.
SUMMARY OF THE INVENTION
[0004] According to an aspect of the present invention, a multi-agent artificial intelligence (AI) system to handle natural language queries of a user is disclosed. The system comprises a server connected to a computing device via a network. The server is configured to receive a query generated by a user via the computing device, classify the user query via a routing agent, select a processing path by the routing agent as a QUIN agent for structured queries, an ERYL agent for unstructured queries, or a hybrid agent, evaluate the answer from the QUIN agent, the ERYL agent, or the hybrid agent for relevance, completeness, and groundedness, and deliver the final answer to the computing device via the network.
[0005] In some additional, alternative, or selectively cumulative embodiments, the routing agent classifies the query using schema/entity matching, semantic cue detection, or historical patterns.
[0006] In some other additional, alternative, or selectively cumulative embodiments, the QUIN agent passes the user query to a query generator agent, wherein the query generator agent may include a plurality of sub-query generator agents such as a SAP query generator agent, a SQL query generator agent, and a Salesforce query generator agent. Further, the QUIN agent processes the query generated by the query generator agent in the respective database tool via a query executor agent. Further, the QUIN agent validates and refines the results of the user query in a feedback loop to the query generator agent via a query executor critic agent. Further, the QUIN agent shares the result from the query executor critic agent with an insights generator agent to interpret user inputs, often in natural language, and generate meaningful analyses from the underlying datasets. Further, the QUIN agent evaluates the result from the insights generator agent by an evaluation agent, which shares feedback with the QUIN agent to provide a final answer to the user query.
[0007] In some additional, alternative, or selectively cumulative embodiments, the ERYL agent passes the user query to a vector retrieval-augmented generation (RAG), a graph RAG, or a web search. Further, the ERYL agent processes the query output by the vector RAG, the graph RAG, and the web search through a retriever agent. Further, the ERYL agent fetches the data from an artificial intelligence search, a graph database, or a web search. Further, the ERYL agent sends the query results from the artificial intelligence search, the graph database, or the web search to an answer agent. Further, the ERYL agent evaluates the results received from the answer agent via an evaluation agent. Further, the ERYL agent shares feedback via the evaluation agent to provide a final answer to the user query.
[0008] In some more additional, alternative, or selectively cumulative embodiments, the routing agent processes the user request at least based on natural language understanding (NLU), schema metadata, and ontological cues. Further, the database tool includes the SAP database, the SQL database, or the Salesforce database, etc. Further, the hybrid agent is configured to run the QUIN agent first, and the result from the QUIN agent is passed through the ERYL agent to provide a combined answer.
[0009] According to another aspect of the present invention, the present embodiments provide a method for handling natural language queries of a user. The method steps include generating a query by a user via a computing device. Further, the method steps include classifying the user query via a routing agent. Further, the method steps include selecting a processing path by the routing agent, wherein the routing agent assigns a QUIN agent for structured queries, an ERYL agent for unstructured queries, or a hybrid agent. Further, the method steps include evaluating the answer from the QUIN agent, the ERYL agent, or the hybrid agent for relevance, completeness, and groundedness. Further, the method steps include delivering the final answer via the network on the computing device.
BRIEF DESCRIPTION OF DRAWINGS
[0010] The accompanying drawings illustrate various embodiments of systems, methods, and embodiments of various other aspects of the disclosure. Any person with ordinary skill in the art will appreciate that the illustrated element boundaries (e.g., boxes, groups of boxes, or other shapes) in the figures represent one example of the boundaries. It may be that in some examples one element may be designed as multiple elements or that multiple elements may be designed as one element. In some examples, an element shown as an internal component of one element may be implemented as an external component in another, and vice versa. Furthermore, elements may not be drawn to scale. Non-limiting and non-exhaustive descriptions are described with reference to the following drawings. Having thus described example embodiments of the invention in general terms, illustrative embodiments of the present invention are described herein with reference to the accompanying drawings, for which the components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating principles, and wherein:
[0011] FIG. 1 illustrates a block diagram of a multi-agent artificial intelligence (AI) system 100 to handle natural language queries of a user, in accordance with an embodiment of the disclosure;
[0012] FIG. 2 illustrates a flow diagram of query handling by the multi-agent artificial intelligence (AI) system, in accordance with an embodiment of the disclosure;
[0013] FIG. 3 illustrates a block diagram of physical components (e.g., hardware) of the computing device with which aspects of the disclosure may be practiced, in accordance with an embodiment of the disclosure;
[0014] FIG. 4 illustrates a sequence diagram depicting a method for handling natural language queries of a user, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
[0015] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these specific details. In other instances, systems and methods are shown in block diagram form only in order to avoid obscuring the present disclosure. Example embodiments are described below with reference to the accompanying drawings. Unless otherwise expressly stated in the drawings, the sizes, positions, etc., of components, features, elements, etc., as well as any distances therebetween, are not necessarily to scale, and may be disproportionate and/or exaggerated for clarity.
[0016] The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be recognized that the terms “comprise,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Unless otherwise specified, a range of values, when recited, includes both the upper and lower limits of the range, as well as any sub-ranges therebetween. Unless indicated otherwise, terms such as “first,” “second,” etc., are only used to distinguish one element from another. For example, one element could be termed a “first element” and similarly, another element could be termed a “second element,” or vice versa. The section headings used herein are for organizational purposes only and are not to be construed as limiting the subject matter described.
[0017] Unless indicated otherwise, the terms “about,” “thereabout,” “substantially,” etc. mean that amounts, sizes, formulations, parameters, and other quantities and characteristics are not and need not be exact, but may be approximate and/or larger or smaller, as desired, reflecting tolerances, conversion factors, rounding off, measurement error and the like, and other factors known to those of skill in the art.
[0018] Spatially relative terms, such as “right,” “left,” “below,” “beneath,” “lower,” “above,” and “upper,” and the like, may be used herein for ease of description to describe one element's or feature's relationship to another element or feature, as illustrated in the drawings. It should be recognized that the spatially relative terms are intended to encompass different orientations (e.g. portrait or landscape) in addition to the orientation depicted in the figures. For example, if an object in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the term “below” can, for example, encompass both an orientation of above and below. An object may be otherwise oriented (e.g., rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may be interpreted accordingly.
[0019] Unless clearly indicated otherwise, all connections and all operative connections may be direct or indirect. Similarly, unless clearly indicated otherwise, all connections and all operative connections may be rigid or non-rigid.
[0020] Like numbers refer to like elements throughout. Thus, the same or similar numbers may be described with reference to other drawings even if they are neither mentioned nor described in the corresponding drawing. Also, even elements that are not denoted by reference numbers may be described with reference to other drawings.
[0021] Many different forms and embodiments are possible without deviating from the spirit and teachings of this disclosure and so this disclosure should not be construed as limited to the example embodiments set forth herein. Rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will convey the scope of the disclosure to those skilled in the art.
[0022] Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
[0023] The present invention discloses a multi-agent artificial intelligence (AI) system for unified query processing that intelligently handles natural language queries across structured (e.g., SQL, SAP, Salesforce) and unstructured (e.g., documents, emails, knowledge graphs) data sources. The system includes a dynamic routing agent that classifies queries based on intent and modality, and processes them via specialized agents, QUIN (structured) and ERYL (semantic), or through a hybrid pipeline that fuses both.
[0024] Further, the system supports multi-modal inputs (text, audio, image), integrates advanced retrieval techniques including vector RAG, GraphRAG, and web search, and continuously improves through feedback-driven self-learning. The present invention focuses on context-aware, accurate, and scalable query responses, with automated orchestration, cross-source synthesis, and adaptive evolution.
[0025] Referring initially to FIG. 1, a block diagram of a multi-agent artificial intelligence (AI) system 100 to handle natural language queries of a user is illustrated, in accordance with an embodiment of the disclosure. The system 100 includes various equipment; for example, the system 100 includes a computing device 102 to generate a natural language query that can be processed by a server 106. The computing device 102 may include its own processing circuitry to capture the natural language query. It may be noted that the natural language query may be in the form of text, image, or audio. The natural language query is shared with the server 106 over the network 104. The network 104 may include a plurality of servers (not shown) to process the natural language queries from multiple users. Furthermore, the computing device 102 may include a software application (not shown) to generate the natural language query. In an exemplary embodiment, the request may be generated by the user or an artificial intelligence (AI) agent having access to the software application.
[0026] In some embodiments, the server 106 may include a processing circuitry 108, which includes a memory 110 and a database 112. When one or more natural language query requests are received by the server 106, the processing circuitry 108 may fetch the data from the database 112 into the memory 110 to run a pre-determined set of instructions. The database 112 may store pre-determined parameters, such as rules for routing the natural language query request associated with the user. The processing circuitry 108 is configured to read the instructions from the memory 110. The processing circuitry 108, at least based on the instructions, may process the request received from the computing device 102. The memory 110 may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program modules. The memory 110 may include RAM, ROM, electrically erasable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture which can be used to store information and which can be accessed by the server 106. Furthermore, the database 112 may store pre-determined parameters such as name, date, age, gender, location, payment receipt, access key, and product information, etc., associated with the user.
[0027] Furthermore, the server 106 may include a routing agent 114. The routing agent 114 identifies an optimal path to address the query. The natural language query may be classified via natural language understanding (NLU), schema metadata, and ontological cues. The routing agent 114 dynamically classifies the incoming query as structured (SQL), semantic (unstructured), or hybrid. The routing agent 114 may route the query to a QUIN agent 116, an ERYL agent 118, or a hybrid agent (a combination of the QUIN agent 116 and the ERYL agent 118). The QUIN agent 116 is configured to process structured queries (such as SQL, SAP, Salesforce) and the ERYL agent 118 is configured to process unstructured queries (such as PDFs, documents, audio). In an embodiment, the QUIN agent 116 is configured to convert the natural language query into a structured query such as a SQL query. The structured query is executed using schema-aware text-to-SQL models. The QUIN agent 116 summarizes the structured output into human-readable insights. Furthermore, the ERYL agent 118 is configured to use vector RAG, GraphRAG, and web search to retrieve content. The retrieval may be done via nearest neighbor search in vector space, often powered by libraries like FAISS or Annoy that enable efficient similarity search. The ERYL agent 118 applies a semantic LLM for summarization and response synthesis. Furthermore, the routing agent may utilize hybrid approaches to combine the query resolution techniques of the QUIN agent 116 and the ERYL agent 118.
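By way of a non-limiting illustration, the classification step of the routing agent described above may be sketched as follows. The cue lists, schema terms, and default fallback are illustrative assumptions for this sketch, not the actual implementation, which would rely on NLU models, schema metadata, and historical patterns:

```python
import re

# Illustrative cue sets; a production routing agent would use NLU models,
# schema metadata, and historical routing patterns instead of fixed keywords.
STRUCTURED_CUES = {"count", "total", "average", "revenue", "sum"}
UNSTRUCTURED_CUES = {"explain", "summarize", "why", "describe", "policy"}

def classify_query(query: str, schema_terms: set) -> str:
    """Classify a query as 'structured' (QUIN), 'unstructured' (ERYL), or 'hybrid'."""
    text = query.lower()
    tokens = set(re.findall(r"[a-z]+", text))
    structured = bool(tokens & STRUCTURED_CUES) or "how many" in text \
        or bool({t.lower() for t in schema_terms} & tokens)
    unstructured = bool(tokens & UNSTRUCTURED_CUES)
    if structured and unstructured:
        return "hybrid"
    # Queries with no structured signal default to the semantic (ERYL) path.
    return "structured" if structured else "unstructured"
```

For example, a query mentioning a known schema entity such as "orders" together with a narrative cue such as "explain" would be routed down the hybrid path.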
[0028] Now turning to FIG. 2, a flow diagram of query handling by the multi-agent artificial intelligence (AI) system 100 is illustrated, in accordance with an embodiment of the disclosure. FIG. 2 is explained in conjunction with the elements of FIG. 1.
[0029] The network 104 may be configured to act as a proxy between the computing device 102 and the server 106. The network 104 may be configured to maintain privacy, anonymity, security, content filtering, caching, or load balancing, etc. The network 104 shares a user query 202 with the routing agent 114. The routing agent 114 is configured to process the user query 202 at least based on natural language understanding (NLU), schema metadata, and ontological cues. The routing agent 114 processes the user query 202 via the QUIN agent 116 and the ERYL agent 118. In an embodiment, the QUIN agent 116 is configured to pass the user query 202 to a query generator agent 204. The query generator agent 204 may include a plurality of sub-query generator agents such as a SAP query generator agent 206, a SQL query generator agent 208, and a Salesforce query generator agent 210, etc.
[0030] The query generator agent 204 is configured to convert the user query into executable database queries using different agents that are trained on specific databases such as SAP, SQL, or Salesforce, etc. The query generator agent 204 further leverages artificial intelligence or large language models to provide a meaningful query tailored for a desired database tool 214. For example, a user generates a query to track an order on an e-commerce platform. The query generator agent 204 is configured to process the query and convert it into a format suitable to be understood by the database tool 214. In an exemplary embodiment, the SAP query generator agent 206 may include a database named the Amazon database with a table named Products having columns: Product ID, Product Name, Category, Price, and Stock Quantity, etc.
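As a simplified, non-limiting sketch of the query generation step, a template-based translation from extracted query parameters to a SQL string is shown below. A deployed query generator agent would use a schema-aware large language model; the template merely makes the shape of the output visible, and the table and column names are borrowed from the exemplary Products table above:

```python
# Simplified stand-in for the query generator agent: a deployed system would
# use a schema-aware LLM; a fixed template makes the output shape visible.
def generate_sql(table: str, filters: dict, columns: str = "*") -> str:
    """Build a SQL SELECT statement for a target table from extracted parameters."""
    sql = f"SELECT {columns} FROM {table}"
    if filters:
        where = " AND ".join(
            f"{col} = '{val}'" for col, val in sorted(filters.items())
        )
        sql += f" WHERE {where}"
    return sql + ";"
```

For example, a request about books in stock might be rendered as `generate_sql("Products", {"Category": "Books"})`, yielding `SELECT * FROM Products WHERE Category = 'Books';`.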
[0031] Further, a query executor agent 212 is configured to process the query generated by the query generator agent 204. The query executor agent 212 further executes the query in the database tool 214. The database tool 214 may include the SAP database, the SQL database, or the Salesforce database, etc. For example, the query executor agent 212 is configured to execute the SAP query generator agent 206 commands in the SAP database tool. The SAP database tool may include distributor information, product pricing, or product stock, etc. Similarly, the query executor agent 212 is configured to execute the SQL query generator agent 208 commands in the SQL database tool. The SQL database tool may include order details, invoice information, shipment information, products, or payment details, etc. Similarly, the query executor agent 212 is configured to execute the Salesforce query generator agent 210 commands in the Salesforce database tool. The Salesforce database tool may include user inventory, etc.
[0032] In an embodiment, the collected information from the database tool 214 is shared with a query executor critic agent 216. The query executor critic agent 216 refers to a specialized component that plays a crucial role in validating and refining the results of the user query 202 execution. The query executor critic agent 216 may evaluate and validate the information received to ensure the output aligns with the user's intent and meets specified criteria for accuracy and completeness. Acting as a quality control mechanism, the query executor critic agent 216 helps to reduce errors, enhance the reliability of the system's responses, and ultimately raise the standard of the final product or analysis. Further, the query executor critic agent 216 may flag or identify potential issues such as syntactic errors (e.g., in generated code or SQL) and semantic errors (where the logic of the executed query does not align with the user's intended meaning). If the query executor critic agent 216 finds issues, it can provide feedback or suggest modifications to improve the quality, efficiency, or readability of the executed query or generated output back to the query generator agent 204.
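The validate-and-regenerate feedback loop between the query executor critic agent and the query generator agent may be sketched, in hedged outline, as follows. The acceptance rule (a non-empty result set) and the attempt limit are assumptions for illustration only; a real critic would also check syntactic and semantic alignment as described above:

```python
# Hedged sketch of the executor/critic feedback loop: generate, execute,
# validate, and regenerate with feedback until accepted or attempts run out.
def run_with_critic(generate, execute, max_attempts: int = 3) -> dict:
    """Loop a generator and executor under a simple critic acceptance rule."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        query = generate(feedback)          # generator may use critic feedback
        result = execute(query)
        if result:                          # stand-in critic check: non-empty result
            return {"result": result, "attempts": attempt}
        feedback = f"query returned no rows: {query}"  # fed back to the generator
    return {"result": None, "attempts": max_attempts}
```

The callables `generate` and `execute` are placeholders for the query generator agent and query executor agent, respectively.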
[0033] In an embodiment, the result from the query executor critic agent 216 is shared with an insights generator agent 218. The insights generator agent 218 typically interprets user inputs, often in natural language, and generates meaningful analyses from the underlying datasets. The insights generator agent 218 automates data querying, performs statistical or pattern analysis, and presents results through reports, visualizations, or narrative summaries, supporting faster and deeper decision-making. Furthermore, the result from the insights generator agent 218 is evaluated by an evaluation agent 220. The evaluation agent 220 evaluates the results against predefined KPIs such as groundedness, relevance, etc. The evaluation agent 220 shares feedback with the QUIN agent 116. The QUIN agent 116 provides a final answer to the user query 202.
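A minimal sketch of the evaluation agent's KPI check appears below. The lexical-overlap scoring and the 0.2 threshold are stand-in assumptions; a production evaluator would compute relevance and groundedness with model-based metrics:

```python
# Stand-in KPI scoring via token overlap; illustrative only.
def evaluate_answer(answer: str, query: str, sources: list,
                    threshold: float = 0.2) -> dict:
    """Score an answer on assumed relevance/groundedness KPIs and gate on a threshold."""
    answer_tokens = set(answer.lower().split())
    relevance = len(answer_tokens & set(query.lower().split())) / max(len(answer_tokens), 1)
    groundedness = len(answer_tokens & set(" ".join(sources).lower().split())) / max(len(answer_tokens), 1)
    return {
        "relevance": relevance,
        "groundedness": groundedness,
        "accepted": relevance >= threshold and groundedness >= threshold,
    }
```

An answer failing either KPI would be routed back as feedback rather than delivered to the user.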
[0034] In an embodiment, the ERYL agent 118 is configured to pass the user query 202 to a vector retrieval-augmented generation (RAG) 222, a graph RAG 224, or a web search 226. These RAG systems highlight vector-based, graph-based, and hybrid methodologies. The vector-based RAG 222 approach relies on dense semantic embeddings, utilizing models like BERT to represent and retrieve documents by converting queries into vectors and ranking document similarity through scalable indexing methods such as ANN or HNSW. The graph-based RAG 224 systems employ structured graphs and graph neural networks (GNN) to process queries by entity linking, multi-hop path ranking, and subgraph matching, making them suitable for relationship-heavy queries and domains requiring deep reasoning, exemplified by Google's Knowledge Graph Search API. In another embodiment, hybrid RAG approaches (not shown) merge vector and graph-based methods, with dual-representation systems combining text semantics with structured links and hierarchical retrieval systems that unite fast vector search with subsequent graph-based reasoning, aiming for efficient semantic retrieval alongside enhanced relationship modeling and reasoning capabilities. Furthermore, the web search 226 may be used to convert the query into database-readable instructions.
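To make the multi-hop idea behind the graph RAG concrete, a toy breadth-first traversal over an assumed knowledge graph is sketched below. The graph contents, entity names, and two-hop limit are illustrative assumptions only; a real system would use entity linking, GNN-based path ranking, and subgraph matching as described above:

```python
from collections import deque

# Toy knowledge graph; the entities and relations are illustrative assumptions.
GRAPH = {
    "acme_corp": [("has_contract", "contract_42")],
    "contract_42": [("grants_discount", "discount_10pct")],
    "discount_10pct": [],
}

def multi_hop_facts(entity: str, max_hops: int = 2) -> list:
    """Breadth-first walk from a linked entity, collecting (subject, relation, object) triples."""
    facts, queue = [], deque([(entity, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_hops:
            continue  # stop expanding beyond the hop limit
        for relation, target in GRAPH.get(node, []):
            facts.append((node, relation, target))
            queue.append((target, depth + 1))
    return facts
```

The collected triples would then serve as structured context for answer synthesis.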
[0035] In an embodiment, the query processed by the vector RAG 222, the graph RAG 224, and the web search 226 is passed to a retriever agent 228. The retriever agent is commonly used in information retrieval and natural language processing (NLP) systems. It refers to a component or an AI agent whose primary responsibility is to search, find, and fetch relevant documents or pieces of data in response to the user query 202. Further, the data may be fetched from an artificial intelligence search 230, a graph database 232, or a web search 234. For example, the AI search 230 may include product names, images, and descriptions, etc. Similarly, the graph database 232 may include contracts, discounts, policies, or offers, etc. Similarly, the web search 234 may include news, future goals, or market analysis, etc.
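By way of a toy, non-limiting sketch, the retriever agent's nearest-neighbor step can be illustrated with plain cosine similarity. A real pipeline would embed text with a model such as BERT and search with libraries like FAISS or Annoy; the hand-made vectors and document names below are assumptions for illustration:

```python
import math

def cosine(u, v) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def retrieve(query_vec, corpus: dict, k: int = 2) -> list:
    """Return the k document ids whose vectors are most similar to the query vector."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]), reverse=True)
    return ranked[:k]
```

The top-k document ids stand in for the relevant documents the retriever agent would fetch and pass onward.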
[0036] Further, the query results from the artificial intelligence search 230, the graph database 232, or the web search 234 are sent to an answer agent 236. The answer agent 236 leverages artificial intelligence and natural language processing to interpret user queries, break down complex questions, and generate relevant answers or summaries based on internal or external data sources. The evaluation agent 220 evaluates the results received from the answer agent 236. The evaluation agent 220 shares feedback with the ERYL agent 118. The ERYL agent 118 provides a final answer to the user query 202.
[0037] In an embodiment, the QUIN agent 116 and the ERYL agent 118 may be combined to provide more accurate results.
[0038] Now turning to FIG. 3, a block diagram illustrating physical components (e.g., hardware) of the computing device 102 with which aspects of the disclosure may be practiced is shown, in accordance with an embodiment of the disclosure. FIG. 3 is explained in conjunction with the elements of FIG. 1 and FIG. 2. In a basic configuration, the computing device 102 may include at least one processing unit 302 and a system memory 304. Depending on the configuration and type of computing device 102, the system memory 304 may comprise, but is not limited to, volatile storage (e.g., random access memory), non-volatile storage (e.g., read-only memory), flash memory, or any combination of such memories.
[0039] In some aspects, the system memory 304 may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, or program modules. The system memory 304 may include RAM, ROM, electrically erasable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other article of manufacture which can be used to store information, and which can be accessed by the computing device 102. Any such system memory 304 may be part of the computing device 102. The system memory 304 does not include a carrier wave or other propagated or modulated data signal.
[0040] The computing device 102 may run an operating system 306 at least based on the instructions stored in the system memory 304, via the processing unit 302. The operating system 306 may comprise one or more program applications suitable for running a software application 308. The operating system 306, for example, may be suitable for controlling the operation of the computing device 102. Furthermore, aspects of the disclosure may be practiced in conjunction with a graphics library, other operating systems, or any other application program and are not limited to any particular application or system. This basic configuration is illustrated in FIG. 3 by those components within a dashed line 312. The computing device 102 may have additional features or functionality. For example, the computing device 102 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
[0041] As stated above, the processing unit 302 may run the software application 308 of the operating system 306, which includes an interface 310 for handling natural language queries of a user. It may be noted that the computing device 102 of the disclosure may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. For example, aspects of the disclosure may be practiced via a system-on-a-chip (SOC) where each or many of the components illustrated in FIG. 3 may be integrated onto a single integrated circuit. Such an SOC device may include one or more processing units, graphics units, communications units, system virtualization units, and various application functionality, all of which are integrated (or "burned") onto the chip substrate as a single integrated circuit. When operating via an SOC, the functionality described herein with respect to the capability of a client to switch protocols may be operated via application-specific logic integrated with other components of the computing device 102 on the single integrated circuit (chip).
[0042] Aspects of the disclosure may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, aspects of the disclosure may be practiced within a general-purpose computer or in any other circuits or systems.
[0043] As stated above, the software application 308 and data files may be stored in the system memory 304. Furthermore, the operating system 306 may include the software application 308 that may be used in accordance with aspects of the present disclosure, and in particular to generate a request for one or more products, etc. In some aspects, the interface 310 allows a user to select one or more products from the above-referenced application in a more effective, more efficient, and improved manner.
[0044] Further, the software application 308 may receive the input from the user. The input may be processed by the processing unit 302. Further, the processing unit 302 is electronically connected to one or more communication connections 314 allowing communications with other computing devices 102. Examples of suitable communication connections 314 include, but are not limited to, RF transmitter, receiver, and/or transceiver circuitry, universal serial bus (USB), parallel, and/or serial ports. The communication connections 314 may generate a link for the computing device 102 to communicate with the network 104. The network 104 may communicate with the server 106 to validate the request generated by the user.
[0045] Further, the communication module 210 may be embodied with the processing unit 202 to communicate computer readable instructions, data structures, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media.
[0046] Referring now to FIG. 4, a sequence diagram is illustrated that depicts a method 400 for handling natural language queries of a user, in accordance with an embodiment of the present invention. FIG. 4 is explained in conjunction with elements of FIG. 1 and FIG. 2. The system 100, as depicted in FIG. 1 and FIG. 2, may be configured to implement the method 400 as shown in FIG. 4. The method 400 for handling natural language queries of a user is elaborated below.
[0047] At step S402, the user 112 generates a query via the computing device 102. The user 112 may generate the query from the interface 312 of the software application 308 installed in the computing device 102. The computing device 102 processes the request and shares it with the server 106 over the network 104. The query may include text, audio, or image. The audio and image may be transcribed or OCR-processed into text using standard AI modules.
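The transcription step above can be sketched as a small normalization layer that reduces any supported modality to plain text before routing. The `Query` type and the ASR/OCR helper stubs below are hypothetical placeholders for the "standard AI modules" mentioned, not part of the disclosed system:

```python
from dataclasses import dataclass


@dataclass
class Query:
    modality: str          # "text", "audio", or "image"
    payload: "bytes | str" # raw bytes for audio/image, str for text


def transcribe_audio(payload: bytes) -> str:
    """Placeholder for a speech-to-text (ASR) module."""
    raise NotImplementedError("plug in an ASR model here")


def ocr_image(payload: bytes) -> str:
    """Placeholder for an OCR module."""
    raise NotImplementedError("plug in an OCR engine here")


def normalize_query(query: Query) -> str:
    """Reduce any supported input modality to plain text for routing."""
    if query.modality == "text":
        return str(query.payload)
    if query.modality == "audio":
        return transcribe_audio(query.payload)
    if query.modality == "image":
        return ocr_image(query.payload)
    raise ValueError(f"unsupported modality: {query.modality}")
```

Text passes through unchanged, while audio and image inputs are funneled through their respective converters, so every downstream agent only ever sees text.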
[0048] At step S404, the server 106 classifies the query received from the computing device 102 via the routing agent 114. The routing agent 114 classifies the query using schema/entity matching, semantic cue detection, or historical patterns.
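One minimal way such a classification could work is keyword-level schema/entity matching combined with semantic cue detection. The vocabulary sets below are illustrative assumptions; a production routing agent would use NLU models, schema metadata, and historical patterns as described:

```python
import re

# Hypothetical schema metadata: table/column names known to the structured stores.
SCHEMA_TERMS = {"revenue", "orders", "invoice", "customer_id", "quantity"}
# Semantic cues that usually indicate a narrative, document-oriented question.
UNSTRUCTURED_CUES = {"why", "explain", "summarize", "policy", "describe"}


def classify_query(text: str) -> str:
    """Return 'structured', 'unstructured', or 'hybrid' as a routing decision."""
    tokens = set(re.findall(r"[a-z_]+", text.lower()))
    hits_schema = bool(tokens & SCHEMA_TERMS)     # schema/entity matching
    hits_cues = bool(tokens & UNSTRUCTURED_CUES)  # semantic cue detection
    if hits_schema and hits_cues:
        return "hybrid"
    if hits_schema:
        return "structured"
    return "unstructured"
```

A query that names a known schema entity routes to the structured path, a purely narrative question routes to the unstructured path, and a query that does both is flagged hybrid.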
[0049] At step S406, after the classification, the server 106 may select the processing path from the QUIN agent 116, the ERYL agent 118, or the hybrid agent (a combination of the QUIN agent 116 and the ERYL agent 118). Furthermore, the QUIN agent 116 converts the query from text to SQL. The SQL query is executed in the database, and the tabulated output is extracted from the SQL database. The tabulated data is then converted into a summary as the answer. Furthermore, the ERYL agent 118 conducts a RAG search whose results are aggregated to generate the context. The context is processed against the database to generate the answer. Furthermore, the hybrid agent is configured to run the QUIN agent 116 first; the result from the QUIN agent 116 is passed through the ERYL agent 118 to provide a combined answer.
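The three processing paths above can be sketched as follows. Every helper here (`text_to_sql`, `execute_sql`, `rag_search`, and so on) is a stub with canned return values standing in for the real QUIN/ERYL components, purely to show the control flow:

```python
def text_to_sql(text: str) -> str:
    """Stub for the QUIN text-to-SQL step (hypothetical canned output)."""
    return "SELECT SUM(revenue) FROM sales"


def execute_sql(sql: str) -> list:
    """Stub for execution against a SQL/SAP/Salesforce store."""
    return [("total_revenue", 1250000)]


def summarize_rows(rows: list) -> str:
    """Convert tabulated output into a short natural-language summary."""
    return "; ".join(f"{name} = {value}" for name, value in rows)


def rag_search(text: str) -> list:
    """Stub for the ERYL vector/graph RAG retrieval step."""
    return ["Q3 revenue grew on strong enterprise demand."]


def generate_answer(text: str, context: str) -> str:
    """Stub for an answer generator grounded in the retrieved context."""
    return f"Answer to '{text}' based on: {context}"


def quin_agent(text: str) -> str:
    """Structured path: text -> SQL -> execute -> summarize."""
    return summarize_rows(execute_sql(text_to_sql(text)))


def eryl_agent(text: str, context: str = "") -> str:
    """Unstructured path: RAG results aggregated into a grounded answer."""
    docs = " ".join(rag_search(text))
    return generate_answer(text, (context + " " if context else "") + docs)


def run_path(path: str, text: str) -> str:
    if path == "structured":
        return quin_agent(text)
    if path == "unstructured":
        return eryl_agent(text)
    # Hybrid: QUIN runs first; its result is fed to ERYL for a combined answer.
    return eryl_agent(text, context=quin_agent(text))
```

Note how the hybrid branch simply composes the other two: the QUIN summary becomes additional context for the ERYL answer, matching the sequencing described above.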
[0050] At step S408, the server 106 evaluates the answer for relevance, completeness, and groundedness. In an embodiment, if the answer is unsatisfactory, the server 106 may share feedback with the QUIN agent 116 or the ERYL agent 118 to refine the answer.
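A minimal sketch of this evaluation-and-refinement loop is shown below, assuming a hypothetical `evaluate` scorer that rates relevance, completeness, and groundedness on a 0–1 scale; the fixed scores and threshold are illustrative, as the actual scoring mechanism is not specified here:

```python
def evaluate(answer: str, query: str) -> dict:
    """Stub scorer; a real system might use an LLM judge or retrieval overlap.
    The scores returned here are illustrative constants."""
    return {"relevance": 0.9, "completeness": 0.8, "groundedness": 0.85}


def refine_until_satisfactory(query: str, answer_fn,
                              threshold: float = 0.7,
                              max_rounds: int = 3) -> str:
    """Re-invoke the answering agent with feedback until all KPIs pass
    the threshold or the retry budget is exhausted."""
    feedback = ""
    answer = answer_fn(query, feedback)
    for _ in range(max_rounds):
        scores = evaluate(answer, query)
        failing = [kpi for kpi, score in scores.items() if score < threshold]
        if not failing:
            break
        # Feedback naming the failing KPIs is routed back to QUIN/ERYL.
        feedback = f"improve: {', '.join(failing)}"
        answer = answer_fn(query, feedback)
    return answer
```

The loop terminates either when every KPI clears the threshold or after a bounded number of refinement rounds, so an unsatisfiable query cannot spin forever.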
[0051] At step S410, the server 106 delivers the final answer via the network 104 to the computing device 102. The user receives a unified, explainable response in a conversational format or a dashboard view.
[0052] The multi-agent artificial intelligence (AI) system 100 provides unified query processing across modalities, which represents a significant innovation in data handling. Traditionally, systems were designed to process either structured data, like that managed by SQL-based business intelligence tools, or unstructured data, such as that navigated by document search engines. The present invention bridges the gap between these two data types, allowing for the seamless integration and processing of hybrid queries that include both structured (tabular) and unstructured (textual, audio, image) data. The result is the generation of a single, coherent, and context-aware response that effectively combines diverse data forms, enhancing query handling capabilities and delivering more comprehensive insights.
[0053] Further, the multi-agent artificial intelligence (AI) system 100 provides intelligent query routing and classification, marking a transformative shift from traditional methods that relied on static rules or hardcoded logic for determining query paths. This innovation features a dynamic routing agent capable of classifying and directing queries using advanced natural language understanding (NLU), schema metadata, and ontological patterns. By leveraging these sophisticated techniques, the system ensures more accurate and contextually relevant processing paths for queries, greatly enhancing the efficiency and precision of data handling and retrieval.
[0054] Further, the multi-agent artificial intelligence (AI) system 100 provides multi-modal input capability, representing a significant advancement over earlier systems that were limited to text-only input. This innovation allows for the acceptance of diverse query forms, including text, image, and audio, thereby broadening the system's usability. It facilitates more natural user interactions and enhances functionality across multi-device, multi-format enterprise environments. This expanded capability supports a more intuitive and versatile user experience, catering to the varied ways in which users interact with technology today.
[0055] Further, the multi-agent artificial intelligence (AI) system 100 provides cross-source data fusion, representing a significant leap forward from traditional systems, which typically provided fragmented answers from isolated sources and burdened users with the task of data interpretation. This innovation performs real-time data synthesis, effectively integrating information from diverse heterogeneous sources. By combining numeric metrics with semantic context, the system delivers complete and actionable insights, thereby streamlining decision-making processes and enhancing the overall value derived from data. This comprehensive approach ensures that users receive fully synthesized information, eliminating silos and offering a more holistic understanding of data landscapes.
[0056] Further, the multi-agent artificial intelligence (AI) system 100 provides adaptive learning and self-optimization, marking a departure from traditional static systems that necessitated manual reconfiguration to maintain their relevance. This innovation incorporates self-learning feedback loops, which facilitate continuous system improvement. By evaluating performance against predefined key performance indicators (KPIs) such as groundedness and relevance, the system can refine queries and automatically retrain its models. This dynamic approach ensures that the system evolves and adapts over time, maintaining optimal performance and enhancing its ability to deliver accurate and contextually relevant results without the need for manual intervention.
[0057] Further, the multi-agent artificial intelligence (AI) system 100 provides a scalable and modular architecture that addresses the limitations of earlier systems, which often faced challenges in scaling efficiently across various departments, datasets, and use cases. Designed for horizontal scalability, this architecture supports distributed processing and cloud orchestration, facilitating seamless expansion. Its plug-and-play modules enable easy adaptation to new domains or data types, offering flexible scalability and versatility. This approach ensures that the system can grow and adapt in line with organizational needs, providing robust support for a wide range of applications and facilitating the integration of diverse datasets within a unified framework.
[0058] Further, the multi-agent artificial intelligence (AI) system 100 provides enhanced accuracy and contextual relevance, addressing the shortcomings of earlier systems that, while factually correct, often failed to provide complete contextual understanding. This invention utilizes hybrid reasoning by integrating techniques like Vector RAG and GraphRAG, complemented by post-processing agents such as ERYL and QUIN. This combination enhances the precision and relevance of responses, aligning them more closely with specific business contexts. By ensuring domain-specific accuracy, the system delivers more comprehensive and context-aware answers, thereby significantly improving the quality and applicability of information provided to users.
[0059] It should be understood that the foregoing description is only illustrative of the aspects of the disclosed embodiments. Various alternatives and modifications may be devised by those skilled in the art without departing from the aspects of the disclosed embodiments.
[0060] Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of components and/or functions, it should be appreciated that different combinations of components and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of components and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
Claims:
We Claim:
1. A multi-agent artificial intelligence (AI) system 100 to handle natural language queries of a user, the system 100 comprising:
a server 106 connected to a computing device 102 via a network 104, the server 106 is configured to:
generating a query by a user via a computing device 102;
classifying the user query via a routing agent 114;
selecting a processing path by the routing agent, wherein the routing agent assigns a QUIN agent 116 for a structured query, an ERYL agent 118 for an unstructured query, or a hybrid agent;
evaluating the answer for relevance, completeness, and groundedness from the QUIN agent 116, the ERYL agent 118, or the hybrid agent; and
delivering the final answer via the network on the computing device 102.
2. The system 100 of claim 1, wherein the routing agent classifies the query using schema/entity matching, semantic cue detection, or historical patterns.
3. The system 100 of claim 1, wherein the QUIN agent 116 is configured for:
processing the user query 202 to a query generator agent 204, wherein the query generator agent 204 may include a plurality of sub-query generator agents such as a SAP query generator agent 206, a SQL query generator agent 208, and a Salesforce query generator agent 210;
processing the query generated by the query generator agent 204 in a respective database tool 214 via a query executor agent 212;
validating and refining the results of the user query 202 in a feedback loop to the query generator agent 204 via a query executor critic agent 216;
sharing the result from the query executor critic agent 216 with an insights generator agent 218 to interpret user inputs often in natural language and generate meaningful analyses from underlying datasets; and
evaluating the result from the insights generator agent 218 by an evaluation agent 220 and shares feedback with the QUIN agent 116 to provide a final answer to the user query 202.
4. The system 100 of claim 1, wherein the ERYL agent 118 is configured for:
processing the user query 202 to a vector retrieval-augmented generation (RAG) 222, a graph RAG 224 or a web search 226;
processing the query by the vector RAG 222, the graph RAG 224 and the web search 226 to a retriever agent 228;
fetching the data from an artificial intelligence search 230, a graph database 232, or a web search 234;
sending query results from the artificial intelligence search 230, the graph database 232, or the web search 234 to an answer agent 236;
evaluating the results received from the answer agent 236 via an evaluation agent 240; and
sharing feedback with the ERYL agent 118 via the evaluation agent 240, to provide a final answer to the user query 202.
5. The system 100 of claim 1, wherein the routing agent 114 processes the user request 202 at least based on natural language understanding (NLU), schema metadata, and ontological patterns.
6. The system 100 of claim 3, wherein the database tool 214 includes an SAP database, a SQL database, or a Salesforce database.
7. The system 100 of claim 6, wherein the hybrid agent is configured to run the QUIN agent 116 first, and the result from the QUIN agent 116 is passed through the ERYL agent 118 to provide a combined answer.
8. A method 400 for handling natural language queries of a user, the method 400 comprising:
generating a query by a user via a computing device 102;
classifying the user query via a routing agent 114;
selecting a processing path by the routing agent, wherein the routing agent assigns a QUIN agent 116 for a structured query, an ERYL agent 118 for an unstructured query, or a hybrid agent;
evaluating the answer for relevance, completeness, and groundedness from the QUIN agent 116, the ERYL agent 118, or the hybrid agent; and
delivering the final answer via the network on the computing device 102.
9. The method 400 of claim 8, wherein the QUIN agent 116 performs:
processing the user query 202 to a query generator agent 204, wherein the query generator agent 204 may include a plurality of sub-query generator agents such as a SAP query generator agent 206, a SQL query generator agent 208, and a Salesforce query generator agent 210;
processing the query generated by the query generator agent 204 in a respective database tool 214 via a query executor agent 212;
validating and refining the results of the user query 202 in a feedback loop to the query generator agent 204 via a query executor critic agent 216;
sharing the result from the query executor critic agent 216 with an insights generator agent 218 to interpret user inputs often in natural language and generate meaningful analyses from underlying datasets; and
evaluating the result from the insights generator agent 218 by an evaluation agent 220 and shares feedback with the QUIN agent 116 to provide a final answer to the user query 202.
10. The method 400 of claim 8, wherein the ERYL agent 118 performs:
processing the user query 202 to a vector retrieval-augmented generation (RAG) 222, a graph RAG 224 or a web search 226;
processing the query by the vector RAG 222, the graph RAG 224 and the web search 226 to a retriever agent 228;
fetching the data from an artificial intelligence search 230, a graph database 232, or a web search 234;
sending query results from the artificial intelligence search 230, the graph database 232, or the web search 234 to an answer agent 236;
evaluating the results received from the answer agent 236 via an evaluation agent 240; and
sharing feedback with the ERYL agent 118 via the evaluation agent 240, to provide a final answer to the user query 202.
| # | Name | Date |
|---|---|---|
| 1 | 202541083519-STATEMENT OF UNDERTAKING (FORM 3) [02-09-2025(online)].pdf | 2025-09-02 |
| 2 | 202541083519-REQUEST FOR EARLY PUBLICATION(FORM-9) [02-09-2025(online)].pdf | 2025-09-02 |
| 3 | 202541083519-FORM-9 [02-09-2025(online)].pdf | 2025-09-02 |
| 4 | 202541083519-FORM FOR SMALL ENTITY(FORM-28) [02-09-2025(online)].pdf | 2025-09-02 |
| 5 | 202541083519-FORM FOR SMALL ENTITY [02-09-2025(online)].pdf | 2025-09-02 |
| 6 | 202541083519-FORM 1 [02-09-2025(online)].pdf | 2025-09-02 |
| 7 | 202541083519-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [02-09-2025(online)].pdf | 2025-09-02 |
| 8 | 202541083519-EDUCATIONAL INSTITUTION(S) [02-09-2025(online)].pdf | 2025-09-02 |
| 9 | 202541083519-DRAWINGS [02-09-2025(online)].pdf | 2025-09-02 |
| 10 | 202541083519-DECLARATION OF INVENTORSHIP (FORM 5) [02-09-2025(online)].pdf | 2025-09-02 |
| 11 | 202541083519-COMPLETE SPECIFICATION [02-09-2025(online)].pdf | 2025-09-02 |
| 12 | 202541083519-MSME CERTIFICATE [03-09-2025(online)].pdf | 2025-09-03 |
| 13 | 202541083519-FORM28 [03-09-2025(online)].pdf | 2025-09-03 |
| 14 | 202541083519-FORM 18A [03-09-2025(online)].pdf | 2025-09-03 |
| 15 | 202541083519-Proof of Right [19-09-2025(online)].pdf | 2025-09-19 |
| 16 | 202541083519-FORM-26 [19-09-2025(online)].pdf | 2025-09-19 |
| 17 | 202541083519-FORM-5 [13-10-2025(online)].pdf | 2025-10-13 |