Abstract: AN APPARATUS AND METHOD UTILIZING RETRIEVAL-AUGMENTED GENERATION (RAG) FOR ADAPTIVE LEARNING. The present invention discloses an apparatus and method utilizing Retrieval-Augmented Generation (RAG) for adaptive learning, the apparatus comprising a computing device, memory and storage components, one or more input devices, one or more output devices, a power source, and communication interfaces. The apparatus and method involve using RAG and AI processing for retrieving relevant information from multiple knowledge sources. The apparatus and method are cost-effective: a single, compact apparatus with offline capabilities that is affordable, scalable, and user-friendly, and that integrates seamlessly into educational environments. Figure 1
Description: AN APPARATUS AND METHOD UTILIZING RETRIEVAL-AUGMENTED GENERATION (RAG) FOR ADAPTIVE LEARNING
FIELD OF THE INVENTION
[001] The present invention relates to the field of AI-driven educational technology, specifically designed to provide real-time, interactive assistance in learning environments. It combines voice recognition, adaptive AI responses, and seamless user interaction to enhance educational experiences.
BACKGROUND OF THE INVENTION
[002] Current teaching assistance devices for education rely on pre-programmed responses, require additional software/hardware for real-time interaction, and lack adaptive learning capabilities. These systems do not support continuous learning, multi-modal interaction, or offline functionality, making them less effective for dynamic teaching environments.
[003] Current teaching assistance devices often rely on automated feedback or text-based systems, requiring additional software or hardware for real-time interaction. While voice recognition and AI advancements are emerging, they have not yet been integrated into compact, affordable, and adaptive solutions. Existing devices lack continuous learning capabilities and fail to provide personalized, context-aware responses in real time. This highlights the need for an all-in-one, portable device that offers voice-based assistance, seamless integration, and real-time adaptive responses to enhance teaching and learning experiences. Existing solutions are often costly and complex, limiting accessibility. There is a need for interactive and personalized teaching tools in modern education.
[004] Voice-activated systems and AI-driven conversational interfaces have seen significant adoption in recent years, particularly in virtual assistants, automated customer support, and smart home devices. However, traditional systems often struggle with providing accurate and contextually rich responses when faced with complex or specialized queries. These limitations are exacerbated by reliance on a single, static knowledge base or by the inability to leverage multiple diverse data sources in real time.
[005] The retrieval-augmented generation (RAG) model addresses this issue by combining two key components: information retrieval (which involves searching and retrieving relevant information) and generative models (which create new responses based on the retrieved information).
[006] Therefore, there is a need for an improved system that combines voice input, retrieval-augmented generation, and the ability to query multiple knowledge sources in sequence, all while providing real-time generative responses through an output device.
[007] The present invention solves these problems and provides a cost-effective apparatus and method using Retrieval-Augmented Generation (RAG), AI processing, and machine learning for providing real-time responses based on user input.
OBJECT OF THE INVENTION
[008] One of the objectives of the present invention is to overcome all the aforementioned and existing drawbacks of the prior art by providing an apparatus and method for real-time response generation based on user input, utilizing Retrieval-Augmented Generation, voice recognition, and adaptive AI responses.
[009] Another objective of the present invention is to provide an interactive and personalized teaching tool for modern education.
[0010] Yet another objective of the present invention is to provide an apparatus and method for enhancing teaching and learning by integrating a three-layer RAG system, enabling efficient knowledge retrieval from past interactions, a vast textbook database and online resources.
[0011] Still another objective of the invention is to provide an apparatus that is a compact, affordable, real-time AI-powered assistant with adaptive learning capabilities, ensuring seamless and intelligent educational support.
[0012] Another objective of the invention is to provide an all-in-one, portable apparatus that offers voice-based assistance, seamless integration, and real-time adaptive responses to enhance teaching and learning experiences.
[0013] A further objective of the invention is to provide an AI-driven educational technology, i.e., an apparatus and method, specifically designed to provide real-time, interactive assistance in learning environments, combining voice recognition, adaptive AI responses, and seamless user interaction to enhance educational experiences.
SUMMARY OF THE INVENTION
[0014] The invention relates to an apparatus and method for generating responses to user queries through a combination of voice recognition, information retrieval, retrieval-augmented generation, and generative AI processing.
[0015] The apparatus includes an input device, such as a microphone, that captures voice commands or queries from the user. A Raspberry Pi serves as the central processing unit, running the necessary software libraries and algorithms to perform voice recognition, information retrieval from multiple knowledge sources, and generative AI-based response generation.
[0016] In an embodiment the invention relates to an apparatus and method utilizing retrieval-augmented generation (RAG), a method in which a query is sequentially searched across three distinct knowledge sources. Relevant information retrieved from these sources is then used by a generative AI model to craft a response, which is subsequently delivered through the output device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] Having thus described the subject matter of the present invention in general terms, reference will now be made to the accompanying drawings, which are not necessarily drawn to scale, and wherein
Figure 1 illustrates a block diagram of the apparatus, showing the interconnection between the input device, Raspberry Pi, output device, power source, and communication interface;
Figure 2 illustrates a flowchart of the method, showing the sequence of retrieving information from three knowledge sources and generating a response based on that information in accordance with an embodiment of the invention; and
Figure 3 illustrates a flowchart of the method in a specific embodiment, showing the sequential querying of the three knowledge sources, with fallback from one source to the next, and the generation of a response based on the retrieved information.
DESCRIPTION OF THE INVENTION
[0018] The subject matter of the present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all embodiments of subject matter of the present invention are shown. The subject matter of the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Indeed, many modifications and other embodiments of the subject matter of the present invention set forth herein will come to mind to one skilled in the art to which the subject matter of the present invention pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. All illustrations of the drawings are for the purpose of describing selected versions of the present invention and are not intended to limit the scope of the present invention. Therefore, it is to be understood that the subject matter of the present invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims.
[0019] As a preliminary matter, it will readily be understood by one having ordinary skill in the relevant art that the present disclosure has broad utility and application. As should be understood, any embodiment may incorporate only one or a plurality of the above-disclosed aspects of the disclosure and may further incorporate only one or a plurality of the above-disclosed features. Furthermore, any embodiment discussed and identified as being “preferred” is considered to be part of a best mode contemplated for carrying out the embodiments of the present disclosure. Other embodiments also may be discussed for additional illustrative purposes in providing a full and enabling disclosure. Moreover, many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present disclosure.
[0020] Furthermore, it is important to note that, as used herein, “a” and “an” each generally denotes “at least one”, but does not exclude a plurality unless the contextual use dictates otherwise. When used herein to join a list of items, “or” denotes “at least one of the items”, but does not exclude a plurality of items of the list. Finally, when used herein to join a list of items, “and” denotes “all of the items of the list”.
[0021] The present disclosure relates to an artificial intelligence apparatus and method utilizing Retrieval-Augmented Generation (RAG) providing real time response generation with adaptive learning capabilities for seamless and intelligent educational support. More specifically, the disclosure describes a self-contained, low-power computing device equipped with input/output peripherals and software modules for receiving user queries, retrieving relevant information from multiple knowledge sources, and delivering context-aware responses.
[0022] In an aspect, the invention provides an artificial intelligence apparatus and method utilizing Retrieval-Augmented Generation (RAG) providing real time response generation, with adaptive learning capabilities for seamless and intelligent educational support.
[0023] In an embodiment, the apparatus comprises a computing device, memory and storage components, one or more input devices, one or more output devices, a power system, and communication interfaces.
[0024] The computing device serves as the central processing unit (CPU) of the apparatus and is a single-board computer or embedded processing platform. The computing device may be a processor such as a Raspberry Pi or equivalent microprocessor platform capable of running an operating system and executing machine learning models.
[0025] In an embodiment, the computing device is a Raspberry Pi-based processing system or Arduino-based system or Jetson Nano configured to execute both the retrieval and generation operations locally or in conjunction with a remote server.
[0026] The memory and storage components, operatively coupled to the processor, are configured to store data and executable instructions, including but not limited to software, models, retrieved data, and historical interactions. The memory and storage components may be solid-state drives, memory cards, or onboard flash storage.
[0027] The one or more input devices are configured to receive a user query. The devices may include a microphone for voice-based input and/or a text input interface, such as a touchscreen or keyboard. They may further include a speech recognition module configured to convert spoken queries into text.
[0028] The one or more output devices are configured to display or transmit a generated natural language response, i.e., to deliver generated responses. These may include a visual display unit (e.g., an LCD screen) or an audio output system (e.g., a speaker or headphone jack). The output device may include a text-to-speech module configured to vocalize the generated response.
[0029] The power source provides the necessary power to the apparatus and may be, for example, a rechargeable battery, an AC power adapter, or an external power supply.
[0030] The communication interfaces enable wireless connectivity for the apparatus. The communication interfaces may be wireless modules (such as Wi-Fi or Bluetooth) or wired network connectivity. They support access to remote and real-time knowledge sources over the internet, facilitating dynamic response generation.
[0031] The apparatus may further include a manual or digital power control switch allowing the user to activate or deactivate the system as needed and an internal bus connecting all the components for data and power transmission.
[0032] In an embodiment, software modules executing on the computing device include:
a) A voice and text input processing module, which uses general-purpose libraries for speech-to-text and audio capture.
b) A retrieval module executable by the processor, configured to retrieve a plurality of responses relevant to the user query from one or more external or local knowledge sources.
c) A response generation module that performs natural language processing and generates relevant responses using AI models.
d) A retrieval-augmented generation (RAG) pipeline, which arranges the query and response generation process using context-aware mechanisms.
[0033] The retrieval module may use vector-based similarity search techniques (e.g., through embedded retrieval frameworks) for matching semantically similar documents.
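By way of a non-limiting illustration only, a minimal sketch of such vector-based similarity matching is given below. It assumes that passage embeddings have already been produced by some embedding model; the low-dimensional vectors and function names are illustrative placeholders and not part of the claimed apparatus.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between a query embedding and a stored embedding."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def top_k_matches(query_vec: np.ndarray, doc_vecs: list[np.ndarray], k: int = 5) -> list[int]:
    """Return the indices of the k stored embeddings most similar to the query embedding."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Illustrative low-dimensional embeddings; a deployed system would use a
# sentence-embedding model producing vectors of several hundred dimensions.
doc_vecs = [np.array([0.1, 0.9, 0.0, 0.2]), np.array([0.8, 0.1, 0.3, 0.0])]
query_vec = np.array([0.2, 0.8, 0.1, 0.1])
print(top_k_matches(query_vec, doc_vecs, k=1))
```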
[0034] The response generation module utilizes machine learning models (e.g., TensorFlow-based) to generate a coherent response based on retrieved data.
[0035] The three-layered retrieval-augmented generation (RAG) method employed in the apparatus leverages three distinct layers, i.e., three distinct sources of knowledge, which may be accessed in parallel or in sequence, and a query may be answered from one, two, or all three of them.
a) Internal Knowledge Base – This includes previously stored interactions or responses to similar user queries. These are stored locally and indexed for semantic similarity using a vector database engine.
b) Structured Educational Resources – These include digital textbooks or curated knowledge repositories, which are indexed and queried for relevant excerpts that supplement the internal database.
c) Online and Internet-Based Sources – Through internet access, the apparatus can perform live searches to retrieve relevant up-to-date information. This may include web articles, forums, or other publicly available knowledge bases.
[0036] The apparatus uses a range of software tools and frameworks to facilitate audio processing, information retrieval, and intelligent generation. These include:
a) Audio and speech processing frameworks, enabling real-time speech-to-text transcription and voice command recognition.
b) Vector-based retrieval engines, used for storing and retrieving semantically similar documents or interactions, supporting the core RAG mechanism.
c) Deep learning frameworks, used for implementing and executing machine learning models for language understanding and generation.
[0037] The computing device runs Python-based AI software, and the deployed Python code handles the pipeline: speech recognition → text conversion → query processing → response generation.
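A minimal sketch of how these stages may be wired together is given below; each function is a placeholder standing in for the corresponding software module of the apparatus, and the example query and passage are illustrative only.

```python
# Illustrative end-to-end pipeline:
# speech recognition -> text conversion -> query processing -> response generation.

def recognize_speech() -> bytes:
    """Capture raw audio from the microphone (placeholder for the audio capture module)."""
    return b"raw-audio"

def convert_to_text(audio: bytes) -> str:
    """Convert the captured speech to text (placeholder for the speech-to-text module)."""
    return "Explain Newton's first law"

def process_query(query_text: str) -> list[str]:
    """Retrieve relevant passages from the knowledge sources (placeholder for the retriever)."""
    return ["A body remains at rest or in uniform motion unless acted on by an external force."]

def generate_response(query_text: str, passages: list[str]) -> str:
    """Generate an answer conditioned on the query and retrieved passages (placeholder)."""
    return f"{query_text}: {' '.join(passages)}"

def run_pipeline() -> str:
    audio = recognize_speech()
    query = convert_to_text(audio)
    passages = process_query(query)
    return generate_response(query, passages)

if __name__ == "__main__":
    print(run_pipeline())
```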
[0038] These software modules are executed locally on the computing platform and are optimized for performance on low-power devices.
[0039] The apparatus may be configured to run entirely offline, relying solely on local knowledge bases, or fully online with cloud-based inference capabilities.
[0040] The apparatus is configured to operate in both portable and stationary modes and may function offline or online, depending on the selected knowledge sources.
[0041] An aspect of the invention relates to a method for generating natural language responses (contextually relevant natural language responses) using retrieval-augmented generation, the method comprising the following steps.
[0042] Receiving an input query request from a user via an input device. The input query may be a spoken query from the user, and the input device may be an audio input device such as a microphone. The voice signal is then processed to convert the speech into text using voice recognition software such as SpeechRecognition or pyaudio.
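A minimal sketch of this voice-capture and speech-to-text step using the SpeechRecognition package (which relies on pyaudio for microphone access) is shown below; the choice of the Google Web Speech recognizer backend and the error handling are illustrative assumptions, and an offline recognizer could be substituted.

```python
import speech_recognition as sr  # pip install SpeechRecognition pyaudio

def capture_query_text(timeout_s: float = 5.0) -> str | None:
    """Record a spoken query from the microphone and return it as text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)      # compensate for background noise
        audio = recognizer.listen(source, timeout=timeout_s)
    try:
        # Illustrative recognizer backend; an offline engine could be used instead.
        return recognizer.recognize_google(audio)
    except (sr.UnknownValueError, sr.RequestError):
        return None  # speech not understood or recognition service unreachable

if __name__ == "__main__":
    print(capture_query_text())
```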
[0043] Retrieving a plurality of text passages relevant to the input query from one or more external or local knowledge sources, by a retriever module/model executed on the computing device, the passages being selected based on a similarity measure between the query embedding and stored embedding vectors.
[0044] The knowledge sources comprise internally stored previous interactions with the apparatus, textbooks, and online resources, as described below.
a) Internal Knowledge Base – This includes previously stored interactions or responses to similar user queries. These are stored locally and indexed for semantic similarity using a vector database engine.
b) Structured Educational Resources – This includes digital textbooks or curated knowledge repositories, which are indexed and queried for relevant excerpts that supplement the internal database.
c) Online and Internet-Based Sources – Through internet access, the apparatus can perform live searches to retrieve relevant up-to-date information. This may include web articles, forums, or other publicly available knowledge bases.
[0045] All retrieved data is indexed and stored using a vector storage and retrieval module, which enables efficient semantic and vector-based information retrieval.
[0046] In an embodiment, the vector storage and retrieval module is ChromaDB, i.e., information retrieval is facilitated by ChromaDB.
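A minimal sketch of using ChromaDB for this vector storage and retrieval is given below; the collection name, persistence path, documents, and identifiers are illustrative assumptions, and ChromaDB's default embedding function is used unless another is supplied.

```python
import chromadb  # pip install chromadb

# Persist the knowledge base on local storage so that indexed interactions
# survive restarts (the path is an illustrative assumption).
client = chromadb.PersistentClient(path="./knowledge_store")
collection = client.get_or_create_collection(name="internal_knowledge_base")

# Index previously stored interactions and textbook excerpts.
collection.add(
    documents=[
        "Q: What is photosynthesis? A: The conversion of light energy into chemical energy.",
        "Newton's first law: a body remains at rest or in uniform motion unless acted on by a force.",
    ],
    metadatas=[{"source": "past_interaction"}, {"source": "textbook"}],
    ids=["doc-0001", "doc-0002"],
)

# Retrieve the passages most semantically similar to a new user query.
results = collection.query(query_texts=["How do plants make their food?"], n_results=2)
print(results["documents"])
```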
[0047] The next step involves generating a response using AI processing: the aggregated data is processed by a generative AI model. The model synthesizes a coherent and contextually relevant response based on the information from all three sources.
[0048] In an embodiment, the AI processing is implemented using TensorFlow.
[0049] Generating a natural language output that is conditioned on both the input query and the retrieved passages, by means of the generative language model.
[0050] Transmitting the response, i.e., outputting the generated natural language output to the user. The output is presented to the user via one or more output devices, such as by displaying it on a screen or reading it aloud via a speaker.
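As a minimal sketch of reading the response aloud, the example below uses the pyttsx3 offline text-to-speech library; this particular library and the speaking-rate setting are assumptions, since the disclosure refers only to a text-to-speech module.

```python
import pyttsx3  # pip install pyttsx3 -- offline text-to-speech engine (illustrative choice)

def speak_response(response_text: str) -> None:
    """Vocalize the generated natural language response through the speaker."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 160)  # moderate speaking rate (assumed value)
    engine.say(response_text)
    engine.runAndWait()

if __name__ == "__main__":
    speak_response("A body remains at rest unless acted upon by an external force.")
```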
[0051] Certain embodiments of the invention are explained below using illustrative figures 1-3.
[0052] In an embodiment, referring to illustrative fig. 1, which illustrates the apparatus for generating AI-powered real-time responses using retrieval-augmented generation of the present invention, the apparatus (100) comprises a computing device (110), memory and storage components (120), an input device (130), an output device (140), a power source (150) and a communication interface (160).
[0053] The computing device (110) serves as the central processing unit (CPU) of the system. This includes a processor such as a Raspberry Pi or equivalent microprocessor platform. It executes key software components responsible for voice recognition, data retrieval, and AI processing. It also manages communication between hardware components and orchestrates the operation of the entire apparatus.
[0054] In an embodiment, the computing device is a Raspberry Pi-based processing system.
[0055] The memory and storage components (120), operatively coupled to the processor, are configured to store data and executable instructions, including but not limited to software, models, retrieved data, and historical interactions.
[0056] In an embodiment, the memory and storage components may be solid-state drives, memory cards, or onboard flash storage.
[0057] The input device (130) is configured to receive a user query. The input device includes a voice-capturing hardware component, such as a microphone, which captures a user's spoken input. The captured voice signal is processed using speech-to-text software (e.g., SpeechRecognition, pyaudio) to convert it into textual data for further processing.
[0058] The output device (140) is configured to display or transmit a generated natural language response, i.e., to deliver generated responses. Depending on the implementation, this may be a speaker for audio output or a screen for visual display. The final response generated by the AI system is delivered to the user through this device.
[0059] The power source (150) supplies power to the apparatus. The power source can be a rechargeable battery, an AC power adapter, or any external power supply, providing consistent energy to the Raspberry Pi and associated modules of the apparatus.
[0060] In an embodiment, the built-in battery and charging module provide 3 to 4 hours of continuous operation, allowing uninterrupted usage without external power dependency.
[0061] The communication interface (160), such as a Wi-Fi module, enables wireless connectivity for the apparatus. It supports access to remote and real-time knowledge sources over the internet, facilitating dynamic response generation.
[0062] In an embodiment, Bluetooth/Wi-Fi connectivity enables integration with external devices such as educational tools, servers, or a classroom network.
[0063] Referring to illustrative figure 2, which describes the illustrative method (200) for generating natural language responses (contextually relevant natural language responses) using retrieval-augmented generation. Referring to fig. 1 and fig. 2, the method comprises the following steps:
[0064] Input/Voice Capture (210): The apparatus captures a voice query via the input device (130). Speech-to-text conversion is applied to transform the voice signal into a textual format for further processing.
[0065] Retrieval from Knowledge Sources (220): The system queries multiple knowledge sources in the following sequence:
a) Internal Knowledge Base (221): This includes previously stored interactions or responses to similar user queries. These are stored locally and indexed for semantic similarity using a vector database engine.
b) Structured Educational Resources (222): This includes digital textbooks or curated knowledge repositories, which are indexed and queried for relevant excerpts that supplement the internal database.
c) Online and Internet-Based Sources (223): Through internet access, the apparatus can perform live searches to retrieve relevant up-to-date information. This may include web articles, real-time news forums, or other publicly available knowledge bases.
[0066] All retrieved data is indexed and stored using ChromaDB (230), which enables efficient semantic and vector-based information retrieval.
[0067] Generative AI Processing (240): The aggregated data is processed by a generative AI model, implemented by TensorFlow (241). The model synthesizes a coherent and contextually relevant response based on the information from all three sources.
[0068] Output Response (250): The final response is conveyed to the user via the output device (140), either in spoken form (via a speaker) or visually (via a screen), depending on the system configuration.
[0069] In a specific embodiment of the invention, the method (300) referring to illustrative fig. 3, comprises the following steps.
[0070] Starting the power source of the apparatus; this step initiates booting of the processor (Raspberry Pi) and loading of the necessary software modules.
[0071] User inputs voice query (310) via input device and the voice signal is then processed to convert the speech into text (311) using voice recognition software such as SpeechRecognition or pyaudio.
[0072] By leveraging multiple knowledge sources (320) through the retrieval-augmented generation (RAG) model (321), the method can generate more accurate and contextually relevant responses.
[0073] The retriever module executed on the computing device checks for relevant responses in the internal knowledge base (322); this includes previously stored interactions or responses to similar user queries. These are stored locally and indexed for semantic similarity using a vector database engine.
[0074] If a relevant response is found in the internal knowledge base, a natural language output (331) is generated that is conditioned on both the input query and the retrieved passages, by means of the generative language model.
[0075] If no relevant response is found in the internal knowledge base, the user query is checked against the structured educational resources (323), which include textbooks and curated knowledge repositories. If a relevant response is found in the structured educational resources, the top five responses are summarized on the basis of the relevant information found in this source, and a natural language output (332) is generated that is conditioned on both the input query and the retrieved passages, by means of the generative language model.
[0076] If no relevant response is found in the structured educational resources, the user query is checked against the online and internet-based sources (324), which perform live searches to retrieve relevant, up-to-date information. These include web articles, forums, or other publicly available knowledge bases. The top five responses are then summarized on the basis of the relevant information found in this source, and a natural language output (333) is generated that is conditioned on both the input query and the retrieved passages, by means of the generative language model.
[0077] The response generated by querying the structured educational resources or the online and internet-based sources is stored/added to the internal knowledge base.
[0078] If no response is generated after querying the three knowledge sources, the user is allowed to tweak or modify the input query, and the method runs all the above steps again.
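By way of a non-limiting sketch only, the sequential three-layer fallback described in the preceding paragraphs may be summarized as follows; the in-memory knowledge-source class, keyword-overlap search, top-five summarization, and template "generation" are illustrative placeholders for the vector store and generative model described elsewhere in this specification.

```python
# Illustrative sequential fallback: internal knowledge base (322) ->
# structured educational resources (323) -> online sources (324),
# with generated answers stored back into the internal knowledge base.

class KnowledgeSource:
    """In-memory stand-in for a knowledge layer, with naive keyword search."""

    def __init__(self, passages: list[str]):
        self.passages = passages

    def search(self, query: str) -> list[str]:
        terms = set(query.lower().split())
        return [p for p in self.passages if terms & set(p.lower().split())]

    def store(self, query: str, response: str) -> None:
        self.passages.append(f"Q: {query} A: {response}")


def generate_output(query: str, passages: list[str]) -> str:
    """Placeholder for the generative language model conditioned on query and passages."""
    return f"{query}: {' '.join(passages)}"


def answer_query(query: str, internal_kb: KnowledgeSource,
                 textbooks: KnowledgeSource, web: KnowledgeSource) -> str | None:
    # Layer 1: previously stored interactions.
    passages = internal_kb.search(query)
    if passages:
        return generate_output(query, passages)

    # Layer 2: structured educational resources, then Layer 3: online sources.
    passages = textbooks.search(query) or web.search(query)
    if passages:
        summary = passages[:5]  # summarize the top five responses (placeholder)
        response = generate_output(query, summary)
        internal_kb.store(query, response)  # enrich the internal knowledge base
        return response

    return None  # no response found; the user may rephrase the query and retry


if __name__ == "__main__":
    internal = KnowledgeSource([])
    books = KnowledgeSource(["photosynthesis converts light energy into chemical energy"])
    online = KnowledgeSource([])
    print(answer_query("explain photosynthesis", internal, books, online))
```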
[0079] Transmitting the output response (340), i.e., outputting the generated natural language output (330) to the user, which may also involve converting text to speech. The output is presented to the user via one or more output devices, such as by displaying it on a screen or reading it aloud via a speaker.
[0080] Additionally, the apparatus may be integrated with sensors (such as motion sensors or cameras) to enable more advanced interactive features like gesture recognition or visual feedback.
[0081] Further, the apparatus may be integrated with cloud-based AI services for even more advanced machine learning capabilities, enabling the device to offer more personalized and adaptive learning experiences.
[0082] The apparatus and method use a three-layer Retrieval-Augmented Generation (RAG) system, ensuring fast and relevant feedback without requiring high computational resources.
[0083] The apparatus and method adopt real-time adaptive interaction: the apparatus leverages voice recognition, speech-to-text processing, and machine learning to offer instant, adaptive responses, improving with continued usage for a personalized learning experience.
[0084] Its compact, portable design enhances usability in diverse teaching environments. With the integration of essential components such as a microphone, speaker, display, and battery, the apparatus is low-cost. The apparatus has an energy-efficient design that operates for 3 to 4 hours unplugged, ensuring seamless, uninterrupted teaching assistance and intelligent teaching support.
[0085] Thus, the apparatus is a cost-effective, single, compact, affordable, scalable, and user-friendly solution with offline capabilities that integrates seamlessly into educational environments, making advanced AI-based learning assistance accessible to all.
[0086] The apparatus and method have adaptive AI capability: The machine learning model is trained to adapt to user behavior. The apparatus learns from interactions and continuously improves its responses, providing personalized feedback based on prior sessions and context. AI continuously updates its database, refining answers based on previous usage patterns.
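As a minimal sketch of this continuous update of the knowledge base, the example below appends the latest interaction to a ChromaDB collection like the one shown earlier in this description; the collection name, persistence path, metadata fields, and identifier scheme are illustrative assumptions.

```python
import uuid
import chromadb  # pip install chromadb

client = chromadb.PersistentClient(path="./knowledge_store")  # illustrative path
collection = client.get_or_create_collection(name="internal_knowledge_base")

def remember_interaction(query: str, response: str) -> None:
    """Store the latest question-answer pair so that future semantically
    similar queries can be answered from the internal knowledge base."""
    collection.add(
        documents=[f"Q: {query}\nA: {response}"],
        metadatas=[{"source": "past_interaction"}],
        ids=[str(uuid.uuid4())],
    )

remember_interaction(
    "What is Newton's first law?",
    "A body remains at rest or in uniform motion unless acted on by an external force.",
)
```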
[0087] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0088] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open-ended as opposed to limiting. As examples of the foregoing: the term “including” should be read as meaning “including, without limitation” or the like; the term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive limiting list thereof; and adjectives such as “conventional,” “traditional,” “standard,” “known” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, a group of items linked with the conjunction “and” should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as “and/or” unless expressly stated otherwise. Similarly, a group of items linked with the conjunction “or” should not be read as requiring mutual exclusivity among that group, but rather should also be read as “and/or” unless expressly stated otherwise. Furthermore, although items, elements or components of the disclosure may be described or claimed in the singular, the plural is contemplated to be within the scope thereof unless limitation to the singular is explicitly stated. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.
[0089] All publications, patent applications, patents, and other references mentioned in the specification are indicative of the level of those skilled in the art to which the presently disclosed subject matter pertains. All publications, patent applications, patents, and other references are herein incorporated by reference to the same extent as if each individual publication, patent application, patent, and other reference were specifically and individually indicated to be incorporated by reference. It will be understood that, although a number of patent applications, patents, and other references are referred to herein, such reference does not constitute an admission that any of these documents forms part of the common general knowledge in the art. Although the foregoing subject matter has been described in some detail by way of illustration and example for purposes of clarity of understanding, it will be understood by those skilled in the art that certain changes and modifications can be practiced within the scope of the appended claims.
Claims: We claim:
1. An apparatus utilizing Retrieval-Augmented Generation (RAG) for adaptive learning, the apparatus comprising a computing device, memory and storage components, one or more input devices, one or more output devices, a power source, and communication interfaces;
wherein:
the computing device serves as the central processing unit (CPU) of the apparatus, capable of running an operating system, processing input, and executing machine learning models;
the memory and storage components, operatively coupled to the processor, configured to store data/executable instructions;
one or more input devices configured to receive a user query;
one or more output devices configured to display or transmit a generated natural language response;
the power source provides the necessary power to the apparatus; and
the communication interfaces enable wireless connectivity for the apparatus;
software modules executing on the computing device, including:
a) a voice and text input processing module, which uses general-purpose libraries for speech-to-text and audio capture;
b) a retrieval module executable by the processor, configured to retrieve a plurality of responses relevant to the user query from one or more knowledge sources;
c) a response generation module that performs natural language processing and generates relevant responses using AI models; and
d) a retrieval-augmented generation (RAG) pipeline, which arranges the querying and response generation process using context-aware mechanisms;
and the retrieval-augmented generation (RAG) is achieved by querying three distinct knowledge sources.
2. The apparatus as claimed in claim 1, wherein the processing unit is a Raspberry Pi.
3. The apparatus as claimed in claim 1, wherein the input device is a microphone.
4. The apparatus as claimed in claim 1, wherein the memory and storage components are solid-state drives, memory cards, or onboard flash storage.
5. The apparatus as claimed in claim 1, wherein the output device is a speaker for audio output or a screen for visual display.
6. The apparatus as claimed in claim 1, wherein the power source is a rechargeable battery or an AC power adapter and the communication interface is a Wi-Fi module.
7. The apparatus as claimed in claim 1, wherein the three distinct knowledge sources are an internal knowledge base, structured educational resources, and online and internet-based sources.
8. A method for generating relevant natural language responses using retrieval-augmented generation for adaptive learning, the method comprising steps of:
a. capturing a voice query using an input device;
b. converting the voice query into text using voice recognition software;
c. querying three distinct knowledge sources using retrieval-augmented generation;
d. retrieving a plurality of text passages relevant to the input query from one or more knowledge sources, by a retriever module/model executed on the computing device;
e. generating a response using AI processing based on the retrieved information; and
f. delivering the generated response to the user via an output device;
wherein the method leverages multiple knowledge sources through the retrieval-augmented generation (RAG); and the knowledge sources include an internal knowledge base, structured educational resources, and online and internet-based sources.
9. The method as claimed in claim 8, wherein the AI processing is performed using TensorFlow and the information retrieval is facilitated by ChromaDB.
10. The method as claimed in claim 8, wherein the voice recognition software is SpeechRecognition or pyaudio.