Abstract: Disclosed is a system (100) to support primary education for students in regional language. The system (100) includes a user device (102) configured to receive a query including a speech input from a user and identify an operational mode. A local server (104) with first processing circuitry (120) is coupled to the user device (102) for offline mode operation. The first processing circuitry (120) converts speech to text, generates an answer using an AI model with a local dataset (122), creates a text response, and converts it to speech. A server (105) with second processing circuitry (124) is coupled to the user device (102) for online mode operation. The second processing circuitry (124) uses online APIs for speech-to-text conversion, generates an answer using pre-trained AI models, creates a text response, and converts it to speech using online Text-to-Speech APIs. FIG. 1 is the reference figure.
DESC:FIELD OF INVENTION
The present disclosure relates to educational systems and methods, and more particularly to an AI-based virtual assistant system and method for supporting primary education in regional languages.
BACKGROUND
The field of education has been rapidly evolving with the integration of technology to enhance learning experiences, particularly in primary education. As the importance of early childhood education becomes increasingly recognized, there is a growing need for innovative solutions that can provide personalized and engaging learning experiences for young students. This need is particularly pronounced in regions where multiple languages are spoken, and students may require educational support in their native or regional languages.
Traditional educational methods often struggle to provide individualized attention to each student, especially in classroom settings with limited resources. While some technological solutions have been developed to address this issue, many of these systems are primarily designed for use in English or other widely spoken languages. This creates a significant barrier for students in regions where the primary language of instruction is not one of these major languages. Additionally, existing educational technology solutions often require constant internet connectivity, which can be unreliable or unavailable in many rural and remote areas.
Furthermore, current AI-based educational assistants typically lack the ability to seamlessly switch between online and offline modes, limiting their usefulness in areas with intermittent internet access. Many existing systems also struggle with accurately processing and responding to speech input in regional languages, particularly when dealing with the unique linguistic characteristics of young learners. This can lead to frustration and decreased engagement among students who are still developing their language skills.
Therefore, there exists a need for a technical solution that solves the aforementioned problems of conventional systems and methods for supporting primary education in regional languages.
SUMMARY
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In an aspect of the present disclosure, a system to support primary education for students in regional language is disclosed. The system includes a user device configured to receive a query comprising a speech input from a user and identify an operational mode, either an online mode or an offline mode. The system includes a local server having a first processing circuitry, coupled to the user device. When the operational mode is the offline mode, the first processing circuitry is configured to convert the speech input of the query to a text format using a locally executed Speech-to-Text model, generate an answer to the query using an Artificial Intelligence (AI) model that utilizes a local dataset, generate a text response to the query based on the generated answer using an automation script, generate an output based on the text response using one or more locally executed AI models, and convert the generated output into an audio file to provide answers via speech. The system also includes a server having a second processing circuitry, coupled to the user device. When the operational mode is the online mode, the second processing circuitry is configured to convert the speech input of the query to a text format using a Speech-to-Text online Application Programming Interface (API), generate an answer to the query using one or more pre-trained Artificial Intelligence (AI) models, generate a text response to the query based on the generated answer using the AI model, generate an output based on the text response using one or more Text-to-Speech online APIs, and convert the generated output into an audio file to provide the text response via speech.
In some aspects of the present disclosure, the speech input is a voice note.
In some aspects of the present disclosure, the user device includes a sensor unit configured to detect a voice of the user when the user provides the speech input. The sensor unit is selected from one of a Condenser Microphone, a MEMS (Microelectromechanical Systems) Microphone, a DSP (Digital Signal Processor) Chip, a Bluetooth Microphone, an Array Microphone, or a combination thereof.
In some aspects of the present disclosure, the local dataset is associated with students of Standard 1.
In some aspects of the present disclosure, the first processing circuitry and the second processing circuitry are configured to generate an interactive visual associated with the generated audio files to explain the text response in an interactive manner.
In some aspects of the present disclosure, the first and second processing circuitries are configured to identify an intent associated with the query using the AI model. The intent is one of a greeting or a mathematical operation.
In some aspects of the present disclosure, when the intent associated with the query is the mathematical operation, the first and second processing circuitries are configured to extract one or more entities from the query using the AI model such that the one or more entities are utilized to generate the answer.
In another aspect of the present disclosure, a method for supporting primary education for students in regional language is disclosed. The method includes detecting, using a sensor unit of a user device, a voice of a user when the user provides a speech input. The method includes receiving, using the user device, a query comprising the speech input from the user. The method includes identifying, using the user device, an operational mode, either an online mode or an offline mode. The method includes enabling a local server having first processing circuitry when the operational mode is the offline mode. The first processing circuitry converts the speech input of the query to a text format using a locally executed Speech-to-Text model, generates an answer to the query using an Artificial Intelligence (AI) model that utilizes a local dataset, generates a text response to the query based on the generated answer using an automation script, generates an output based on the text response using one or more locally executed AI models, and converts the generated output into an audio file to provide answers via speech. The method includes enabling a server having second processing circuitry when the operational mode is the online mode. The second processing circuitry converts the speech input of the query to a text format using a Speech-to-Text online Application Programming Interface (API), generates an answer to the query using one or more pre-trained Artificial Intelligence (AI) models, generates a text response to the query based on the generated answer using the AI model, generates an output based on the text response using one or more Text-to-Speech online APIs, and converts the generated output into an audio file to provide the text response via speech.
In some aspects of the present disclosure, the method includes generating, using the first processing circuitry and the second processing circuitry, an interactive visual associated with the generated audio files to explain the text response in an interactive manner.
In some aspects of the present disclosure, the method includes identifying, using the first and second processing circuitries, an intent associated with the query using the AI model. The intent is one of a greeting or a mathematical operation. When the intent associated with the query is the mathematical operation, the method includes extracting, using the first and second processing circuitries, one or more entities from the query using the AI model such that the one or more entities are utilized to generate the answer.
The foregoing general description of the illustrative embodiments and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
BRIEF DESCRIPTION OF FIGURES
The above and still further features and advantages of aspects of the present disclosure will become apparent upon consideration of the following detailed description of aspects thereof, especially when taken in conjunction with the accompanying drawings, wherein:
FIG. 1 illustrates a system diagram for supporting primary education in regional languages, in accordance with an aspect of the present disclosure;
FIG. 2 illustrates a block diagram of a processing unit for processing voice inputs and queries, in accordance with an aspect of the present disclosure;
FIG. 3 illustrates a block diagram of a local server for processing speech inputs and generating speech outputs, in accordance with an aspect of the present disclosure;
FIG. 4 illustrates a block diagram of a server system for processing speech inputs and generating speech outputs, in accordance with an aspect of the present disclosure;
FIG. 5 illustrates a system diagram for processing user input in online and offline modes, in accordance with an aspect of the present disclosure;
FIG. 6 illustrates a system diagram for an online speech processing system, in accordance with an aspect of the present disclosure;
FIG. 7 illustrates a system diagram for an offline processing of speech input and generating computed answers, in accordance with an aspect of the present disclosure; and
FIG. 8 is a flowchart illustrating a method for processing speech inputs and generating speech outputs in both online and offline modes, in accordance with an aspect of the present disclosure.
DETAILED DESCRIPTION
The following description sets forth exemplary aspects of the present disclosure. It should be recognized, however, that such a description is not intended as a limitation on the scope of the present disclosure. Rather, the description also encompasses combinations and modifications to those exemplary aspects described herein.
This section is intended to provide an explanation and description of various possible aspects of the present disclosure. The aspects used herein, and the various features and advantageous details thereof are explained more fully with reference to non-limiting aspects illustrated in the accompanying drawing/s and detailed in the following description. The examples used herein are intended only to facilitate an understanding of ways in which the aspects may be practiced and to enable the person skilled in the art to practice the aspects used herein. Also, the examples/aspects described herein should not be construed as limiting the scope of the aspects herein.
The various aspects including the example aspects are now described more fully with reference to the accompanying drawings, in which the various aspects of the disclosure are shown. The disclosure may, however, be embodied in different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects are provided so that this disclosure is thorough and complete, and fully conveys the scope of the disclosure to those skilled in the art. In the drawings, the sizes of components may be exaggerated for clarity.
The subject matter of example aspects, as disclosed herein, is described with specificity to meet statutory requirements. However, the description itself is not intended to limit the scope of this patent. Rather, the inventor/inventors have contemplated that the subject matter might also be embodied in other ways, including different features or combinations of features similar to the ones described in this document, in conjunction with other technologies. Generally, the various aspects including the example aspects relate to an AI-based virtual assistant system and method for supporting primary education for students in regional languages.
The aspects herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting aspects that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as not to unnecessarily obscure the aspects herein. The examples used herein are intended merely to facilitate an understanding of ways in which the aspects herein may be practiced and to further enable those of skill in the art to practice the aspects herein. Accordingly, the examples should not be construed as limiting the scope of the aspects herein.
The present disclosure provides a system and method for an AI-based virtual assistant designed to support primary education in regional languages, specifically focusing on the Kannada language. The system is capable of operating in both online and offline modes, thereby ensuring its functionality in areas with limited or no internet connectivity. The system is designed to provide personalized, interactive learning experiences for young students, particularly those in rural areas. The system includes a user device, a local server, and a server, each equipped with processing circuitry. The processing circuitry is configured to convert speech input into text format, generate answers to queries using an AI model, generate responses to queries based on the generated answers, and convert the generated output into an audio file to provide answers via speech. The system's ability to operate in both online and offline modes, coupled with its focus on delivering personalized, interactive learning experiences in a regional language, makes it a valuable tool for enhancing the accessibility and availability of educational resources for young students in rural areas.
The present disclosure introduces an innovative AI-based virtual assistant system specifically designed to address the unique challenges of primary education in regional languages, with a particular focus on Kannada. This system represents a significant advancement in educational technology, offering a versatile solution that can function effectively in both online and offline environments. This dual-mode capability is crucial for ensuring that educational resources remain accessible to students in rural areas where internet connectivity may be unreliable or non-existent.
At the core of this system is a sophisticated architecture comprising three main components: a user device, a local server, and a remote server. Each of these components is equipped with dedicated processing circuitry, enabling the system to perform complex tasks such as speech-to-text conversion, natural language processing, and text-to-speech synthesis. This multi-layered approach allows for seamless operation regardless of internet availability, ensuring that students can continue their learning activities without interruption.
The system's AI model is a key feature, capable of understanding and responding to queries in Kannada. This localization aspect is particularly important for young learners who may be more comfortable expressing themselves in their native language. By processing speech input, generating contextually appropriate answers, and delivering responses via synthesized speech, the system creates an interactive and engaging learning environment that closely mimics natural conversation.
Furthermore, the system's ability to provide personalized learning experiences sets it apart from traditional educational methods. By analyzing user inputs and adapting its responses accordingly, the AI assistant can tailor its teaching approach to suit individual learning styles and paces. This level of customization is especially beneficial for students in rural areas who may have limited access to personalized tutoring or supplementary educational resources.
The combination of offline functionality, regional language support, and personalized interaction makes this system a powerful tool for bridging the educational gap often experienced in rural and underserved areas. By making quality educational content more accessible and engaging, the system has the potential to significantly improve learning outcomes for young students, particularly those who might otherwise struggle due to language barriers or lack of resources.
Referring to FIG. 1, the system 100 for supporting primary education in regional languages is illustrated. The system 100 comprises a user device 102, a local server 104, a server 105, and a communication network 106. This comprehensive architecture is designed to provide a flexible and robust educational support system that may operate effectively in both online and offline environments, addressing the unique challenges faced by students in rural areas with limited internet connectivity.
The user device 102 serves as the primary interface between the student and the educational system. It is configured to receive a query comprising a speech input from a user and identify an operational mode, which may be either an online mode or an offline mode. This dual-mode capability ensures that the system can adapt to varying connectivity conditions, providing uninterrupted access to educational resources. The user device 102 includes several components such as a sensor unit 108, a processor 110, memory 112, storage 114, a network interface 116, and an input device 118. These components work in concert to enable the user device 102 to interact with the user, process inputs, and communicate with other parts of the system 100. The sensor unit 108, which may be one of, but not limited to, Condenser Microphones, MEMS (Microelectromechanical Systems) Microphones, DSP (digital signal processor) Chips, Bluetooth Microphones, Array Microphone, or a combination thereof, is specifically designed to detect the voice of the user when the user provides the speech input. This voice recognition capability is crucial for enabling natural language interaction, particularly for young learners who may find it easier to express themselves verbally in their regional language.
The local server 104, coupled to the user device 102, is a key component that enables offline functionality. It includes first processing circuitry 120 and a local dataset 122, which are essential for handling offline processing of user queries and responses. When the system operates in offline mode, the first processing circuitry 120 performs a series of sophisticated tasks. It converts the speech input of the query to a text format using a locally executed Speech-to-Text model, ensuring that language processing can occur without internet connectivity. The circuitry then generates an answer to the query using an Artificial Intelligence (AI) model that utilizes the local dataset 122. This local dataset is specifically tailored to the educational needs of primary school students, particularly those in Standard 1, ensuring relevant and age-appropriate responses. The system then generates a response to the query based on the generated answer using an automation script, converting the response into a text format. Finally, it generates an output based on the text response using one or more locally executed AI models and converts the generated output into an audio file, enabling the system to provide answers via speech. This complete offline workflow ensures that students can receive interactive, voice-based educational support even in areas with no internet access.
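By way of illustration only, the following sketch outlines the offline workflow of the first processing circuitry 120 described above. The disclosure does not prescribe particular libraries, so each helper body below is a simplified, hypothetical stand-in rather than the actual Speech-to-Text model, AI model, automation script, or Text-to-Speech model.

```python
# Illustrative sketch only; the helper bodies are simplified stand-ins,
# not the actual locally executed models of the disclosure.

def local_stt(audio_bytes: bytes) -> str:
    """Stand-in for the locally executed Speech-to-Text model."""
    return "what is 2 plus 3"  # a real model would transcribe the Kannada audio

def answer_from_local_dataset(text_query: str) -> str:
    """Stand-in for the AI model that consults the local dataset 122."""
    return "5"

def build_text_response(answer: str) -> str:
    """Stand-in for the automation script that frames the text response."""
    return f"The answer is {answer}."  # the deployed system responds in Kannada

def synthesize_speech_locally(text_response: str) -> str:
    """Stand-in for the locally executed Text-to-Speech model."""
    out_path = "response.wav"
    # a real model would write synthesized Kannada audio to out_path
    return out_path

def handle_query_offline(audio_bytes: bytes) -> str:
    """End-to-end offline flow: speech -> text -> answer -> text response -> audio file."""
    text_query = local_stt(audio_bytes)
    answer = answer_from_local_dataset(text_query)
    text_response = build_text_response(answer)
    return synthesize_speech_locally(text_response)
```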
The server 105, also coupled to the user device 102, complements the local server by handling online processing of user queries and responses. It includes second processing circuitry 124 and pre-trained models 126, which are optimized for online operation. When the system operates in online mode, the second processing circuitry 124 leverages cloud-based resources and APIs for enhanced functionality. It converts the speech input of the query to a text format using a Speech-to-Text online Application Programming Interface (API), taking advantage of more powerful cloud-based speech recognition capabilities. The server then generates an answer to the query using one or more pre-trained Artificial Intelligence (AI) models, which may have access to a broader knowledge base compared to the local dataset. The response to the query is generated based on the AI model's output, and the text response is then converted into speech using Text-to-Speech online APIs. This online mode allows for more complex queries to be processed and potentially provides access to a wider range of educational resources.
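As one concrete, non-limiting illustration of this online flow, the sketch below uses the publicly available SpeechRecognition and gTTS Python packages as examples of a Speech-to-Text online API and a Text-to-Speech online API; the choice of these packages, the Kannada locale code, and the generate_answer callback are assumptions made for illustration only.

```python
import speech_recognition as sr   # example client for a Speech-to-Text online API
from gtts import gTTS             # example client for a Text-to-Speech online API

def handle_query_online(wav_path: str, generate_answer) -> str:
    """Transcribe a spoken Kannada query, obtain an answer, and return a spoken response file."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    # Cloud speech recognition with the Kannada locale (assumed to be supported).
    text_query = recognizer.recognize_google(audio, language="kn-IN")

    # generate_answer stands in for the pre-trained AI models 126.
    text_response = generate_answer(text_query)

    # Online text-to-speech synthesis of the Kannada response.
    tts = gTTS(text=text_response, lang="kn")
    out_path = "response.mp3"
    tts.save(out_path)
    return out_path
```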
The communication network 106 plays a crucial role in connecting the user device 102 with both the local server 104 and the server 105, facilitating seamless data exchange between these components. This network infrastructure enables the system to switch dynamically between online and offline modes based on connectivity availability, ensuring continuous access to educational support.
The system's ability to operate in both online and offline modes demonstrates its versatility and robustness. In offline mode, the user device 102 interacts primarily with the local server 104, utilizing the first processing circuitry 120 and local dataset 122 for query processing and response generation. This ensures that students can continue their learning activities even in areas with poor or no internet connectivity. In online mode, the user device 102 communicates with the server 105, leveraging the second processing circuitry 124 and pre-trained models 126 for processing and responding to user queries. This dual-mode operation allows the system to provide the best possible educational support based on the available resources and connectivity.
The user device 102, through its various components, acts as the central point of interaction for the student. It captures user input through speech, processes it locally to determine the operational mode, and then transmits the query to either the local server 104 or server 105 depending on the current mode of operation. The responses generated by the respective servers are then communicated back to the user device 102, where they are presented to the user in an appropriate format, typically as synthesized speech. This seamless interaction between the user device and the processing servers ensures a smooth and engaging learning experience for the student, regardless of their location or internet connectivity status.
Referring to FIG. 2, the block diagram of a processing unit 110 for a system that processes voice inputs and queries is illustrated. The processing unit 110 comprises several interconnected components, including a voice detection engine 200, a query reception engine 202, a mode identification engine 204, and a data transfer engine 206. These components are interconnected via a first data bus 208, which facilitates communication and data exchange between the various engines within the processing unit 110.
The voice detection engine 200 comprises logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform voice detection operations. This engine is specifically designed to detect and capture the user's voice when speech input is provided. The voice detection engine 200 may utilize advanced audio processing techniques to isolate and identify human speech from background noise, ensuring accurate capture of the user's voice. This capability is particularly important for young learners who may speak softly or with varying clarity, as it allows the system to effectively recognize and process their speech input in their regional language.
Connected to the voice detection engine 200 is the query reception engine 202, which includes logic, circuitry, interfaces, and/or code to receive and process the speech input as a query. This engine is responsible for converting the raw audio data captured by the voice detection engine into a structured query format that can be further processed by the system. The query reception engine 202 may employ natural language processing techniques to parse the speech input, identify key elements of the query, and prepare it for subsequent processing steps.
The mode identification engine 204, linked to the query reception engine 202, is equipped with logic, circuitry, interfaces, and/or code to determine the system's operational mode. This engine plays a crucial role in the system's adaptability by identifying whether the current environment supports online or offline operation. The mode identification engine 204 may consider factors such as network connectivity, available computational resources, and user preferences to make this determination. This dual-mode capability ensures that the system can provide educational support in various scenarios, from well-connected urban areas to remote rural locations with limited or no internet access.
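One simple heuristic the mode identification engine 204 could use to test connectivity (an illustrative assumption, not the claimed method) is to attempt a short TCP connection to a known host:

```python
import socket

def identify_operational_mode(host: str = "8.8.8.8", port: int = 53, timeout: float = 2.0) -> str:
    """Return 'online' if a quick TCP connection succeeds, otherwise 'offline'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "online"
    except OSError:
        return "offline"
```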
The data transfer engine 206, connected to the mode identification engine 204, manages the flow of data within the system based on the identified operational mode. This engine includes logic, circuitry, interfaces, and/or code to optimize data transfer processes. In online mode, the data transfer engine 206 may facilitate communication with cloud-based services and APIs, enabling access to more extensive computational resources and up-to-date information. In offline mode, the data transfer engine 206 may manage local data storage and retrieval, ensuring efficient use of on-device resources to provide uninterrupted educational support.
All components within the processing unit 110 are interconnected via the first data bus 208, which serves as a high-speed communication channel. This interconnection allows for rapid data exchange and coordination between the various engines, enabling seamless processing of voice inputs, efficient query handling, accurate mode identification, and optimized data transfer operations. The integrated design of the processing unit 110 ensures that each component can work in harmony to deliver a responsive and effective educational support system.
The processing unit 110 is designed with flexibility in mind, allowing it to be integrated into various parts of the overall system architecture. Depending on the operational requirements and available resources, the processing unit 110 may be implemented within the user device 102, the local server 104, or the server 105. This adaptability enables the system to optimize its performance based on the specific constraints and capabilities of each operational environment, ensuring that students can access high-quality educational support regardless of their location or the available technological infrastructure.
Referring to FIG. 3, the local server 104 for processing speech inputs and generating speech outputs is illustrated. The local server 104 includes a network interface 300, an I/O interface 302, first processing circuitry 120, and a local dataset 122. These components are interconnected to facilitate speech processing and response generation in an offline mode of operation.
The network interface 300 and I/O interface 302 are integral components of the local server 104, facilitating communication between the local server 104 and external devices or networks. In some aspects, the network interface 300 may be configured to establish and manage network connections, while the I/O interface 302 may handle input and output operations, such as receiving speech inputs and transmitting speech outputs.
The first processing circuitry 120 is a key component of the local server 104, housing multiple processing engines interconnected via a third data bus 316. The multiple processing engines include a speech-text conversion engine 306, an answer generation engine 308, a response generation engine 310, an output engine 312, and an audio generation engine 314. Each engine is configured to perform specific tasks related to speech processing and response generation.
The speech-text conversion engine 306 includes suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform one or more operations. Specifically, the speech-text conversion engine 306 may be configured to convert the speech input of the query to a text format by way of a locally executed Speech-to-Text model. This conversion process is crucial for enabling the system to understand and process the user's speech input in their regional language.
The answer generation engine 308, connected to the speech-text conversion engine 306 via the third data bus 316, is configured to generate an answer to the query by way of an Artificial Intelligence (AI) model that utilizes a local dataset 122. The local dataset 122, connected to the first processing circuitry 120 via a second data bus 304, provides data storage and retrieval capabilities. In some aspects, the local dataset 122 may be specifically tailored for students of Standard 1, ensuring relevant and age-appropriate responses.
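Purely for illustration, the local dataset 122 could be modeled as a small keyed store of Standard 1 content consulted by the answer generation engine 308; the structure and entries below are hypothetical.

```python
# Hypothetical, minimal stand-in for the Standard 1 local dataset 122.
LOCAL_DATASET = {
    "greeting": "Hello! What would you like to learn today?",        # Kannada in the deployed system
    "addition_help": "To add two numbers, count on from the first number.",
    "subtraction_help": "To subtract, count backwards from the bigger number.",
}

def generate_answer(intent: str, entities: dict) -> str:
    """Look up or compute an answer from the local dataset and any extracted entities."""
    if intent == "addition" and entities.get("operands"):
        return str(sum(entities["operands"]))
    return LOCAL_DATASET.get(intent, "I am not sure; let us ask the teacher.")
```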
The response generation engine 310, also connected to the third data bus 316, formulates responses based on the generated answers. The responses are formulated in Kannada to match the user's language and provide a clear and understandable answer. The output engine 312, connected to the response generation engine 310 via the third data bus 316, prepares the responses for output.
Finally, the audio generation engine 314, connected to the output engine 312 via the third data bus 316, converts the prepared output into audio format. This conversion process allows the system to provide responses via speech, creating a more interactive and engaging learning experience for the user.
In some aspects, the local server 104 may be hosted on the local intranet in the school, ensuring that the system can operate effectively even in areas with limited or no internet connectivity. This offline functionality is particularly beneficial for students in rural areas, who may not have consistent access to the internet. By processing speech inputs and generating speech outputs locally, the system can provide uninterrupted educational support, regardless of network availability.
Referring to FIG. 3, the local server 104 for processing speech inputs and generating speech outputs is further illustrated. The first processing circuitry 120 within the local server 104 is configured to generate a response to the query based on the generated answer. This response generation is facilitated by an automation script, which may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform one or more operations. Specifically, the automation script may be configured to formulate a text response based on the answer generated by the AI model. This text response may be in the Kannada language, ensuring that the response is understandable and relevant to the user.
The first processing circuitry 120 is also configured to generate an output based on the text response. This output generation is facilitated by one or more locally executed AI models, which may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform one or more operations. In some aspects, these AI models may be trained on a local dataset 122, which is associated with students of Standard 1. This dataset may include a variety of educational resources, such as textbooks, worksheets, and interactive learning materials, all tailored to the educational needs of Standard 1 students. By utilizing this local dataset 122, the AI models can generate outputs that are not only accurate but also contextually appropriate for the user's grade level.
Finally, the first processing circuitry 120 is configured to convert the generated output into an audio file. This conversion is facilitated by an audio generation engine 314, which may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform one or more operations. Specifically, the audio generation engine 314 may be configured to convert the text output into an audio format, enabling the system to provide answers via speech. This speech synthesis capability is particularly beneficial for young learners who may find it easier to understand spoken explanations.
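As one possible offline speech-synthesis implementation (an assumption; the disclosure does not name a specific engine, and Kannada output depends on the voices installed on the device), the pyttsx3 package can write an audio file without network access:

```python
import pyttsx3  # offline TTS engine; Kannada output depends on installed system voices

def text_to_audio_file(text_response: str, out_path: str = "response.wav") -> str:
    """Convert the generated text response into an audio file without internet access."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 140)              # slower speech for young learners
    engine.save_to_file(text_response, out_path)
    engine.runAndWait()                          # blocks until the file has been written
    return out_path
```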
Referring to FIG. 4, the server system 105 for processing speech inputs and generating speech outputs in an online mode of operation is illustrated. The server 105 includes several components interconnected to facilitate speech processing and response generation. These components include a network interface 400, an I/O interface 402, second processing circuitry 124, and pre-trained models 126.
The network interface 400 and I/O interface 402 are integral components of the server 105, facilitating communication between the server 105 and external devices or networks. In some aspects, the network interface 400 may be configured to establish and manage network connections, while the I/O interface 402 may handle input and output operations, such as receiving speech inputs and transmitting speech outputs.
The second processing circuitry 124 is a key component of the server 105, housing multiple processing engines interconnected via a fifth data bus 416. These engines include a speech-text conversion engine 406, an answer generation engine 408, a response generation engine 410, an output engine 412, and an audio generation engine 414. Each engine is configured to perform specific tasks related to speech processing and response generation.
The speech-text conversion engine 406 includes suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform one or more operations. Specifically, the speech-text conversion engine 406 may be configured to convert the speech input of the query to a text format by way of a Speech-to-Text online Application Programming Interface (API). This conversion process is crucial for enabling the system to understand and process the user's speech input in their regional language.
The answer generation engine 408, connected to the speech-text conversion engine 406 via the fifth data bus 416, is configured to generate an answer to the query by way of one or more pre-trained Artificial Intelligence (AI) models. These pre-trained AI models, stored in the pre-trained models 126, may have access to a broader knowledge base compared to the local dataset, ensuring relevant and comprehensive responses.
The response generation engine 410, also connected to the fifth data bus 416, formulates responses based on the generated answers. The responses are formulated in Kannada to match the user's language and provide a clear and understandable answer. The output engine 412, connected to the response generation engine 410 via the fifth data bus 416, prepares the responses for output.
Finally, the audio generation engine 414, connected to the output engine 412 via the fifth data bus 416, converts the prepared output into audio format. This conversion process allows the system to provide responses via speech, creating a more interactive and engaging learning experience for the user.
In some aspects, the server 105 may be hosted on a cloud-based platform, allowing the system to draw on more powerful computational resources and a broader knowledge base whenever internet connectivity is available. This online functionality is particularly beneficial for students who have consistent access to the internet. By processing speech inputs and generating speech outputs online, the system can handle more complex queries and provide access to a wider range of educational resources in the online mode.
Referring to FIG. 4, the server system 105 for processing speech inputs and generating speech outputs is further illustrated. The second processing circuitry 124 within the server 105 is configured to generate a response to the query based on the generated answer. This response generation is facilitated by an AI model, which may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform one or more operations. Specifically, the AI model may be configured to formulate a text response based on the answer generated by the pre-trained AI models. This text response may be in the Kannada language, ensuring that the response is understandable and relevant to the user.
The second processing circuitry 124 is also configured to generate an output based on the text response. This output generation is facilitated by one or more Text-to-Speech online APIs, which may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform one or more operations. In some aspects, these APIs may be cloud-based services that convert text into speech, enabling the system to provide responses via speech. This online text-to-speech conversion capability is particularly beneficial for students who have consistent access to the internet, as it allows the system to take advantage of cloud-based speech synthesis whenever a network connection is available.
Finally, the second processing circuitry 124 is configured to convert the generated output into an audio file. This conversion is facilitated by an audio generation engine 414, which may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, to perform one or more operations. Specifically, the audio generation engine 414 may be configured to convert the text output into an audio format, enabling the system to provide answers via speech. This speech synthesis capability is particularly beneficial for young learners who may find it easier to understand spoken explanations.
Referring to FIG. 5, the system diagram 500 for processing user input in both online and offline modes is illustrated. In other words, the system diagram 500 illustrates an overall architecture diagram for processing user input in both online and offline modes. The system 500 comprises a user input unit 502, an online speech-to-text unit 504, an AI agent 506, an online text-to-speech unit 508, an output interface 510, an offline speech-to-text unit 512, an offline text-to-speech unit 514, and an intranet server 516. These components are interconnected to facilitate speech-to-text conversion, AI processing, and text-to-speech conversion in both online and offline modes.
The user input 502 is shown as a question in Kannada language from a primary school student. This input is processed differently based on whether the system is operating in online or offline mode.
In the online mode, the user input is first processed by the online speech-to-text unit 504, which uses a Google API to convert speech to text. The resulting text input is then passed to the AI agent 506 for computation. The AI agent 506 processes the input and generates an output text. This output text is then converted back to speech by the online text-to-speech unit 508, which uses a Python module called Speakout. Finally, the output is presented through the output interface 510, which displays the output and provides visualization of math operations.
In the offline mode, the user input is processed by the offline speech-to-text unit 512, which uses a speech to text model (STT Model) for speech-to-text conversion. The resulting text is then passed to the AI agent 506 for computation. The output text from the AI agent 506 is converted back to speech by the offline text-to-speech unit 514, which uses a text to speech model (TTS Model). The offline components are connected to the intranet server 516, which hosts the local processing capabilities.
In some aspects, the user input 502 may be a voice note, captured by a microphone or other audio input device on the user device 102. This voice note may contain a question or query from the student, expressed in the Kannada language. The system 500 is configured to process this voice note as a speech input, converting it into text format for further processing by the AI agent 506.
The first processing circuitry 120 and the second processing circuitry 124 within the system 500 are configured to generate an interactive visual associated with the generated audio files. This interactive visual may be displayed on the output interface 510, providing a visual representation of the math operations being performed in response to the user's query. This visual representation can help students better understand the mathematical concepts being taught, enhancing their learning experience.
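As a simple illustration of such a visual for an addition query (the rendering approach is an assumption, not part of the disclosure), counters for each operand can be drawn and saved as an image for the output interface 510:

```python
import matplotlib.pyplot as plt

def visualize_addition(a: int, b: int, out_path: str = "addition.png") -> str:
    """Draw a and b as counters so a young learner can see a + b."""
    fig, ax = plt.subplots(figsize=(6, 2))
    ax.scatter(range(a), [1] * a, s=300, color="tab:blue", label=str(a))
    ax.scatter(range(a, a + b), [1] * b, s=300, color="tab:orange", label=str(b))
    ax.set_title(f"{a} + {b} = {a + b}")
    ax.set_ylim(0.5, 1.5)
    ax.axis("off")
    ax.legend(loc="upper right")
    fig.savefig(out_path)
    plt.close(fig)
    return out_path
```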
In some aspects, the output interface 510 may be a display screen on the user device 102, such as a computer monitor, tablet screen, or smartphone screen. The output interface 510 may display the text response generated by the AI agent 506, along with the interactive visual associated with the generated audio files. This display can provide a comprehensive response to the user's query, combining text, speech, and visual elements to deliver a rich and engaging learning experience.
In some aspects, the system 500 may also include a user interface (not shown) that allows the user to interact with the system and provide input. This user interface may include various input devices, such as a keyboard, mouse, touchscreen, or voice recognition system. The user interface may also include output devices, such as a display screen or speakers, to present the responses generated by the system. The user interface may be integrated with the user device 102, or it may be a separate component that communicates with the user device 102 via a wired or wireless connection.
Referring to FIG. 6, the system diagram 600 of a speech processing system is illustrated. The system diagram 600 showcases the flow of information from speech input to speech output, highlighting the various processing steps and modules involved in understanding and responding to user input. Specifically, the system diagram 600 illustrates the flow of information from speech input to speech output when the user input is processed in the online mode.
The system begins with a speech input module 602, where a student provides speech input. This input can be captured using a sensor unit 108 on a user device 102, which may include, but is not limited to, Condenser Microphones, MEMS (Microelectromechanical Systems) Microphones, DSP (digital signal processor) Chips, Bluetooth Microphones, Array Microphone, or a combination thereof. The sensor unit 108 is specifically designed to detect and capture the user's voice when speech input is provided, ensuring accurate capture of the user's voice.
This speech input is then processed by a speech-to-text module 604, which utilizes a Google API to convert the speech into text format. In some aspects, the speech-to-text module 604 may be a part of the first processing circuitry 120 in the local server 104 or the second processing circuitry 124 in the server 105, depending on the operational mode of the system 100. The speech-to-text module 604 is configured to convert the user's speech input into a digital text format that can be further processed by the system.
The text input is then passed to a natural language processing module 606. This module contains a tokenizer, a featurizer, and an entity extractor which break down and analyze the text. The tokenizer, the featurizer, and the entity extractor are integral components of the natural language processing module 606, enabling the system to understand the structure and meaning of the user's speech input. The tokenizer breaks down the text into individual words or tokens, the featurizer extracts features or characteristics from these tokens that are relevant to the user's query, and the entity extractor extracts one or more entities from the text.
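A minimal sketch of these three sub-components, assuming the query has already been transcribed to text, is given below; the regular-expression based entity extraction is illustrative only and not the claimed implementation.

```python
import re

def tokenize(text: str) -> list[str]:
    """Tokenizer: split the transcribed query into word tokens."""
    return text.lower().split()

def featurize(tokens: list[str]) -> dict:
    """Featurizer: derive simple features that an intent classifier can use."""
    return {
        "has_number": any(token.isdigit() for token in tokens),
        "token_count": len(tokens),
        "tokens": tokens,
    }

def extract_entities(text: str) -> dict:
    """Entity extractor: pull out the numeric operands of a mathematical query."""
    operands = [int(n) for n in re.findall(r"\d+", text)]
    return {"operands": operands}
```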
An intent classification module 608 receives input from the natural language processing module 606. This module classifies the intent of the input and compares it with an action file to determine the appropriate response. The intent classification module 608 is a key component of the AI agent 506, enabling the system to understand the user's intent and generate a relevant response.
The system includes an output processing module 610, which receives the processed information from the previous modules. This module generates a text output (O/P) and uses a Google gTTS API to convert the text back into speech. The output processing module 610 is a part of the first processing circuitry 120 or the second processing circuitry 124, depending on the operational mode of the system 100. It is configured to generate a text output based on the processed information and convert this text output into speech using a Text-to-Speech API.
Finally, a speech output module 612 includes a speaker. The speech output module 612 is a part of the user device 102 and is configured to provide the user with audible responses to their queries via the speaker.
Referring to FIG. 7, the system diagram 700 for processing speech input and generating computed answers is illustrated. The system diagram 700 showcases the flow of information from the web browser and voice input in step 702, through the various processing stages, to the final computed answer in step 720, demonstrating how the system interprets, processes, and responds to user queries in the context of primary education support.
The system begins with a user speech input, where a student provides speech input in step 702. This input can be captured using a sensor unit 108 on a user device 102, which may include, but is not limited to, Condenser Microphones, MEMS (Microelectromechanical Systems) Microphones, DSP (digital signal processor) Chips, Bluetooth Microphones, Array Microphone, or a combination thereof. The sensor unit 108 is specifically designed to detect and capture the user's voice when speech input is provided, ensuring accurate capture of the user's voice.
This speech input is then handled by the web browser, which may serve as a front-end interface in step 702. Further, the processed speech may be transferred to a web application technology layer that may host the web application and manage the user interface in step 704. The web application technology layer may further ensure that the user's voice input is received and appropriately routed to the next processing step. The speech may further be transferred to a voice input processing layer in step 706. The voice input processing layer may manage the recording of the user's voice and may submit the recorded audio for further processing. Specifically, the voice input processing layer may act as a bridge between the user interface and backend processing components. The voice input processing layer may be trained with the local dataset 122 that is associated with the students of Standard 1. Further, from the voice input processing layer, the user's voice may be transferred to an external program layer in step 708. The external program layer may apply audio data processing to manipulate the audio data and prepare the user's voice for further analysis, and may perform audio format conversion when the user's voice needs to be converted into a format that is compatible with speech recognition tools. Further, the external program layer may perform speech-to-text conversion, in which the system may convert the processed and formatted audio into recognized text in Kannada in step 710.
The recognized text in Kannada is then passed to an AI agent. The AI agent performs several functions, including natural language understanding to determine the user's intent at step 712. Natural language understanding involves interpreting the meaning behind the words and determining what the user wants to achieve.
The AI agent further uses intent analysis to classify the user's intent into predefined categories such as greeting, addition operation, subtraction operation, etc. in step 714. This step is crucial for determining the appropriate response to the user's query. Once the intent is identified, the system extracts specific pieces of information or entities from the user's input in step 716. These entities are relevant to the identified intent and could include details such as the type of operation (e.g., addition) and the operands (e.g., numbers involved in the operation).
If the user's intent involves a computational task (e.g., addition or subtraction), the AI agent performs the necessary calculations using the extracted entities at step 718. This step ensures that the response is accurate and relevant to the user's query. Based on the intent analysis and extracted entities, the chatbot generates an appropriate response in step 720. The response is formulated in Kannada to match the user's language and provide a clear and understandable answer.
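Tying steps 712 through 720 together, a deliberately simplified sketch of intent classification, entity extraction, and computation is shown below; the keyword lists and English response strings are assumptions, and the deployed system would phrase its responses in Kannada.

```python
import re

def classify_intent(text_query: str) -> str:
    """Keyword-based stand-in for the intent classification of step 714."""
    tokens = set(text_query.lower().split())
    if tokens & {"hello", "hi", "namaskara"}:
        return "greeting"
    if tokens & {"plus", "add", "sum"}:
        return "addition"
    if tokens & {"minus", "subtract", "less"}:
        return "subtraction"
    return "unknown"

def compute_and_respond(text_query: str) -> str:
    """Steps 716-720: extract operand entities, compute, and frame the response."""
    intent = classify_intent(text_query)
    operands = [int(n) for n in re.findall(r"\d+", text_query)] + [0, 0]
    a, b = operands[0], operands[1]
    if intent == "addition":
        return f"{a} + {b} = {a + b}"   # phrased in Kannada by the deployed system
    if intent == "subtraction":
        return f"{a} - {b} = {a - b}"
    if intent == "greeting":
        return "Hello! Ask me a maths question."
    return "Please ask your question again."
```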
The final output, provided in step 722, is a computed answer in Kannada, along with explanations and step-by-step solutions if applicable. This provides a comprehensive response to the user's query. The computed answer in the regional Kannada language, together with its explanation, is delivered in speech format after conversion.
In some aspects, the AI agent may be configured to generate an interactive visual associated with the generated audio files to explain the text response in an interactive manner. This visual representation can help students better understand the mathematical concepts being taught, enhancing their learning experience.
In some aspects, the user speech input may be a voice note, captured by a microphone or other audio input device on the user device 102. This voice note may contain a question or query from the student, expressed in the Kannada language. The system is configured to process this voice note as a speech input, converting it into text format for further processing by the AI agent.
In some aspects, the user device 102 may include a user interface (not shown) that allows the user to interact with the system and provide input. This user interface may include various input devices, such as a keyboard, mouse, touchscreen, or voice recognition system. The user interface may also include output devices, such as a display screen or speakers, to present the responses generated by the system. The user interface may be integrated with the user device 102, or it may be a separate component that communicates with the user device 102 via a wired or wireless connection.
In some aspects, the system may also include a network interface (not shown) that facilitates communication between the system and external devices or networks. This network interface may be configured to establish and manage network connections, enabling the system to access external resources and services as needed. This network interface may be particularly useful in online mode, where the system may need to access cloud-based services and APIs for speech-to-text conversion, AI processing, and text-to-speech conversion.
In some aspects, the system may also include a storage unit (not shown) that stores the local dataset and pre-trained models used by the system. This storage unit may be integrated with the user device 102, the local server 104, or the server 105, depending on the operational requirements and available resources. The storage unit provides data storage and retrieval capabilities, ensuring that the system has access to the necessary data for processing user queries and generating responses.
In some aspects, the system may also include a power supply unit (not shown) that provides power to the various components of the system. This power supply unit may be integrated with the user device 102, the local server 104, or the server 105, depending on the operational requirements and available resources. The power supply unit ensures that the system can operate continuously, providing uninterrupted educational support to the user.
In some aspects, the system may also include a cooling unit (not shown) that dissipates heat generated by the various components of the system. This cooling unit may be integrated with the user device 102, the local server 104, or the server 105, depending on the operational requirements and available resources. The cooling unit ensures that the system can operate efficiently, preventing overheating and potential damage to the system components.
In some aspects, the system may also include a security unit (not shown) that protects the system and its data from unauthorized access and potential threats. This security unit may be integrated with the user device 102, the local server 104, or the server 105, depending on the operational requirements and available resources. The security unit ensures that the system can operate securely, protecting the user's data and maintaining the integrity of the system.
Referring to FIG. 8, a flowchart illustrating a method for processing speech inputs and generating speech outputs in both online and offline modes is shown. The method begins at Step 802, where a sensor unit 108 of a user device 102 detects a voice of a user when the user provides a speech input. The sensor unit 108 may include, but is not limited to, Condenser Microphones, MEMS (Microelectromechanical Systems) Microphones, DSP (digital signal processor) Chips, Bluetooth Microphones, Array Microphones, or a combination thereof. These components enable the sensor unit 108 to accurately capture the user's voice, ensuring precise voice detection for further processing.
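By way of illustration only, a minimal capture sketch for Step 802 is given below. It assumes a generic microphone exposed to the operating system and uses the third-party sounddevice and soundfile Python packages, neither of which is mandated by the disclosure; the sampling rate and recording duration are arbitrary illustrative values.

```python
import sounddevice as sd   # microphone capture (assumed available)
import soundfile as sf     # WAV file writing

SAMPLE_RATE = 16000  # 16 kHz mono is a common rate for speech models
DURATION = 5         # seconds recorded per query (illustrative value)

def capture_query(path: str = "query.wav") -> str:
    """Record a short utterance from the default microphone and store it as a WAV file."""
    audio = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
    sd.wait()                           # block until the recording finishes
    sf.write(path, audio, SAMPLE_RATE)  # persist the captured samples
    return path
```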
Following this, in Step 804, a query comprising the speech input from the user is received by the user device 102. The user device 102, equipped with various components such as a processor 110, memory 112, storage 114, a network interface 116, and an input device 118, facilitates the reception and processing of the user's query. The query may be a question or a statement in the Kannada language, related to a mathematical problem or concept that the user wishes to understand.
The flowchart then diverges based on the operational mode identified in Step 806, which determines whether the process will proceed in an online or offline mode. The operational mode is identified by the user device 102 based on factors such as the availability of internet connectivity, user preferences, and system settings. This dual-mode capability ensures that the system can adapt to varying connectivity conditions, providing uninterrupted access to educational resources.
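One simple way to realize this mode decision, sketched here purely for illustration, is a lightweight connectivity probe; the probe target, port, and timeout below are assumptions, and the decision could equally incorporate user preferences and system settings as described above.

```python
import socket

def select_mode(host: str = "8.8.8.8", port: int = 53, timeout: float = 2.0) -> str:
    """Return 'online' if a quick network probe succeeds, otherwise 'offline'."""
    try:
        # Attempt a short TCP connection to a well-known DNS endpoint.
        with socket.create_connection((host, port), timeout=timeout):
            return "online"
    except OSError:
        return "offline"
```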
If the offline mode is selected, Step 808 activates a local server 104 equipped with first processing circuitry 120. At Step 809, the first processing circuitry 120 identifies an intent associated with the query by way of the AI model that utilizes the local dataset 122, the intent being, for example, a greeting or a mathematical operation. Steps 810 through 818 then detail the offline processing sequence: converting the speech input to text using a locally executed Speech-to-Text model in Step 810, extracting one or more entities from the query by way of the AI model in Step 811 such that the entities are utilized to generate the answer, generating an answer using the AI model with the local dataset 122 in Step 812, forming a text response from the answer by way of an automation script in Step 814, generating an output from this text response using one or more locally executed AI models in Step 816, and finally converting this output into an audio file in Step 818 so that the answer can be provided via speech.
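A minimal sketch of the offline speech handling (Steps 810 and 816 through 818) is given below. It assumes a locally installed Vosk acoustic model for Kannada (the directory name "vosk-model-kn" is hypothetical) and uses pyttsx3 as a stand-in local Text-to-Speech engine; neither library is prescribed by the disclosure, and Kannada voice availability depends on the installed engine. The intent, entity, and answer steps (809 through 814) are sketched separately after the list of advantages below.

```python
import json
import wave

from vosk import Model, KaldiRecognizer  # offline speech recognition (assumed installed)
import pyttsx3                            # offline text-to-speech (voice support varies)

def transcribe_offline(wav_path: str, model_dir: str = "vosk-model-kn") -> str:
    """Step 810: locally executed Speech-to-Text over a 16-bit mono WAV file."""
    model = Model(model_dir)
    with wave.open(wav_path, "rb") as wf:
        recognizer = KaldiRecognizer(model, wf.getframerate())
        while True:
            chunk = wf.readframes(4000)
            if not chunk:
                break
            recognizer.AcceptWaveform(chunk)
    return json.loads(recognizer.FinalResult()).get("text", "")

def speak_offline(response_text: str, out_path: str = "response.wav") -> str:
    """Steps 816-818: render the text response to an audio file with a local engine."""
    engine = pyttsx3.init()
    engine.save_to_file(response_text, out_path)
    engine.runAndWait()
    return out_path
```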
Conversely, if the online mode is selected at Step 806, Step 820 enables a server 105 with second processing circuitry 124. At Step 821, the second processing circuitry 124 identifies an intent associated with the query by way of the one or more pre-trained AI models. This initiates the online processing sequence with Step 822, where the speech input is converted to text using a Speech-to-Text online API. In Step 823, one or more entities are extracted from the query by way of the one or more pre-trained AI models such that the entities are utilized to generate the answer. This is followed by generating an answer using the one or more pre-trained AI models in Step 824, generating a text response based on this answer in Step 826, producing an output from the text response using a Text-to-Speech online API in Step 828, and converting this output into an audio file in Step 830 to provide the response via speech.
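For the online path (Steps 822 and 826 through 830), a comparable sketch is given below; it uses the speech_recognition package's Google Web Speech endpoint and gTTS purely as examples of Speech-to-Text and Text-to-Speech online APIs, since the disclosure does not prescribe particular providers, and it assumes the chosen services support Kannada ("kn-IN" / "kn").

```python
import speech_recognition as sr  # wrapper around several online STT services
from gtts import gTTS            # online text-to-speech (example provider)

def transcribe_online(wav_path: str) -> str:
    """Step 822: convert the recorded query to text via an online Speech-to-Text API."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio, language="kn-IN")

def speak_online(response_text: str, out_path: str = "response.mp3") -> str:
    """Steps 828-830: synthesize the text response to an audio file via an online TTS API."""
    gTTS(response_text, lang="kn").save(out_path)
    return out_path
```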
Furthermore, after the conversion into the audio file at Step 818 or Step 830, the system may generate an interactive visual associated with the audio file to explain the text response in an interactive manner.
The flowchart demonstrates the dual-pathway processing based on operational mode, highlighting the flexibility of the method to function across varying resource and connectivity environments. The method utilizes distinct sets of processing circuitry and AI models tailored to the constraints and advantages of each mode, ensuring efficient handling of speech inputs and outputs. This adaptability allows seamless operation regardless of network availability, and the use of both local and online resources optimizes performance and reliability, catering to different user needs and scenarios.
Thus, the system, device, and method provide several technical advantages:
1. A complete offline workflow for speech-to-text and text-to-speech processing along with AI agent interaction, enabling functionality in areas with limited or no internet connectivity. This includes an offline system for voice input processing, audio format conversion, and speech-to-text conversion specifically tailored for the Kannada language.
2. An on-device computation engine for educational queries, capable of generating accurate, detailed, and explanatory responses in Kannada. This engine includes speech synthesis for spoken answers, allowing for a fully interactive learning experience without relying on external resources.
3. A chatbot with integrated natural language understanding (NLU) designed specifically for primary school children's educational needs. This chatbot is capable of context-aware intent analysis and entity extraction in Kannada, ensuring appropriate and tailored responses to young learners.
4. Focused mathematical computation capabilities within the chatbot itself, eliminating the need for external calculation tools and providing a seamless learning experience for basic arithmetic operations; an illustrative sketch of this capability follows this list.
5. Dual-mode functionality, allowing seamless switching between online and offline modes based on internet availability, thus ensuring continuous access to educational resources regardless of connectivity status.
6. A user-friendly interface that accommodates both voice and text inputs in the regional language, making the system accessible to young learners who may not be proficient in typing or reading complex text.
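The sketch below illustrates how the intent identification, entity extraction, and in-chatbot arithmetic referred to in advantages 3 and 4 (and in Steps 809 through 814) could be realized. The keyword map, regular expressions, function names, and response template are assumptions made purely for illustration; a production system would instead rely on the AI model trained on the local dataset or on the pre-trained models.

```python
import re

# Illustrative operator keywords; the real mapping would come from the trained NLU model.
OPERATOR_KEYWORDS = {
    "plus": "+", "add": "+", "ಕೂಡಿಸು": "+",
    "minus": "-", "subtract": "-", "ಕಳೆ": "-",
    "times": "*", "multiply": "*", "ಗುಣಿಸು": "*",
}

def classify_intent(query_text: str) -> str:
    """Step 809/821: label the query as a greeting or a mathematical operation."""
    if any(word in query_text for word in OPERATOR_KEYWORDS) or re.search(r"\d", query_text):
        return "mathematical_operation"
    return "greeting"

def extract_entities(query_text: str) -> dict:
    """Step 811/823: pull out the operands and operator needed to compute the answer."""
    numbers = [int(n) for n in re.findall(r"\d+", query_text)]
    operator = next((sym for word, sym in OPERATOR_KEYWORDS.items() if word in query_text), "+")
    return {"operands": numbers, "operator": operator}

def generate_answer(entities: dict) -> str:
    """Steps 812-814: compute the result (two operands assumed) and wrap it in a text response."""
    a, b = entities["operands"][:2]
    result = {"+": a + b, "-": a - b, "*": a * b}[entities["operator"]]
    return f"ಉತ್ತರ {result}"  # 'ಉತ್ತರ' ('answer' in Kannada); template is illustrative
```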
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
CLAIMS:
1. A system (100) to support primary education for students in regional language, the system (100) comprising:
a user device (102) configured to:
receive a query comprising a speech input from a user; and
identify an operational mode, wherein the operational mode is one of, an online mode or an offline mode;
a local server (104) having first processing circuitry (120), and coupled to the user device (102), wherein, when the operational mode is the offline mode, the first processing circuitry (120) is configured to (i) convert the speech input of the query to a text format by way of a locally executed Speech-to-Text model, (ii) generate an answer to the query by way of an Artificial Intelligence (AI) model that utilizes a local dataset (122), (iii) generate a response to the query based on the generated answer by way of an automation script, wherein the response to the query is a text response, (iv) generate an output based on the text response by way of one or more locally executed AI models, and (v) convert the generated output into an audio file such that the audio file provides answers via speech; and
a server (105) having second processing circuitry (124), and coupled to the user device (102), wherein, when the operational mode is the online mode, the second processing circuitry (124) is configured to (i) convert the speech input of the query to a text format by way of a Speech-to-Text online Application Programming Interface (API), (ii) generate an answer to the query by way of one or more pre-trained Artificial Intelligence (AI) models, (iii) generate a response to the query based on the generated answer by way of the one or more pre-trained AI models, wherein the response to the query is a text response, (iv) generate an output based on the text response by way of one or more Text-to-Speech online APIs, and (v) convert the generated output into an audio file to provide the text response via speech.
2. The system (100) as claimed in claim 1, wherein the speech input is a voice note.
3. The system (100) as claimed in claim 1, wherein the user device (102) comprises a sensor unit (108) configured to detect a voice of the user when the user provides the speech input, wherein the sensor unit (108) is selected from one of, Condenser Microphones, MEMS (Microelectromechanical Systems) Microphones, DSP (digital signal processor) Chips, Bluetooth Microphones, Array Microphones, or a combination thereof.
4. The system (100) as claimed in claim 1, wherein the local dataset (122) is associated with students of Standard 1.
5. The system (100) as claimed in claim 1, wherein the first processing circuitry (120) and the second processing circuitry (124) are further configured to generate an interactive visual associated with the generated audio files to explain the text response in an interactive manner.
6. The system (100) as claimed in claim 1, wherein the first and second processing circuitries (120, 124) are configured to identify an intent associated with the query by way of the AI model that utilizes the local dataset (122) and by way of the one or more pre-trained AI models, respectively, wherein the intent comprises one of, a greeting or a mathematical operation.
7. The system (100) as claimed in claim 6, wherein, when the intent associated with the query is the mathematical operation, the first and second processing circuitries (120, 124) are configured to extract one or more entities from the query by way of the AI model that utilizes the local dataset (122) and by way of the one or more pre-trained AI models, respectively, such that the one or more entities are utilized to generate the answer.
8. A method (800) for supporting primary education for students in regional language, the method (800) comprising steps of:
Detecting (802), by way of a sensor unit (108) of a user device (102), a voice of a user when the user provides a speech input;
Receiving (804), by way of the user device (102), a query comprising the speech input from the user;
Identifying (806), by way of the user device (102), an operational mode, wherein the operational mode is one of, an online mode or an offline mode;
Enabling (808) a local server (104) having first processing circuitry (120) when the operational mode is the offline mode such that the first processing circuitry (120) (i) converts the speech input of the query to a text format by way of a locally executed Speech-to-Text model, (ii) generates an answer to the query by way of an Artificial Intelligence (AI) model that utilizes a local dataset (122), (iii) generates a response to the query based on the generated answer by way of an automation script, wherein the response to the query is a text response, (iv) generates an output based on the text response by way of one or more locally executed AI models, and (v) converts the generated output into an audio file such that the audio file provides answers via speech; and
Enabling (820) a server (105) having second processing circuitry (124) when the operational mode is the online mode such that the second processing circuitry (124) (i) converts the speech input of the query to a text format by way of a Speech-to-Text online Application Programming Interface (API), (ii) generates an answer to the query by way of one or more pre-trained Artificial Intelligence (AI) models, (iii) generates a response to the query based on the generated answer by way of the one or more pre-trained AI models, wherein the response to the query is a text response, (iv) generates an output based on the text response by way of one or more Text-to-Speech online APIs, and (v) converts the generated output into an audio file to provide the text response via speech.
9. The method (800) as claimed in claim 8, further comprising a step of generating, by way of the first processing circuitry (120) and the second processing circuitry (124), an interactive visual associated with the generated audio files to explain the text response in an interactive manner.
10. The method (800) as claimed in claim 8, further comprising a step of:
Identifying (809, 821), by way of the first and second processing circuitries (120, 124), an intent associated with the query via the AI model that utilizes the local dataset (122) and via the one or more pre-trained AI models, respectively, wherein the intent is one of, a greeting or a mathematical operation; and
Extracting (811, 823), by way of the first and second processing circuitries (120, 124), when the intent associated with the query is the mathematical operation, one or more entities from the query by way of the respective AI model such that the one or more entities are utilized to generate the answer.
| # | Name | Date |
|---|---|---|
| 1 | 202421000136-STATEMENT OF UNDERTAKING (FORM 3) [01-01-2024(online)].pdf | 2024-01-01 |
| 2 | 202421000136-PROVISIONAL SPECIFICATION [01-01-2024(online)].pdf | 2024-01-01 |
| 3 | 202421000136-FORM FOR SMALL ENTITY(FORM-28) [01-01-2024(online)].pdf | 2024-01-01 |
| 4 | 202421000136-FORM FOR SMALL ENTITY [01-01-2024(online)].pdf | 2024-01-01 |
| 5 | 202421000136-FORM 1 [01-01-2024(online)].pdf | 2024-01-01 |
| 6 | 202421000136-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [01-01-2024(online)].pdf | 2024-01-01 |
| 7 | 202421000136-EVIDENCE FOR REGISTRATION UNDER SSI [01-01-2024(online)].pdf | 2024-01-01 |
| 8 | 202421000136-DRAWINGS [01-01-2024(online)].pdf | 2024-01-01 |
| 9 | 202421000136-DECLARATION OF INVENTORSHIP (FORM 5) [01-01-2024(online)].pdf | 2024-01-01 |
| 10 | 202421000136-FORM-26 [28-03-2024(online)].pdf | 2024-03-28 |
| 11 | 202421000136-Proof of Right [01-07-2024(online)].pdf | 2024-07-01 |
| 12 | 202421000136-FORM 3 [12-07-2024(online)].pdf | 2024-07-12 |
| 13 | 202421000136-FORM-5 [21-10-2024(online)].pdf | 2024-10-21 |
| 14 | 202421000136-DRAWING [21-10-2024(online)].pdf | 2024-10-21 |
| 15 | 202421000136-COMPLETE SPECIFICATION [21-10-2024(online)].pdf | 2024-10-21 |
| 16 | 202421000136-PA [31-12-2024(online)].pdf | 2024-12-31 |
| 17 | 202421000136-FORM28 [31-12-2024(online)].pdf | 2024-12-31 |
| 18 | 202421000136-EVIDENCE FOR REGISTRATION UNDER SSI [31-12-2024(online)].pdf | 2024-12-31 |
| 19 | 202421000136-EDUCATIONAL INSTITUTION(S) [31-12-2024(online)].pdf | 2024-12-31 |
| 20 | 202421000136-ASSIGNMENT DOCUMENTS [31-12-2024(online)].pdf | 2024-12-31 |
| 21 | 202421000136-8(i)-Substitution-Change Of Applicant - Form 6 [31-12-2024(online)].pdf | 2024-12-31 |
| 22 | 202421000136-FORM-9 [20-02-2025(online)].pdf | 2025-02-20 |
| 23 | 202421000136-FORM 18 [20-02-2025(online)].pdf | 2025-02-20 |
| 24 | Abstract.jpg | 2025-02-28 |
| 25 | 202421000136-FORM 13 [03-06-2025(online)].pdf | 2025-06-03 |
| 26 | 202421000136-RELEVANT DOCUMENTS [08-08-2025(online)].pdf | 2025-08-08 |
| 27 | 202421000136-FORM 13 [08-08-2025(online)].pdf | 2025-08-08 |