
Artificial Intelligence Based Language Learning System And Method Thereof

Abstract: The present disclosure provides a system (1000) and a method for AI-based language learning. The method includes generating, by a processor (402), language learning assessments based on a difficulty level and a topic using an assessment generation model, receiving student responses to the generated language learning assessments from a student device, determining a student performance score for each student response using an AI agent module, determining the difficulty level of subsequently generated language learning assessments based on the determined student performance score using a classification model, and training at least one of the assessment generation model and the classification model based on the difficulty level and/or the student performance score.


Patent Information

Application #
Filing Date
14 March 2024
Publication Number
38/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

WEXL EDU PRIVATE LIMITED
Plot no-72, S.no 12, 2nd Floor, Journalist Colony, Jubilee Hills, Hyderabad, Telangana - 500033, India.

Inventors

1. LINGA, Naveen Kumar
Near Lingampally, MMTS Station, Plot No 21, Laxmi Vihar 2, Nalagandla Serilingampally, Rangareddy, Telangana - 500019, India.
2. BHARAT, Akkinepalli
6-2-382 Plot No. 23, Vanasthalipuram Phase-1, Near Community Hall, Vanasthalipuram Hayathnagar, K.V.Rangareddy, Telangana – 500070, India.

Specification

TECHNICAL FIELD
[001] The present disclosure relates to the field of Artificial Intelligence (AI) technology. In particular, the present disclosure provides a system and a method that uses AI to improve language learning proficiency for students.

BACKGROUND
[002] Traditional tools in language learning often adhere to a standardized, one-size-fits-all approach, presenting a generic overview of language skills without catering to individual needs. These conventional tools may lack adaptability, offering fixed content regardless of an individual’s performance, and limiting the customization needed for effective learning experiences. Additionally, feedback from traditional tools might be delayed, thereby impeding the timely correction of mistakes and hindering the swift progress of learners. Furthermore, these tools may lack real-time performance tracking, limiting the ability to monitor and adjust learning strategies dynamically. The constraints of traditional tools underscore the need for more adaptive and technologically advanced solutions in language learning.
[003] Therefore, there is a need to address at least the above-mentioned drawbacks and any other shortcomings, or at the very least, provide a valuable alternative to the existing methods and systems.

OBJECTS OF THE PRESENT DISCLOSURE
[004] A general object of the present disclosure is to provide an efficient and reliable system and method that obviates the above-mentioned limitations of existing systems and methods.
[005] An object of the present disclosure is to provide a system and a method that utilizes Artificial Intelligence (AI) techniques to automatically generate questions across various domains, thereby enhancing learning, testing, and assessment processes by generating various relevant questions.
[006] Another object of the present disclosure is to provide a system and a method that uses Natural Language Processing (NLP) and Machine Learning (ML) techniques for comprehending and manipulating text, thereby resulting in the generation of high-quality questions aligned with educational objectives.
[007] Another object of the present disclosure is to provide a system and a method for evaluating linguistic proficiency of users, the linguistic proficiency corresponding to any or combination of listening, reading, speaking, grammar, and writing.
[008] Another object of the present disclosure is to provide a system and a method with an interface that allows teachers to generate assessments customized for students, receive responses from the students, and evaluate performance of the students, using a single interface.
[009] Yet another object of the present disclosure is to provide a system and a method that utilizes AI techniques for adjusting a difficulty level of generated questions based on an intended audience or learning objectives.

SUMMARY
[010] An aspect of the present disclosure relates to a system for artificial intelligence (AI)-based language learning, the system including a processor and a memory operatively coupled to the processor, where the memory includes one or more processor-executable instructions, which, when executed, cause the processor to: generate language learning assessments based on a difficulty level and a topic using an assessment generation model, receive student responses to the generated language learning assessments from a student device, determine a student performance score for each student response using an AI agent module, determine the difficulty level of subsequently generated language learning assessments based on the determined student performance score using a classification model, and train at least one of the assessment generation model and the classification model based on the difficulty level and/or the student performance score. This allows the system to effectively assess and adapt to a student’s language learning capabilities, personalizing the educational journey for improved learning outcomes.

BRIEF DESCRIPTION OF THE DRAWINGS
[011] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[012] FIG. 1 illustrates an example sequence diagram representing a method for providing scores for each attempted question, in accordance with an embodiment of the present disclosure.
[013] FIG. 2 illustrates an example schematic view of an Artificial Intelligence (AI)-based language learning platform, in accordance with an embodiment of the present disclosure.
[014] FIGs. 3A-3V illustrate example schematic representations of a user interface, in accordance with an embodiment of the present disclosure.
[015] FIG. 4 illustrates a schematic representation of the system for AI-based language learning, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION
[016] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.
[017] In the present disclosure, an Artificial Intelligence (AI)-based system for enhanced learning represents an educational technology designed to transform learning experiences. The system may enable institutes to facilitate language skill improvement in students by covering listening, speaking, reading, vocabulary, and grammar. The language may be English, but is not limited thereto. The system may be configured with a micro-sized learning model (Nano learn) tailored for children, integrating features across language proficiency domains, thereby providing a comprehensive and specialized approach to language education.
[018] In an embodiment, the system may allow teachers and students to interact with each other. The interaction may pertain to at least one of generating assessments (such as a set of questions), sharing the assessment for students to attempt, receiving responses to the assessment from students, evaluating performance of the student in the assessment, or providing automated or teacher’s feedback to the students. Each of these interactions may be facilitated by an AI engine/agent. In some examples, an AI engine/agent may recommend questions to the teachers to generate the assessment for a specific student, based on their proficiencies in one or more topics, or language proficiencies. In further examples, the AI engine may be configured to receive the responses from the students and evaluate the performance of the students based on the responses.
[019] In an embodiment, the system for language learning may involve a comprehensive approach to skill development, such as listening, speaking, reading, vocabulary, and grammar, but not limited thereto. In some examples, the system or the AI engine may include a listening module to generate assessments and evaluate the listening proficiency of students. For listening, audio-based multiple-choice questions enhance understanding as students respond to queries post-listening, fostering critical thinking and active participation. In an embodiment, the listening module may be utilized to improve comprehension, communication skills, language fluency, vocabulary expansion, and cultural awareness for effective communication in various contexts, thereby enhancing overall language proficiency.
[020] In an embodiment, the system may be configured with a speaking module or a speech analysis module that provides an interactive space for children to practice spoken English with engaging prompts, scenarios, and advanced AI-powered speech analysis offering real-time feedback on pronunciation, intonation, and fluency. For example, the speech analysis module may receive an audio recording of students pronouncing one or more words and phrases, segment the audio recording to identify each phoneme pronounced by each student, and compare each segment of phoneme pronounced by the student with an exemplary phoneme recording. The speech analysis module may receive the audio recording from a recording device connected to the system. The speech analysis module may be configured with speech processing AI models for evaluating speaking proficiency of the student. This not only enhances effective communication and confidence but also contributes to academic success and global opportunities. The integration of AI in the speech analysis module ensures continuous improvement, providing personalized feedback and contributing to a comprehensive language learning process.
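The phoneme-comparison step described above can be illustrated with a minimal sketch; the exemplar phoneme inventory and the sequence-alignment similarity measure (Python's `difflib`) are illustrative assumptions, not the disclosed implementation.

```python
import difflib

# Hypothetical exemplar phoneme sequences (ARPAbet-style labels) standing in
# for the "exemplary phoneme recording" of the disclosure.
EXEMPLARS = {
    "cat": ["K", "AE", "T"],
    "ship": ["SH", "IH", "P"],
}

def pronunciation_score(word, student_phonemes):
    """Align the student's segmented phonemes against the exemplar sequence
    and return a 0-100 accuracy score based on sequence similarity."""
    exemplar = EXEMPLARS[word]
    ratio = difflib.SequenceMatcher(None, exemplar, student_phonemes).ratio()
    return round(ratio * 100)

print(pronunciation_score("cat", ["K", "AE", "T"]))   # exact match: 100
print(pronunciation_score("ship", ["S", "IH", "P"]))  # lower: one phoneme off
```

In practice the segmentation itself would come from a speech processing model; this sketch only shows how segmented output might be scored against an exemplar.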
[021] In an embodiment, the system may be configured with a reading module that provides paragraph-based questions post-reading, promoting critical thinking and strengthening comprehension skills. By presenting (either manually curated or automatically generated) questions on the passage, the system may transform reading into an educational adventure, promoting vocabulary expansion, language fluency, improved academic performance, and heightened focus and concentration. In an embodiment, the system may proficiently enhance the skill of reading to establish a robust groundwork for continuous learning across a spectrum of subjects.
[022] In an embodiment, the system may be configured with a vocabulary module. The vocabulary module may be configured to generate questions to allow students to practice vocabulary. In some examples, the vocabulary module may be configured to use spaced repetition to allow students to memorize vocabulary. Further, the vocabulary module may be configured to dynamically generate different types of questions for learning vocabulary. In some examples, the vocabulary module may be configured to generate any one of fill-in-the-blank questions, multiple choice questions, polysemous questions, matching questions, apply-the-meaning questions, spelling questions, open-ended questions, cloze questions, and the like, but not limited thereto, on each spaced repetition. The vocabulary module may provide an interactive interface for students to learn vocabulary. In some examples, students may engage with these questions, dragging and dropping words to complete sentences, creating a visually engaging and interactive experience. The system may recognize the critical role of vocabulary development in communication, refining writing skills, improving reading comprehension, and enhancing critical thinking. The active encouragement of vocabulary expansion stands as a component within the system's design promoting overall language proficiency and contributing to academic achievement.
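The spaced-repetition scheduling and question-type rotation described above can be sketched as follows; the interval-doubling policy and the question-format list are assumptions for illustration, since the disclosure does not specify a particular algorithm.

```python
# Hypothetical pool of question formats rotated across repetitions, mirroring
# the varied question types named in the disclosure.
QUESTION_TYPES = ["fill-in-the-blank", "multiple-choice", "matching", "spelling"]

def next_interval(current_days, answered_correctly):
    """Double the review interval on a correct answer; reset it on a mistake
    (a simple stand-in for a full spaced-repetition algorithm)."""
    if not answered_correctly:
        return 1  # review again tomorrow
    return max(1, current_days) * 2

def question_format(repetition_index):
    """Rotate through question formats so each repetition looks different."""
    return QUESTION_TYPES[repetition_index % len(QUESTION_TYPES)]

# A word answered correctly, correctly, then incorrectly, then correctly:
interval = 1
for i, correct in enumerate((True, True, False, True)):
    interval = next_interval(interval, correct)
    print(i, question_format(i), interval)
```

Production systems typically use richer schedulers (e.g. SM-2-style ease factors); the doubling rule above only illustrates the adaptive-interval idea.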
[023] In an embodiment, the system may be configured with a grammar module. The grammar module may provide an engaging and interactive experience through drag-and-drop and fill-in-the-blank questions, injecting excitement into traditional exercises. This approach makes language proficiency development dynamic and entertaining among young learners. Proficiency in grammar ensures clear communication, improved writing skills, and confidence in speaking, supporting critical thinking and accurate expression of complex ideas. The comprehensive integration of these modules ensures a well-rounded and effective language-learning process.
[024] In an embodiment, the system may include a Natural Language Processing (NLP) module that operates as a core framework within the system, serving as the hub for comprehensive verbal analysis. Tasked with deciphering the context and semantics of provided content, this module may employ techniques such as tokenization, part-of-speech tagging, and syntactic analysis to meticulously deconstruct and analyse the input text. This intricate functionality allows for the understanding of language hints and structures.
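The tokenization and part-of-speech tagging steps named above can be illustrated with a toy sketch; a deployed system would use trained NLP models, whereas the regex tokenizer and suffix-based tagger below are deliberate simplifications for demonstration only.

```python
import re

def tokenize(text):
    """Split text into word and punctuation tokens (toy tokenizer)."""
    return re.findall(r"[A-Za-z]+|[.,!?]", text)

def pos_tag(tokens):
    """Assign coarse POS tags using naive rules; a real tagger would be a
    trained statistical or neural model."""
    tags = []
    for tok in tokens:
        if tok in {".", ",", "!", "?"}:
            tags.append((tok, "PUNCT"))
        elif tok.lower().endswith("ing"):
            tags.append((tok, "VERB"))
        elif tok.lower() in {"the", "a", "an"}:
            tags.append((tok, "DET"))
        else:
            tags.append((tok, "NOUN"))  # naive default
    return tags

print(pos_tag(tokenize("The student is reading.")))
```

Syntactic analysis proper (dependency parsing, co-reference) builds on such tagged output, as described in the later paragraphs of this section.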
[025] In an embodiment, the system incorporates Machine Learning (ML) techniques/models, such as deep neural networks. These techniques/models function as adept learners, discerning patterns and structures within the input data. The training data encompasses a diverse array of questions and their corresponding contexts, facilitating the model's ability to generalize and generate questions for entirely new content.
[026] In an embodiment, the system’s contextual understanding capabilities reflect its sophistication. Trained to grasp the context of input materials, the system may ensure the generated questions are not only relevant but also contextually appropriate. By considering relationships between words, phrases, and concepts, the system formulates questions that probe the understanding of the subject matter.
[027] In an embodiment, diversity in question types is a notable strength of the system. The system can adeptly generate various question formats, including multiple-choice, true/false, short-answer, and essay-style questions. This adaptability is further emphasized by an ability to tailor the approach based on the desired question format, providing flexibility for different educational objectives.
[028] In an embodiment, an adaptive difficulty level module may be used to adapt the difficulty of the questions identified by the NLP module (or subordinate modules thereof), which may represent the system’s responsiveness to the audience’s proficiency levels or specific learning objectives. The AI model adeptly adjusts the difficulty level of generated questions, ensuring a tailored experience suitable for learners at different stages of proficiency. This adaptability underscores the system’s commitment to optimizing the learning journey, providing a useful and effective tool for diverse educational purposes.
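The adaptive difficulty adjustment may be sketched as a simple rule mapping the most recent performance score to the next difficulty level; the threshold values and level names below are hypothetical, not taken from the disclosure, which leaves the classification model's internals unspecified.

```python
# Illustrative thresholds: strong performance steps difficulty up, weak
# performance steps it down, middling performance leaves it unchanged.
def next_difficulty(current_level, score, levels=("easy", "medium", "hard")):
    """Map a 0-100 performance score to the next assessment's difficulty."""
    i = levels.index(current_level)
    if score >= 80:
        i = min(i + 1, len(levels) - 1)  # step up, capped at hardest
    elif score < 50:
        i = max(i - 1, 0)                # step down, floored at easiest
    return levels[i]

print(next_difficulty("medium", 92))  # steps up to "hard"
print(next_difficulty("medium", 35))  # steps down to "easy"
print(next_difficulty("hard", 92))    # already at maximum, stays "hard"
```

A trained classification model, as described in paragraph [031], could replace this rule while keeping the same interface: score in, next difficulty out.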
[029] In an embodiment, an AI-based platform concentrates on distinct language skills like reading, listening, speaking, vocabulary, and grammar, empowering learners to pinpoint and enhance their weak areas. The system having the AI-based platform may also be suitably adapted to improve proficiencies of students in skills such as narrative structures in prose, poetry, symbolisms, and the like, but not limited thereto. Within this platform, students can refine their English language skills. However, the system may be suitably adapted to improve linguistic proficiency of students in other languages such as Telugu, Tamil, Hindi, and other languages. In an embodiment, the system may facilitate ongoing improvement for students in an online learning environment, irrespective of their physical location.
[030] In an embodiment, the system may adopt a personalized learning approach, tailoring content based on individual proficiency levels and adapting as learners progress. In an embodiment, the system may focus specifically on listening, speaking, reading, vocabulary, and grammar, allowing learners to target and improve specific areas of weakness. In an embodiment, the system may incorporate adaptive learning techniques, adjusting difficulty levels based on learner performance to optimize the learning experience and provide immediate feedback, enabling learners to promptly identify and rectify errors, and enhancing the learning process. In an embodiment, the system may generate detailed analytics and reports, allowing teachers to analyse individual and class-wide performance trends, identify areas for improvement, and facilitate data-driven decisions. The system may provide a comprehensive platform for training students on one or more of proficiencies in vocabulary, listening, speaking, reading, writing, and the like.
[031] In some examples, the AI module may be trained using a machine learning pipeline. The pipeline may include gathering relevant data essential for language learning. This may include textual data, spoken language samples, or any other relevant information. The acquired data may be divided into two subsets: a training set (used to train the model) and a test set (used to evaluate the model's performance). The system may assess whether a specific target, such as language proficiency level, needs to be predicted or classified. If a learning task involves discovering patterns or structures within the data without labelled outputs, an unsupervised learning model is applied. In such examples, the acquired data may not have a corresponding label for each input/data point. If the acquired data includes labels for each input/data point, supervised learning models may be used. The system may identify whether the label is continuous or discrete, guiding the subsequent modelling approach. If the label is discrete, the system may employ a classification model to predict or classify language proficiency levels or outcomes. If the target variable is continuous, the system may utilize a regression model to predict language proficiency scores or other continuous outcomes. The chosen model may be trained using the training data to learn the patterns and relationships within the language learning dataset. The trained model is tested using the test data to evaluate its performance on new data. After testing, the system may recheck whether the target variable is continuous or discrete, ensuring alignment with the initial determination. The output from the test data may be evaluated using a loss function. For classification tasks, the system may generate a confusion matrix, for example, to assess the model's performance in predicting discrete outcomes. For regression tasks, the system may calculate a Root Mean Squared Error (RMSE), for example, as a measure of the model's accuracy in predicting continuous outcomes. In further embodiments, feedback from teachers, students, and/or other entities may be used to improve the AI models associated with the AI engine/agent.
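The two evaluation measures named above, a confusion matrix for discrete (classification) outcomes and RMSE for continuous (regression) outcomes, can be sketched with standard-library-only implementations; the sample labels and scores are invented for illustration.

```python
import math
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows are true labels, columns are predicted labels."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

def rmse(y_true, y_pred):
    """Root Mean Squared Error over paired continuous outcomes."""
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    )

# Classification: two proficiency labels, one misclassification.
cm = confusion_matrix(["A1", "A2", "A2"], ["A1", "A2", "A1"], ["A1", "A2"])
print(cm)  # [[1, 0], [1, 1]]

# Regression: predicted vs. actual proficiency scores.
print(rmse([70, 80], [72, 76]))  # sqrt((4 + 16) / 2) = sqrt(10)
```

Off-diagonal counts in the matrix expose which proficiency levels the classifier confuses, which is the diagnostic value the paragraph above alludes to.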
[032] The system may include one or more modules that are either integrated within or connected to the AI model, which may be configured to perform NLP tasks, since the system is intended to operate on natural language data. In an embodiment, the one or more modules may include a preprocessor module configured to preprocess the natural language inputs provided thereto. Since the natural language text may be received in a plurality of form factors, such as text, images, audio, and video, the preprocessor module may be included to clean the input data and make it compatible for the AI model to process. The one or more modules (or the NLP module) may also include separate subordinate modules configured to perform at least one of segmentation, tokenization, Part of Speech (POS) tagging, aggregation, and the like, each executing specific tasks in syntactic analysis and segmentation. The modules may include a negation detection module that discerns linguistic constructs denoting negation, a crucial aspect for sophisticated understanding. In an embodiment, the modules may include a named entity detection module interlinked with the dictionary, augmenting the system's proficiency in recognizing entities within the verbal context. The modules may also include a dependency parser that analyses syntactic structures, untangling complex dependencies between words. These elements feed into a co-reference parser and a relationship parser, fostering a deeper understanding of contextual particulars critical for comprehensive language comprehension. The result of these processes contributes to an enriched knowledge base, funnelling into downstream application modules, where the integrated language insights enable diverse applications in language learning. Through the seamless integration of these technical modules, the AI-driven language learning system achieves a capacity for syntactic and semantic analysis, representing an advanced approach to supporting language acquisition. In some embodiments, the AI model and/or the one or more modules may be native to the system, or accessed using third-party Application Programming Interfaces (APIs).
[033] Embodiments explained herein relate to AI technology. In particular, the present disclosure relates to a system and a method that uses AI to improve language learning proficiency for individual students. Various embodiments with respect to the present disclosure will be explained in detail with reference to FIGs. 1-4.
[034] FIG. 1 illustrates an example sequence diagram 100 representing a method for providing scores for each attempted question, in accordance with an embodiment of the present disclosure.
[035] Referring to FIG. 1, at 108, teacher 102 triggers an English assessment consisting of chapter-wise questions (or, generally, assessments in the form of a predefined data structure, or natural language text) to student 104 for assessment of listening, speaking, reading, vocabulary, and grammar proficiency. While embodiments of the present disclosure are described in the context of English language learning assessment, it may be appreciated that the embodiments of the present disclosure may be suitably adapted for learning other languages as well.
[036] At 110, student 104 may attempt any one of listening, speaking, reading, vocabulary, and grammar proficiency assessments, and submits to teacher 102. The attempt or responses of the student 104 may also be in the form of data structures (such as JSON, or XML). Thereafter, the result page may display the correct answers, explanations, and the overall test score. At 112, teacher 102 can provide constructive comments to aid students in their improvement efforts based on the performance of students. At 114, the student 104 attempts and submits speaking questions. When student 104 triggers or initializes the speech analysis module using a system 1000 for AI-based learning, at 116, an AI agent/engine 106 provides accuracy scores for each attempted question. Pronunciation details, including how each letter is pronounced, are presented. Additionally, individual scores for Common European Framework of Reference for Languages (CEFR), International English Language Testing System (IELTS), and Pearson Test of English (PTE) are provided in a detailed manner. It may be appreciated that the steps 108 to 116 shown in the sequence diagram 100 may be performed by computing devices automatically, or by human users operating corresponding computing devices.
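The mapping from per-question accuracy to CEFR-style band scores mentioned at 116 can be illustrated as follows; the band boundaries below are hypothetical, as the disclosure does not specify the actual mapping used by the AI agent/engine 106.

```python
# Hypothetical accuracy thresholds for CEFR-style bands, checked in
# descending order; these values are illustrative only.
CEFR_BANDS = [
    (90, "C2"), (80, "C1"), (70, "B2"), (55, "B1"), (40, "A2"), (0, "A1"),
]

def cefr_band(accuracy):
    """Return the first band whose threshold the accuracy score meets."""
    for threshold, band in CEFR_BANDS:
        if accuracy >= threshold:
            return band

print(cefr_band(85))  # "C1" under these illustrative thresholds
print(cefr_band(10))  # "A1"
```

Analogous lookup tables could translate the same accuracy score into IELTS or PTE equivalents, each with its own (separately calibrated) thresholds.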
[037] FIG. 2 illustrates an example schematic view 200 of an architecture of AI-based language learning platform, in accordance with an embodiment of the present disclosure.
[038] Referring to FIG. 2, in an educational ecosystem, students 104, teachers 102, parents 202, and (educational) institutes 204 (using the corresponding computing devices) may access the system 1000 via a Progressive Web Application (PWA), but not limited thereto. The PWA may be provided by a digital learning platform that integrates several components such as an authentication layer 208, a micro service-based platform 210, a delivery & operations infrastructure 212, and 3rd party data and content providers 214. The authentication layer 208 may provide a secure and personalized entry point, for example, by seeking and validating a username and password before allowing users to access the system. At the core lies the micro service-based platform 210, offering modular functionalities, including content service 210A, batch service 210B, parent service 210C, reporting service 210D, and wallet service 210E.
[039] Simultaneously, the delivery & operations infrastructure 212 includes essential components like observability 212A, Continuous Integration/Continuous Deployment (CI/CD) 212B, Authentication/Authorization 212C, secret management 212D, and ML Notebooks 212E. This infrastructure enhances secure access, continuous deployment, user authorization, confidential information management, and support for machine learning tasks. The 3rd party data and content providers 214 may include content curators 214A that contribute curated educational content, payment gateways 214B that facilitate financial transactions, rewards & gift card providers 214C that provide incentives, and partner integration 214D that enables collaboration with external entities.
[040] FIGs. 3A-3V illustrate example schematic representations 300A, 300B, 300C, 300D, 300E, 300F, 300G, 300H, 300I, 300J, 300K, 300L, 300M, 300N, 300O, 300P, 300Q, 300R, 300S, 300T, 300U, and 300V of a user interface, in accordance with an embodiment of the present disclosure. FIGs 3A-3V illustrate a user journey of the teachers and the students while accessing the system.
[041] Referring to FIG. 3A, upon logging into an application, a teacher accesses an ELP module. Within this module, after selecting the grade, the teacher can view all the chapters associated with the respective grade. Referring to FIG. 3B, for each chapter, there is a trigger button to activate for allowing the teacher to send tests to all students in the class and before sending the test, the teacher has the option to preview the questions. The test comprises sections named listening, speaking, reading, vocabulary, and grammar.
[042] In an embodiment, referring to FIG. 3C, each set of questions for listening, speaking, reading, vocabulary, and grammar can be previewed by the teacher before sending a test to students. Referring to FIG. 3D, upon assigning tests to students, the colour of the trigger button changes to indicate whether tests have been assigned. Once tests are assigned, students proceed to attempt them. Result pages are generated, displaying correct answers and providing explanations for each answer. Referring to FIG. 3E, the teachers can easily track students who have attempted the test and those who have not attempted the test. Referring to FIG. 3F, based on individual performance, teachers can provide constructive comments to aid students in their improvement efforts.
[043] In an embodiment, the teacher has the capability to view various metrics, including the average performance of all students, individual student results, and detailed result pages. This functionality allows the teacher to assess and analyse student performance comprehensively, facilitating informed feedback and targeted support for each student's learning journey. Referring to FIG. 3G, once the student logs into the system and selects "My ELP," they can view assignments in Listening, Speaking, Reading, vocabulary, and grammar as assigned by the teacher. Initially, the icons are coloured red, indicating that the test has not been attempted. Referring to FIG. 3H, the audio-based multiple-choice questions are part of listening. When the student clicks on the listening icon, they listen to the audio for each question and select the correct answers.
[044] In an embodiment, referring to FIG. 3I, after attempting all the questions, the student clicks the submit button. Referring to FIG. 3J, the result page displays the correct answers, explanations, and the overall test score. This enables students to identify their mistakes, focus on incorrect questions, and enhance their performance. Referring to FIG. 3K, for the Speaking section, the student records and saves their responses to each question. Referring to FIG. 3L, after attempting all the questions, the student clicks the submit button. Referring to FIG. 3M, upon submitting the test, on the result page an AI-powered speech analysis provides accuracy scores for each attempted question. Pronunciation details, including how each letter is pronounced, are presented. Additionally, individual scores for CEFR, IELTS, and PTE are provided in a detailed manner.
[045] In an embodiment, referring to FIG. 3N, paragraph-based questions are part of the Reading section. The student clicks on the Reading icon, addresses paragraph-based questions by reading the provided text, and attempts all associated questions. Referring to FIG. 3O, following completion, the student submits the test, leading to the visibility of the result pages. Paragraph-based questions serve as a valuable tool to foster critical thinking, enhance reading comprehension, and prepare students for academic and professional challenges. Referring to FIG. 3P, fill-in-the-blank questions are incorporated into the vocabulary section. When the student selects the vocabulary icon, they encounter drag-and-drop questions.
[046] In an embodiment, referring to FIGs. 3Q and 3R, the student proceeds to drag and drop the answers into the corresponding blanks. Upon completing all the questions, the student submits the test, leading to the visibility of result pages displaying correct answers, incorrect answers, and the total marks achieved. Referring to FIG. 3S, the student clicks on the grammar icon and finds fill-in-the-blank questions in the grammar section. Referring to FIGs. 3T and 3U, the student drags and drops the answers into the respective blanks, submits the test, and then proceeds to view the result pages. Referring to FIG. 3V, upon submission of the tests, all icons change to a green colour, indicating that the students have attempted the tests.
[047] The system 1000 may be configured to provide such functionality to the students, teachers, and/or other entities to participate in the educational process. As stated previously, the system 1000 may use AI agents/engines in order to perform at least one of: speech analysis, student performance assessment, and/or adaptive difficulty level adjustment for subsequent evaluations.
[048] Referring to FIG. 4, the system 1000 may include one or more processor(s) 402. The one or more processor(s) 402 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) 402 may be configured to fetch and execute computer-readable instructions stored in a memory 404. The memory 404 may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 404 may include any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as an Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[049] In an embodiment, the system 1000 may also include an interface(s) 406. The interface(s) 406 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as Input/Output (I/O) devices, storage devices, and the like. The interface(s) 406 may provide a communication pathway for one or more components of the system 1000. Examples of such components include, but are not limited to, processing module(s) 410 and database 408. In some embodiments, the database 408 may store the AI models and/or educational materials therein (such as question banks, study materials, video/audio lectures, datasets, and the like).
[050] In an embodiment, the processing module(s) 410 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing module(s) 410. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing module(s) 410 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing module(s) 410 may include a processing resource (for example, controller), to execute such instructions. In other embodiments, the processing module(s) 410 may be implemented by electronic circuitry. The database 408 may include data that is either stored or generated as a result of functionalities implemented by any of the components of the processing module(s) 410.
[051] In some embodiments, the processing module(s) 410 may include an assessment generation module 412, an AI agent module 414, and other engine(s) 216. The assessment generation module 412 may further include a listening module to generate assessments and evaluate the listening proficiency of students, a vocabulary module, a grammar module, and a reading module. The AI agent module 414 may include an application engine 416 and a Natural Language Processing (NLP) module 418. The NLP module 418 may further include a preprocessor module, a negation detection module, a named entity detection module, and a dependency parser. The processing module(s) 410 may also include a difficulty adaptation module 420. Other modules (not shown) may implement functionalities that supplement applications/functions performed by the system 1000.
[052] In some embodiments, the assessment generation module 412 may be configured to generate assessments for each of the language proficiencies. For example, a listening assessment may be generated using the listening module, a vocabulary assessment may be generated using the vocabulary module, a grammar assessment may be generated using the grammar module, and a reading assessment may be generated using the reading module. The assessments may be retrieved from the database 408, which may include pre-generated assessments. In other embodiments, the assessments may be stored in a vector database and retrieved using retrieval augmented generation (RAG). In some embodiments, the assessments may be generated in real-time based on data provided by a teacher using a teacher device. The data may include a topic, a student, a proficiency level, types of assessment, and the like, but is not limited thereto. The assessment generated by the assessment generation module 412 may be adjusted by the difficulty adaptation module 420, as explained subsequently.
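By way of a non-limiting illustration, the vector-database retrieval described above may be sketched as follows. The database entries, embedding values, and function names here are hypothetical stand-ins for illustration only; a deployed system would use learned embeddings and a real vector store rather than hand-written vectors.

```python
from math import sqrt

# Toy stand-in for a vector database of pre-generated assessments.
# Each entry pairs a (hypothetical) embedding with assessment metadata.
ASSESSMENT_DB = [
    {"embedding": [0.9, 0.1, 0.0], "topic": "verbs", "difficulty": "beginner",
     "question": "Fill in the blank: She ___ to school every day."},
    {"embedding": [0.8, 0.3, 0.1], "topic": "verbs", "difficulty": "advanced",
     "question": "Rewrite the sentence in the past perfect tense."},
    {"embedding": [0.1, 0.9, 0.2], "topic": "nouns", "difficulty": "beginner",
     "question": "Circle the nouns in the sentence."},
]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve_assessment(query_embedding, difficulty):
    """Return the stored assessment at the requested difficulty that is
    most semantically similar to the query embedding."""
    candidates = [a for a in ASSESSMENT_DB if a["difficulty"] == difficulty]
    return max(candidates, key=lambda a: cosine(query_embedding, a["embedding"]))

# A teacher request for a beginner assessment on verbs, embedded (hypothetically).
result = retrieve_assessment([1.0, 0.0, 0.0], "beginner")
```

In a full RAG pipeline the retrieved question would additionally be passed to a generative model as context; the sketch shows only the retrieval half.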
[053] In one or more embodiments, the assessment generation module 412 may utilize AI models, such as large language models, to automatically generate assessments and/or retrieve assessments from semantic databases. These ‘assessment generation AI models’ may be trained using at least one of contextually relevant data and adaptive difficulty levels. Training with contextually relevant data ensures that the generated assessments are pertinent to the subject matter and learning objectives. Training with adaptive difficulty levels enables the AI models to generate assessments that are appropriately challenging for students with varying proficiency levels.
[054] In one or more embodiments, the contextually relevant data for training the assessment generation AI models may be determined based on semantic analysis. For example, if a teacher requests an assessment on ‘verbs’ for students of a specific grade, semantic analysis may be employed to identify relevant textual resources, learning materials, and example questions related to verbs that are appropriate for that grade level. Training the assessment generation AI models with such inputs may allow the assessment generation AI models to generate and/or retrieve the assessments based on the topic and the difficulty level provided by the teacher, using the corresponding teacher device.
[055] In one or more embodiments, the assessment generation module 412 may generate audio-based assessments. For example, the listening module may generate audio-based question-and-answer assessments. In such assessments, the system 1000 may retrieve a question from the database 408, convert the question into an audio format, and present the audio question to the student. The system 1000 may then receive the student’s spoken answer, analyze the answer using speech analysis techniques, and provide feedback. This process may be repeated for subsequent questions in the audio-based assessment, creating an interactive and engaging listening comprehension exercise.
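The audio question-and-answer loop above can be sketched minimally as follows. The `text_to_speech` and `speech_to_text` functions are hypothetical stubs standing in for real TTS and ASR engines, and the student device is simulated with a callback; none of these names are prescribed by the disclosure.

```python
def text_to_speech(text):
    # Hypothetical stub: a real system would invoke a TTS engine here.
    return f"<audio:{text}>"

def speech_to_text(audio):
    # Hypothetical stub: a real system would run ASR on the recording.
    return audio.removeprefix("<audio:").removesuffix(">")

def run_audio_question(question, get_student_audio):
    """Present one question as audio and return the transcribed spoken answer."""
    audio_question = text_to_speech(question)
    student_audio = get_student_audio(audio_question)  # device I/O in practice
    return speech_to_text(student_audio)

# Simulated student device that "speaks" a fixed answer.
answer = run_audio_question("What is the past tense of 'go'?",
                            lambda _q: text_to_speech("went"))
```

Repeating this loop per question yields the interactive listening exercise described above, with the transcribed answer then handed to the feedback stage.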
[056] In one or more embodiments, the assessment generation module 412 may generate interactive assessments that include graphical elements. These interactive assessments may incorporate drag-and-drop interfaces, image-based questions, or other visual components to enhance student engagement and cater to different learning styles. For instance, a vocabulary assessment may include a graphical matching exercise where students drag words to match corresponding images or definitions. Similarly, a grammar assessment may present sentences with missing words and provide a set of graphical word choices for students to select and place in the blanks.
[057] In one or more embodiments, the generated assessment may be sent to the student device. Responses from the student may be received by the system 1000. Thereafter, the AI agent module 414 may assess the responses for grading and appraising the student's performance.
[058] In one or more embodiments, the responses from the students may be processed by the NLP module 418. The sub-modules utilized within the NLP module 418 may depend on the type of assessment provided. All types of assessments, whether speaking, reading, listening, or otherwise, may be processed using the NLP module 418. The response may be pre-processed using the preprocessor module. The preprocessor module may be configured to convert the responses into tokens which may be interpretable by AI models. The AI models may include large language models, large multimodal models, language models, diffusion models, Recurrent Neural Networks (RNNs), and the like, but are not limited thereto.
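A minimal sketch of the pre-processing step follows. Production tokenizers are subword-based (e.g. BPE); the whitespace-and-punctuation approach here is an illustrative simplification, not the disclosed preprocessor.

```python
import re

def preprocess(response):
    """Lowercase a student response, strip punctuation (keeping apostrophes
    in contractions), and split it into tokens."""
    cleaned = re.sub(r"[^\w\s']", " ", response.lower())
    return cleaned.split()

tokens = preprocess("She doesn't, go to school!")
```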
[059] In one or more embodiments, the AI agent module 414 may include the application engine 416. The application engine 416 may be configured to communicate and coordinate with the NLP module 418 and external AI models. The application engine 416 may further include tools which may allow the AI agent module 414 to perform actions. Examples of the tools may include transmission of Application Programming Interface (API) calls, function calling, data transformation tools for adapting data formats, routing mechanisms for directing data flow between modules, and the like, but not limited thereto.
[060] In one or more embodiments, the application engine 416 may use the tools to receive the student responses and identify the type of assessment submitted (e.g., listening, speaking, reading, grammar, vocabulary). Based on the identified assessment type, the application engine 416 may determine and invoke the appropriate NLP module 418 or external AI model for further processing. For example, if the assessment is a speaking assessment, the application engine 416 may utilize a speech analysis module (which may be a part of the NLP module 418 or an external AI model). This speech analysis module may employ Automatic Speech Recognition (ASR) models to transcribe the student's spoken response into text. Furthermore, the speech analysis module may incorporate models for phonetic analysis, prosody analysis, and fluency assessment, potentially utilizing models such as Hidden Markov Models (HMMs), deep neural networks (DNNs) trained for acoustic modeling, or sequence-to-sequence models for end-to-end speech recognition and analysis.
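The routing mechanism of the application engine can be sketched as a dispatch table mapping assessment types to handlers. The handler functions and table below are hypothetical placeholders; real handlers would wrap the NLP module 418 or external model APIs.

```python
def analyze_speech(response):
    # Placeholder for the speech analysis path (ASR, phonetics, prosody).
    return {"module": "speech", "input": response}

def analyze_grammar(response):
    # Placeholder for text-based analysis (grammar, vocabulary).
    return {"module": "grammar", "input": response}

# Hypothetical routing table mapping assessment types to processing modules.
ROUTES = {
    "speaking": analyze_speech,
    "listening": analyze_speech,
    "grammar": analyze_grammar,
    "vocabulary": analyze_grammar,
}

def route_response(assessment_type, response):
    """Dispatch a student response to the module matching its assessment type."""
    handler = ROUTES.get(assessment_type)
    if handler is None:
        raise ValueError(f"unknown assessment type: {assessment_type}")
    return handler(response)

routed = route_response("speaking", "recorded answer")
```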
[061] In one or more embodiments, processing the student responses through the NLP module 418 or external AI models may allow the system 1000 to evaluate student performance. The evaluation of student performance may be provided through various methods, such as numerical ratings, natural language text feedback, visual feedback (e.g., highlighting areas of improvement in text or speech waveforms), proficiency level classifications, or standardized test scores, but not limited thereto.
[062] For instance, in one or more embodiments, in the context of speech analysis, the evaluation process may involve receiving audio recordings of a student’s spoken responses from a computing device; extracting, using an AI engine, one or more phonemes from the audio recordings; determining a comparison score between the one or more phonemes and a set of reference phonemes for words pronounced in the audio recordings; and classifying or identifying phonemes pronounced correctly and incorrectly by the student based on the comparison score. Similarly, for grammar, the dependency parser may be used to determine the correctness of a sentence as a probability value.
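The phoneme comparison and classification steps can be sketched as follows, assuming the phonemes have already been extracted and aligned; real systems would use an alignment algorithm (e.g. forced alignment) rather than positional pairing, so this is illustrative only.

```python
def phoneme_scores(spoken, reference):
    """Score spoken phonemes against the reference pronunciation.

    Returns (comparison_score, correct, incorrect), where the comparison
    score is the fraction of reference phonemes matched by the speaker.
    Assumes the two sequences are already aligned position-by-position.
    """
    correct, incorrect = [], []
    for s, r in zip(spoken, reference):
        (correct if s == r else incorrect).append(r)
    score = len(correct) / max(len(reference), 1)
    return score, correct, incorrect

# "cat" pronounced with a wrong vowel: reference /k ae t/, spoken /k e t/.
score, ok, bad = phoneme_scores(["k", "e", "t"], ["k", "ae", "t"])
```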
[063] In one or more embodiments, the AI models associated with the system 1000 may be trained using diverse datasets and various machine learning paradigms. The input dataset for training may encompass a wide range of language learning materials, including textual data (e.g., books, articles, dialogues), audio recordings of spoken language (e.g., lectures, conversations, pronunciation examples), and multimedia content.
[064] In examples where supervised learning is employed, labeled datasets are utilized to train models for specific tasks. For instance, to train a model for evaluating grammar proficiency, a dataset of sentences labeled as grammatically correct or incorrect may be used. Similarly, for proficiency level prediction, student responses may be labeled with corresponding proficiency levels (e.g., beginner, intermediate, advanced) to train a classification model. In the context of speech analysis, audio recordings of speech paired with phonetic transcriptions or pronunciation scores may be used for supervised training of acoustic models and pronunciation assessment tools.
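A toy nearest-centroid classifier illustrates the supervised proficiency-level prediction described above. The feature vectors and labels are invented for illustration (e.g. accuracy and vocabulary breadth scaled to [0, 1]); a production system would use a trained neural or statistical model.

```python
# Labeled training data: (feature vector, proficiency label).
TRAINING = [
    ([0.2, 0.1], "beginner"), ([0.3, 0.2], "beginner"),
    ([0.5, 0.5], "intermediate"), ([0.6, 0.4], "intermediate"),
    ([0.9, 0.8], "advanced"), ([0.8, 0.9], "advanced"),
]

def centroids(samples):
    """Compute the mean feature vector per label: a minimal supervised model."""
    sums = {}
    for features, label in samples:
        total, count = sums.setdefault(label, ([0.0] * len(features), 0))
        sums[label] = ([t + f for t, f in zip(total, features)], count + 1)
    return {lbl: [t / n for t in total] for lbl, (total, n) in sums.items()}

def predict(features, model):
    """Assign the label whose centroid is closest (squared distance)."""
    return min(model, key=lambda lbl: sum((f - c) ** 2
               for f, c in zip(features, model[lbl])))

model = centroids(TRAINING)
level = predict([0.55, 0.45], model)
```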
[065] In examples where unsupervised learning is applied, unlabeled datasets may be used to discover patterns and structures within language data. For example, clustering algorithms may be used to group vocabulary words based on semantic similarity, or topic modeling techniques could be applied to identify common themes in reading passages. Unsupervised learning may also be useful for identifying patterns in student learning behaviors, such as common error types or learning style preferences, without requiring pre-defined labels.
[066] Reinforcement learning may be incorporated to refine the AI models based on feedback, such as teacher input or student interactions. For example, in the difficulty adaptation module 420, a reinforcement learning agent may be trained to dynamically adjust the difficulty level of questions based on student performance and teacher feedback. Teachers may provide feedback on the relevance or appropriateness of generated questions, and this feedback may be used to reward or penalize the AI agent’s actions. Similarly, reinforcement learning may be used to optimize the generation of personalized feedback for students, where student engagement and improvement in subsequent attempts serve as rewards for the AI model.
[067] In one or more embodiments, the difficulty adaptation module 420 may function as a classification model configured to dynamically adjust the difficulty level of assessments. The difficulty adaptation module 420 may be trained using a reinforcement learning approach that may use student performance metrics, as may be determined by the AI agent module 414 based on the assessment scores. The student performance metrics may be aggregated or determined based on the assessment made for each student. The metrics may include, but are not limited to, subject-wise scores, number of assessment attempts, teacher-assigned difficulty levels, nature of topics attempted, learning rate, time spent on tasks, error patterns, and motivation scores. The difficulty adaptation module 420 may be configured to determine an appropriate difficulty level for subsequent assessments based on at least one of: a comparison of the aggregated metrics with pre-established baseline values, a comparison of the aggregated metrics with aggregated metrics of other students (benchmarking peer performance), and/or pattern analysis identified from the aggregated metrics.
[068] For instance, a motivation score, which may be indirectly indicated by the frequency with which a student attempts new assessments or explores new topics, may serve as a crucial metric. Reinforcement learning may be employed to optimize difficulty levels to maintain or enhance student motivation. As excessively high or low difficulty levels can negatively impact student motivation and learning engagement, the difficulty adaptation module 420 may be configured to determine the difficulty level to ensure motivation is sustained. For example, if a student’s motivation score decreases, the difficulty adaptation module 420 may infer that the current difficulty level is not optimal. The difficulty adaptation module 420 may then adjust the difficulty level, either increasing it if the student appears unchallenged or decreasing it if the student seems overwhelmed, in order to identify a difficulty level that is more conducive to sustained motivation and effective learning.
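The adjustment logic above can be sketched as a simple rule: when motivation drops below a baseline, move one difficulty step up if the student is scoring well (likely unchallenged) or down otherwise (likely overwhelmed). The level names and threshold values are illustrative assumptions, not values prescribed by the disclosure.

```python
DIFFICULTY_LEVELS = ["beginner", "easy", "medium", "hard", "expert"]

def adjust_difficulty(current, motivation_score, accuracy,
                      baseline=0.5, accuracy_target=0.7):
    """Shift difficulty one step when motivation falls below the baseline.

    High accuracy with low motivation suggests the student is unchallenged,
    so difficulty rises; low accuracy suggests overwhelm, so it falls.
    """
    idx = DIFFICULTY_LEVELS.index(current)
    if motivation_score >= baseline:
        return current  # motivation is healthy; leave the level alone
    if accuracy > accuracy_target:
        idx = min(idx + 1, len(DIFFICULTY_LEVELS) - 1)
    else:
        idx = max(idx - 1, 0)
    return DIFFICULTY_LEVELS[idx]

# Low motivation with high accuracy: raise the difficulty one step.
harder = adjust_difficulty("medium", motivation_score=0.3, accuracy=0.9)
```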
[069] In one or more embodiments, various adaptation metrics or values or scores, beyond motivational scores, may be employed by the difficulty adaptation module 420 to dynamically determine and adjust the assessment difficulty level. These adaptation metrics may encompass a wide spectrum of indicators reflecting a student’s learning progress, engagement, and overall performance. The adaptation metrics/values/scores may include, but are not limited to, learning rate, accuracy trends across different skill domains, consistency of performance, time taken to complete assessments, areas of strength and weakness, and patterns of interaction with the learning platform.
[070] In some embodiments, the difficulty adaptation module 420 may be trained using reinforcement learning techniques. In such embodiments, an adaptation metric, such as the motivation score or learning rate, may be utilized within a reward function to guide the learning process of the difficulty adaptation module 420. For instance, if an adaptation metric, such as the motivation score, is compared against a predefined baseline range, the outcome of this comparison may determine the reward signal for the reinforcement learning process. If the motivation score falls within or exceeds the baseline range, it may be considered a positive outcome, resulting in a ‘positive reward’ signal. Conversely, if the motivation score falls below the baseline range, indicating potential disengagement or frustration, it may be considered a negative outcome, resulting in a ‘negative reward’ or ‘penalty’ signal. This reward signal may be used to update the weights of the classifier used by the difficulty adaptation module 420, thereby improving accuracy of adjusting the difficulty level of subsequent assessments to optimize student engagement and learning outcomes.
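The reward-driven weight update can be sketched as below. This is a toy stand-in for the reinforcement-learning update: the reward function, baseline range, and scalar weight nudge are illustrative assumptions, whereas a deployed module would update a full policy or Q-function.

```python
def reward_signal(motivation_score, baseline_range=(0.4, 1.0)):
    """+1.0 when the motivation score meets or exceeds the baseline range,
    -1.0 (a penalty) when it falls below."""
    low, _high = baseline_range
    return 1.0 if motivation_score >= low else -1.0

def update_weight(weight, reward, taken_action_value, learning_rate=0.1):
    """Nudge a classifier weight toward actions that earned positive reward."""
    return weight + learning_rate * reward * taken_action_value

w = 0.5
w = update_weight(w, reward_signal(0.7), taken_action_value=1.0)  # rewarded
w = update_weight(w, reward_signal(0.2), taken_action_value=1.0)  # penalised
```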
[071] Another aspect of the present disclosure relates to a method for artificial intelligence (AI)-based language learning, the method including generating, by a processor, assessments using one or more assessment generation AI models based on a difficulty level and a topic using an assessment generation model, receiving, by the processor, student responses to the generated language learning assessments from a student device, determining, by the processor, a student performance score for each student response using an AI agent module, determining, by the processor, the difficulty level of subsequently generated language learning assessments based on the determined student performance score using a classification model, and training, by the processor, at least one of: the assessment generation model and/or the classification model based on the difficulty level and/or the student performance score.
[072] In one or more embodiments, the method may include training the classification model using reinforcement learning, the reinforcement learning being based on adaptation metrics determined from the student performance score.
[073] In one or more embodiments, the assessment generation model may be trained using contextually relevant question data determined based on semantic analysis of a teacher request for an assessment.
[074] In one or more embodiments, performing speech analysis and scoring of student responses using the AI agent module, includes receiving, by the processor, audio recordings of student spoken responses, extracting, by the processor, using an AI engine, one or more phonemes from the audio recordings, determining, by the processor, a comparison score between the one or more phonemes and a set of reference phonemes for words pronounced in the audio recording, and classifying, by the processor, phonemes pronounced correctly and incorrectly by the student based on the comparison score.
[075] While the foregoing describes various embodiments of the disclosure, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the disclosure is determined by the claims that follow. The disclosure is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the disclosure when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE PRESENT DISCLOSURE
[076] The present disclosure reduces time educators spend on question creation, allowing them to focus more on teaching.
[077] The present disclosure provides tailored questions to suit specific learning objectives or assessment criteria.
[078] The present disclosure provides a consistent and standardized approach to question generation.
[079] The present disclosure provides an easy way to accommodate a wide range of educational content and subjects.
CLAIMS:1. A system (1000) for artificial intelligence (AI)-based language learning, the system (1000) comprising:
a processor (402); and
a memory (404) operatively coupled to the processor (402), wherein the memory (404) comprises one or more processor-executable instructions, which, when executed, cause the processor (402) to:
generate assessments using one or more assessment generation AI models based on a difficulty level and a topic using an assessment generation model;
receive student responses to the generated language learning assessments from a student device;
determine a student performance score for each student response using an AI agent module;
determine the difficulty level of subsequently generated language learning assessments based on the determined student performance score using a classification model; and
train at least one of: the assessment generation model and/or the classification model based on the difficulty level and/or the student performance score.
2. The system (1000) as claimed in claim 1, wherein the processor (402) is further configured to train the classification model using reinforcement learning, the reinforcement learning being based on adaptation metrics determined from the student performance score.
3. The system (1000) as claimed in claim 2, wherein the adaptation metrics comprise at least one of subject-wise scores, number of assessment attempts, teacher-assigned difficulty levels, nature of topics attempted, learning rate, time spent on tasks, error patterns, and motivation scores.
4. The system (1000) as claimed in claim 1, wherein the assessment generation model is trained using contextually relevant question data determined based on semantic analysis of a teacher request for an assessment.
5. The system (1000) as claimed in claim 1, wherein the assessment generation model is configured to generate audio-based question and answer assessments, wherein to generate the audio-based question and answer assessments, the processor (402), by the assessment generation model, is configured to:
retrieve the generated assessment;
convert the assessment into an audio; and
transmit the audio to a student device via an interface.
6. The system (1000) as claimed in claim 1, wherein to perform speech analysis and scoring of student responses, the AI agent module is configured to:
receive audio recordings of student spoken responses;
extract, using an AI engine, one or more phonemes from the audio recordings;
determine a comparison score between the one or more phonemes and a set of reference phonemes for words pronounced in the audio recording; and
classify phonemes pronounced correctly and incorrectly by the student based on the comparison score.
7. A method for artificial intelligence (AI)-based language learning, the method comprising:
generating, by a processor (402), assessments using one or more assessment generation AI models based on a difficulty level and a topic using an assessment generation model;
receiving, by the processor (402), student responses to the generated language learning assessments from a student device;
determining, by the processor (402), a student performance score for each student response using an AI agent module;
determining, by the processor (402), the difficulty level of subsequently generated language learning assessments based on the determined student performance score using a classification model; and
training, by the processor (402), at least one of: the assessment generation model and/or the classification model based on the difficulty level and/or the student performance score.
8. The method as claimed in claim 7, wherein the method comprises training the classification model using reinforcement learning, the reinforcement learning being based on adaptation metrics determined from the student performance score.
9. The method as claimed in claim 7, wherein the assessment generation model is trained using contextually relevant question data determined based on semantic analysis of a teacher request for an assessment.
10. The method as claimed in claim 7, wherein performing speech analysis and scoring of student responses using the AI agent module, comprises:
receiving, by the processor (402), audio recordings of student spoken responses;
extracting, by the processor (402), using an AI engine, one or more phonemes from the audio recordings;
determining, by the processor (402), a comparison score between the one or more phonemes and a set of reference phonemes for words pronounced in the audio recording; and
classifying, by the processor (402), phonemes pronounced correctly and incorrectly by the student based on the comparison score.

Documents

Application Documents

# Name Date
1 202441018702-STATEMENT OF UNDERTAKING (FORM 3) [14-03-2024(online)].pdf 2024-03-14
2 202441018702-PROVISIONAL SPECIFICATION [14-03-2024(online)].pdf 2024-03-14
3 202441018702-POWER OF AUTHORITY [14-03-2024(online)].pdf 2024-03-14
4 202441018702-FORM FOR STARTUP [14-03-2024(online)].pdf 2024-03-14
5 202441018702-FORM FOR SMALL ENTITY(FORM-28) [14-03-2024(online)].pdf 2024-03-14
6 202441018702-FORM 1 [14-03-2024(online)].pdf 2024-03-14
7 202441018702-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [14-03-2024(online)].pdf 2024-03-14
8 202441018702-EVIDENCE FOR REGISTRATION UNDER SSI [14-03-2024(online)].pdf 2024-03-14
9 202441018702-DRAWINGS [14-03-2024(online)].pdf 2024-03-14
10 202441018702-DECLARATION OF INVENTORSHIP (FORM 5) [14-03-2024(online)].pdf 2024-03-14
11 202441018702-FORM-5 [14-03-2025(online)].pdf 2025-03-14
12 202441018702-DRAWING [14-03-2025(online)].pdf 2025-03-14
13 202441018702-CORRESPONDENCE-OTHERS [14-03-2025(online)].pdf 2025-03-14
14 202441018702-COMPLETE SPECIFICATION [14-03-2025(online)].pdf 2025-03-14