
System And Method For Offline Learning Management And Personalized Learning

Abstract: The present disclosure provides a system (108) and a method (500) for offline and/or personalized learning management. The system (108) allows for personalized learning without the need for digital tools, while still providing digital evaluation, analysis, and management. The system (108) includes a processor (202) configured to generate a first set of embeddings representing a knowledge base, combine the first set of embeddings with a second set of embeddings derived from an external corpus into cross-referenced embeddings, extract a set of cross-referenced embeddings based on a query instruction set, and generate natural language outputs based on the extracted set of cross-referenced embeddings using a language model.


Patent Information

Filing Date: 22 March 2024
Publication Number: 39/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

WEXL EDU PRIVATE LIMITED
Plot no-72, S.no 12, 2nd Floor, Journalist Colony, Jubilee Hills, Hyderabad, Telangana - 500033, India.

Inventors

1. LINGA, Naveen Kumar
Near Lingampally, MMTS Station, Plot No 21, Laxmi Vihar 2, Nalagandla Serilingampally, Rangareddy, Telangana - 500019, India.
2. BHARAT, Akkinepalli
6-2-382 Plot No. 23, Vanasthalipuram Phase-1, Near Community Hall, Vanasthalipuram Hayathnagar, K.V.Rangareddy, Telangana – 500070, India.

Specification

TECHNICAL FIELD
[001] The present disclosure relates to systems and methods for education. In particular, the present disclosure provides a system and a method for offline education of students with digital evaluation, analysis, and management, and personalized learning.

BACKGROUND
[002] Offline classes, pen-and-paper tests, teacher-student interactions, etc., have been typical modes of imparting education to students. While offline education has the advantages of increased human interaction, lower screen time, etc., it still has several disadvantages. For example, offline education severely limits the extent to which a curriculum can be personalized to the needs of each student. Further, teachers have to spend significant amounts of time curating content and assessments for students, and spend inordinate amounts of time on evaluation of students and other administrative tasks. With the advent of the Internet and digitization of existing institutions, online education has been seen as a solution to some of the aforementioned problems. However, online/digital education has its own set of disadvantages, such as high screen time, lowered human interaction, lack of feedback, higher costs of infrastructure (and correspondingly lower access), etc. Further, existing automated solutions for personalized learning and/or assessment generation have poor accuracy, and consume significant computational resources (both in terms of processing and memory). Additionally, there is a lack of solutions that allow adapting content curation and assessment generation to different curricula. Therefore, there is a need for a system and method that merges the advantages of traditional offline education with those of online education.

OBJECTS OF THE PRESENT DISCLOSURE
[003] A general object of the present disclosure is to provide a system and method that obviates the above-mentioned limitations of existing systems and methods efficiently.
[004] Another object of the present disclosure is to provide a system and a method that allows teachers to curate learning materials from a knowledge base based on different curricula, and generate tests/evaluations/assessments personalized for each student.
[005] Another object of the present disclosure is to provide a system and a method for integration of offline evaluation means with digital Learning Management Systems (LMS).
[006] Another object of the present disclosure is to provide a system and a method for retrieving texts from a knowledge base corpus (or first corpus), through cross-referencing of vectors with those of a curriculum corpus (or second corpus).
[007] Another object of the present disclosure is to develop customized embeddings to reduce computational expenditure for retrieval and/or generation tasks.
[008] Another object of the present disclosure is to develop a knowledge base/first corpus based on the digitization of physical documents.
[009] Yet another object of the present disclosure is to provide a system and a method that provides assessments of students' learning to all stakeholders of education, including, but not limited to, students, teachers, parents, institutions, and the like.

SUMMARY
[010] An aspect of the present disclosure relates to a system for personalized learning, including a processor and a memory operably coupled to the processor. The memory includes one or more processor-executable instructions configured to cause the processor to digitize one or more printed documents into a knowledge base, generate a first set of embeddings representing the knowledge base, combine the first set of embeddings with a second set of embeddings derived from an external corpus into a set of cross-referenced embeddings, extract a subset from the set of cross-referenced embeddings based on a query instruction set, and generate natural language outputs based on the extracted set of cross-referenced embeddings using a language model.
[011] In one or more embodiments, the processor may be further configured to identify a classification of the first set of embeddings by processing the first set of embeddings through a classifier.
[012] In one or more embodiments, the set of cross-referenced embeddings may be extracted based on at least a semantic similarity value with the query instruction set.
[013] In one or more embodiments, the processor may be further configured to validate the generated natural language outputs based on at least one of user feedback, a rule-based method, and/or an evaluation artificial intelligence (AI) engine, and train the language model based on the validation.
[014] In one or more embodiments, the natural language outputs may be generated by the language model based on the query instruction set and a system prompt.
[015] In one or more embodiments, the system prompt may include at least one of student data, student learning profile data, and/or entity data.
[016] In one or more embodiments, the processor may be configured to dynamically generate the system prompt or receive the system prompt from a computing device.
[017] In one or more embodiments, the processor may be further configured to receive at least one natural language response to the natural language output from a computing device, determine an evaluation score based on the at least one natural language response, and adjust a difficulty score based on the evaluation score, where the language model is configured to generate subsequent natural language outputs based on at least the difficulty score.
[018] An aspect of the present disclosure relates to a system for personalized learning, including a processor and a memory operably coupled to the processor. The memory includes one or more processor-executable instructions configured to cause the processor to receive a query instruction set, extract a subset from a set of cross-referenced embeddings based on the query instruction set, where the set of cross-referenced embeddings is obtained by combining a first set of embeddings derived from one or more digitized printed texts and a second set of embeddings from an external corpus, and generate natural language outputs based on the extracted set of cross-referenced embeddings using a language model.
[019] An aspect of the present disclosure relates to a method for personalized learning, including digitizing, by a processor, one or more printed documents into a knowledge base, generating, by the processor, a first set of embeddings representing the knowledge base, combining, by the processor, the first set of embeddings with a second set of embeddings derived from an external corpus into a set of cross-referenced embeddings, extracting, by the processor, a subset from the set of cross-referenced embeddings based on a query instruction set, and generating, by the processor, natural language outputs based on the extracted set of cross-referenced embeddings using a language model.
[020] Various aspects, objects, and/or advantages of the present disclosure are described in reference to the detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS
[021] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[022] FIG. 1 illustrates an example architecture of a system for offline learning management, in accordance with an embodiment of the present disclosure.
[023] FIG. 2 illustrates an example schematic view of a control unit for the system, in accordance with an embodiment of the present disclosure.
[024] FIGs. 3A-3J illustrate example schematic representations of a user interface of the system, in accordance with an embodiment of the present disclosure.
[025] FIG. 4 illustrates a block representation of interaction between components of the system generating natural language output for personalized learning, in accordance with an embodiment of the present disclosure.
[026] FIG. 5 illustrates a flowchart of a method for personalized learning, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION
[027] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
[028] The present disclosure relates to systems and methods for education. In particular, the present disclosure provides a system and a method for offline education of students with digital evaluation, analysis, and management, and personalized learning.
[029] Aspects of the present disclosure aim to provide a hybrid platform for offline and digital education. Systems and methods proposed in the present disclosure aim to provide targeted education to students without requiring screen time (or with minimal screen time) for students, while allowing teachers, parents, and/or administrators to manage curriculum, learning materials, tests/assessments, and the like, for students using digital tools such as Learning Management Systems (LMS), while enabling personalized learning.
[030] The present disclosure provides a system and a method to develop knowledge bases (or a corpus) based on digitization of physical documents or printed text (such as textbooks, print-outs, newspapers, journals, books, and the like, but not limited thereto). The digitized corpus may be converted into custom (vector) embeddings for use by language models, and stored in a vector database. The custom embeddings may be optimized for retrieval and text generation of content based on the knowledge base (first corpus) and/or a query instruction set. The embeddings may be stored in a vector database, and may be queried using the instruction set. In some embodiments, the custom embeddings may be combined with those of an external corpus (i.e., second corpus) to form cross-referenced embeddings. Based on the query instruction set, the relevant cross-referenced embeddings may be extracted, based on which the language model may generate natural language outputs. The language model may be trained based on user feedback (such as through reinforcement learning) to improve the accuracy of the language model.
[031] Further, the natural language output may be converted into learning modules (which may be representative of questions or curated study material for queried topics/concepts in the query instruction set). In some embodiments, the learning modules may correspond to the natural language text for topics, chapters, and/or subjects, or assessments such as questions (for example, short answer form, multiple choice questions, long-answer type questions, Assertion-Reasoning, and the like, but not limited thereto). Natural language responses to the natural language outputs may be received (or scanned from another printed text document) from students, which may be evaluated using rule-based methods, other humans (such as teachers), and/or artificial intelligence (AI) models adapted for this purpose. The evaluation (score) may be used for determining a difficulty score, which may be used by the language model for generating subsequent natural language outputs, thereby allowing the difficulty level to be adjusted.
[032] For example, the system and the method of the present disclosure may relate to using offline evaluation means, such as Optical Mark Recognition (OMR) and/or Optical Character Recognition (OCR), for evaluating performance of students. The offline evaluation means may be received and analysed by the system to determine the performance of the student. The system may determine one or more categories or sub-categories (such as subjects, topics, and/or sub-topics) where the student is performing above and/or below a threshold metric (such as by scoring above/below a threshold). The performance of the student may be analysed and reports may be generated for teachers, parents, and other stakeholders. The reports may be used by teachers to curate lessons, learning materials, study plans, and the like, but not limited thereto, for the students. For example, if the student is not performing well (for example, below a threshold) in one or more of the categories, then the teachers may curate lessons such that more emphasis is provided to such topics, thereby facilitating targeted/customized/personalized learning for the students, while also minimizing digital exposure for the students. Further, the reports generated by the system may also be used by stakeholders, such as parents of the students to evaluate progress, and by administrators to view a student's academic journey and to aggregate data to view the performance of the institution.
[033] Various embodiments with respect to the present disclosure will be explained in detail with reference to FIGs. 1-5.
[034] FIG. 1 illustrates an example architecture 100 of a system 108 for offline learning management, in accordance with an embodiment of the present disclosure. As shown, the system 108 includes a scanner 112, a printer 114, and a control unit 116. The system 108 may be configured to generate/print offline education means 105 and offline evaluation means 106, and receive natural language responses from a student 102 through the offline evaluation means 106, such as through OMR 107. The OMR 107 may also be printed by the system 108. The system 108 may be operated by a teacher 104 using a computing device 110.
[035] The scanner 112 and the printer 114 may be any scanner and printer known in the art respectively. The control unit 116 may be configured to allow the system 108 to communicate with the scanner 112, the printer 114, and/or the computing device 110 operated by the teacher 104. The control unit 116 may be any controller configured to control hardware (such as the printer 114 and the scanner 112) using electronic control signals.
[036] Components of the system 108 are described in reference to FIG. 2. The system 108 includes one or more processor(s) 202 that may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) 202 may be configured to fetch and execute computer-readable instructions stored in a memory 204 of the system 108. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[037] In an embodiment, the system 108 may include interface(s) 206. The interface(s) 206 may include a variety of interfaces, for example, interfaces for data input and output (I/O) devices, storage devices, and the like. The interface(s) 206 may also provide a communication pathway for one or more components of the control unit 116. Examples of such components include, but are not limited to, processing engine(s) 208 and a database 210, where the processing engine(s) 208 may include, but not be limited to, a digitization engine 211, a cross-referencing engine 213, a retrieval engine 212, a generation engine 214, a training engine 215, a control engine 216, a response extraction engine 217, an analytics engine 218, and other engine(s) 220. In an embodiment, the other engine(s) 220 may include, but not be limited to, a data management engine, an input/output engine, and a notification engine, which may facilitate the operation of the system 108. The interface(s) 206 may also allow the system 108 to communicate with the computing device 110 associated with the teacher 104.
[038] In an embodiment, the processing engine(s) 208 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 208 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 208 may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 208. In such examples, the system 108 may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 108 and the processing resource. In other examples, the processing engine(s) 208 may be implemented by electronic circuitry.
[039] In some embodiments, the control unit 116 may have instructions to operate automatically. In other embodiments, the control unit 116 (and correspondingly the system 108) may be operated by the teacher 104 using their computing device 110. Examples of the computing device 110 may include, but not be limited to, smartphones, laptops, desktops, tablets, phablets, servers, and the like. In some embodiments, the computing device 110 may be embedded into the system 108. In other embodiments, the computing device 110 may be external to the system 108. In such embodiments, the system 108/control unit 116 and the computing device 110 may be in communication with each other using wired communication means (such as electrical wires, optical fiber cables, and the like), or wireless communication means (such as Wireless Fidelity (Wi-Fi), telecommunication networks, near-field communication (NFC), and the like). In some embodiments, the system 108 may be implemented in a computing device (such as the computing device 110), or in a server.
[040] In preferred implementations, the student 102 may be provided with physical offline resources. For example, the student 102 may be provided with offline education means 105, such as curated portions, chapters, topics, and the like, from any one or a combination of: text books, notes, printed materials, lectures, scientific equipment for experimentation, and the like, but not limited thereto. The student 102 may be evaluated using offline evaluation means 106, such as through written examinations, viva/interviews, presentations, and the like. For example, a question paper printed on paper may be provided to the student 102. The offline education means 105 and the offline evaluation means 106 may be generated/curated by the system 108. The student 102 may provide their natural language response to the question paper through the offline evaluation means 106, such as OMR 107 provided therewith. The natural language responses may be analyzed and evaluated by the system 108.
[041] In some embodiments, the system 108 may allow teachers 104 to create and print the offline evaluation means 106. The computing device 110 may provide teachers 104 with an interface (such as Graphical User Interfaces (GUIs), Application Programming Interfaces (APIs), Command Line Interfaces (CLIs), and the like, but not limited thereto). Using the interface, the teachers 104 may search, select, and insert one or more natural language texts indicative of questions into a (question paper) document. In some embodiments, the process of generating the (question paper) document may be automated, as described subsequently in the present disclosure. In such embodiments, the system 108 may generate the document based on the previous performance of the student 102.
[042] The natural language texts indicative of questions may be stored in the database 210. The database 210 may include a corpus of natural language texts/questions associated with a plurality of categories (such as subjects, topics or sub-topics). Each natural language text/question may be associated with/assigned to at least one of the subjects/topics. In some embodiments, the corpus may be a knowledge base stored in the database 210, in the form of a vector database. In some embodiments, the knowledge base (or first corpus) may be generated by digitization of printed text documents. The digitization of the printed documents is described subsequently in the present disclosure.
[043] The teacher 104 or the system 108 may filter the questions based on the student's previous performance. For example, questions from the subjects/topics where the student 102 is struggling (or unable to achieve a score greater than a threshold) may be retrieved to generate the question paper. The question paper/document may be printed and shared with the students 102, thereby eliminating the need for students 102 to view screens for answering the questions. Further, by automating the process of question paper generation, the system 108 may alleviate the burden on teachers 104 of typing, proofreading, and selecting questions, and the like, but not limited thereto, which otherwise requires considerable time and effort. Additionally, the system 108 may provide teachers 104 with a convenient interface (such as drag-and-drop interfaces, check box interfaces, Large Language Model (LLM) based question generation interfaces, chat interfaces, and the like) for generating evaluation means for the students 102.
[044] The question paper (or generally the offline evaluation means 106) may be answered by the student using means such as OMR 107. The OMR 107 may be scanned by the system 108 using the scanner 112. Responses of the student may be extracted from the OMR 107. The OMR sheet may be a standard A4-size black and white paper sheet, which may be lower in cost in comparison to other OMR solutions known in the art. Further, the scanner 112 used to extract responses from the OMR 107 may be any one or a combination of, including, but not limited to, OMR scanners known in the art, cameras of smartphones, other general-purpose document scanners, and the like. In other embodiments, the students 102 may provide handwritten responses on paper to the system 108. The system 108 may scan the paper, perform OCR (using an OCR engine (not shown)), and extract the student's response as natural language text. The responses of the student 102 may be extracted (such as using the response extraction engine 217), and aggregated and analysed by the system 108 for evaluation (such as by the analytics engine 218). In further embodiments, audio recordings of the student 102 may be collected and transcribed into natural language texts, using automatic speech recognition (ASR) engines (not shown). The OCR engines and the ASR engines may be local models available within the system 108, or may be external models accessed through APIs. The natural language texts may then be used for analysing and evaluating the student's performance. By automating analysis and evaluation, the workload of the teacher 104 may be reduced. Further, the system 108 may provide faster and more accurate evaluation of the performance of the student 102.
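As a concrete illustration of the OCR path described above, the following is a minimal sketch in Python, assuming the pytesseract wrapper around Tesseract as the OCR engine; the function name and file path are hypothetical, and the disclosure does not mandate a particular OCR engine.

```python
# Minimal OCR sketch for the response extraction path (illustrative only).
from PIL import Image
import pytesseract  # assumes Tesseract is installed locally

def extract_response_text(scan_path: str) -> str:
    """OCR a scanned answer sheet and return its text content."""
    image = Image.open(scan_path).convert("L")  # greyscale often helps OCR
    return pytesseract.image_to_string(image)

# Hypothetical usage with a scanned handwritten response:
response_text = extract_response_text("student_42_answer_sheet.png")
```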
[045] The system 108 may generate detailed reports on the performance of the student 102. The OMR 107, for instance, may include an identifier associated with the student 102. The performance (through an evaluation score) of the student 102 may be determined, analysed, and associated with the corresponding student 102 using the identifier. The evaluation score may be generated by an evaluation AI engine (not shown), which may be trained to assess and/or evaluate the natural language response of the student. For example, the system 108 may generate an evaluation score for the student, a percentile, categories/subjects/topics in which the student 102 is not performing well (below a threshold), types of mistakes made, recommendations for remedial learning, student leader boards, and the like. The report may be made available at various levels, including, but not limited to, students 102, teachers 104, parents, institutions, and the like.
[046] In some embodiments, the detailed reports may be transmitted to devices used by each of the stakeholders. In some embodiments, the reports may be sent to the stakeholders using means including, but not limited to, Short Messaging Service (SMS), APIs, webhooks, emails, and the like. The reports may be stored in internal databases for future reference and review.
[047] The reports may be generated in the form of a document that compiles raw data associated with the student’s performance, and/or aggregated/processed data. In other embodiments, the reports may be provided in the form of an interactable dashboard where the presentation of student performance data can be viewed dynamically by the teacher or other stakeholders. In yet other embodiments, the reports may be indicative of raw data made available on relational databases. The reports may also be associated with the student 102, and stored in the database 210 for future reference. Storing of the reports may also allow the system 108 to comply with regulatory requirements.
[048] In some embodiments, the aforementioned steps of generating questions, scanning and extracting responses, and analysing and generating reports may be performed using corresponding AI engines/models. The AI engines/models may be any one or a combination of, including, but not limited to, convolutional neural networks, recurrent neural networks, transformers (including encoder-decoder models, encoder-only models, and/or decoder-only models), ensemble models, large multimodal models, generative AI models, and the like.
[049] Based on the report, the teacher 104 (or the system 108, automatically) may recommend offline education means 105 through creating a study plan/course of action for the student. The offline education means 105 may be customized/personalized/targeted for each student. For example, the system 108 may provide a learning material (such as physical study materials like books, notes, recordings, etc.) printed/stored in repositories (such as in a library, server, etc.) to address the topics where the student 102 is struggling/performing below a threshold. Further, the system 108 may automatically generate practice questions based on previous performance to strengthen the student's learning in that particular subject/topic/sub-topic. The system 108 may allow teachers, using the computing device 110, to save the customized questions created, thereby allowing for targeted learning. The process of generating offline education means 105, and evaluating performance of students 102 on the offline evaluation means 106, may be iterated until the performance of the student 102 progresses to a value greater than a threshold, indicating that the student 102 has learned the subject/topic/sub-topic. In some embodiments, the system 108 may be configured to adjust the difficulty level/score for each of the students, based on the evaluation score of corresponding natural language responses.
[050] In some embodiments, the system 108 may interface with a Learning Management System (LMS). The LMS may be hosted as an individual system (such as a stand-alone platform) alongside the system 108, or the system 108 itself may provide an integrated interface to the LMS. In such embodiments, the offline evaluation means 106 may be generated, filled, and/or analysed directly using the LMS. Additionally, data gathered by the system 108 may also be viewable in the LMS, along with other analytics. By allowing for integration of offline learning with online platforms, the system 108 may allow for a seamless education experience for both teachers and students. Furthermore, the system 108 merges offline teaching and digital evaluation, allowing teaching institutions to track student progress using reports and analytics while also providing an option for curating targeted/desirable offline education means 105 for students. The merging of offline and digital solutions may enable students 102, teachers 104, and other stakeholders of education to benefit from the complementary advantages of both modes. Merging the two modes may allow students 102 to learn much more efficiently.
[051] Furthermore, the system 108 may be configured to categorize students 102 based on their performance. The system 108 may categorize and analyse the topics or sub-topics that the students 102 are underperforming in. Such analysis may allow the teachers 104 to modify the offline education means 105 provided to the students 102. Further, the system 108 may be configured to generate personalized learning paths for each student 102, or for students 102 in each category. The personalized learning paths may be generated using AI engines, or curated with the aid of the teachers 104.
[052] FIGs. 3A-3J illustrate example schematic representations 300A, 300B, 300C, 300D, 300E, 300F, 300G, 300H, 300I and 300J of a user interface associated with the system 108, in accordance with an embodiment of the present disclosure. FIGs. 3A-3J illustrate a user journey of the teachers 104 and the students 102 while accessing the system 108.
[053] Referring to FIG. 3A, upon logging into an application, the teacher 104 accesses a dashboard for generating an offline evaluation means 106, such as a question paper. The user interface may provide a list of previously curated question papers for the teacher 104. The teacher 104 may select one of the previously curated question papers either for reuse, or for selecting a set of questions therefrom. The selection may be made through use of check boxes (or other graphical elements) and/or prompts provided to a language model of the system 108. The user interface may also provide the teacher 104 with access to a repository of questions for a chosen category/subject/topic. The example shown in FIG. 3B shows the repository of questions for the subject "Mathematics" and the topic "Linear Equations in Two Variables". The teacher 104 may also be provided with an option to filter questions based on sub-topics, complexity or difficulty level/score, and type of questions. On selecting the filters, the system 108 may retrieve and display the questions. The teacher 104, using their computing device 110, may select one or more of the questions to be compiled/included in the question paper. On selecting the desired questions, the system 108 may provide an option to preview the question paper on the user interface, as shown in FIG. 3C.
[054] The teacher 104 may review the questions, and either choose to return to the editing dashboard/user interface shown in FIG. 3B, or proceed to printing the question paper as shown in FIG. 3D. To print the question paper, the system 108 may compile the selected questions and generate a display-compatible file, such as a Portable Document Format (PDF) file. For example, the offline evaluation means 106 having the questions may be printed onto A4 sheets. Further, the offline evaluation means 106 may also include the OMR 107, which may allow the students 102 to provide their responses. In some embodiments, the teacher 104 may issue instructions to print the OMR 107, as shown in FIG. 3E. The system 108 may provide templates for the OMR 107 or other means through which the responses from students may be collected, which the teacher 104 may select and print. The offline evaluation means may also be customizable based on the type of questions, for example, a question paper on geometry requiring graphs. In further embodiments, the responses to the offline evaluation means 106 may be collected using a microphone, such as when the offline evaluation means 106 relates to a viva where the responses are provided in the form of audio recordings.
[055] Once the question papers/offline evaluation means 106 are provided to the students 102, the students 102 proceed to attempt the assessment. The students 102 may provide their responses to the offline evaluation means 106 through, for example, the OMR 107. The OMR 107 may be scanned and uploaded to the system 108, as shown in FIG. 3F. The system 108 may extract the responses provided by the student 102, and evaluate the same. The system 108 may determine the performance of the student 102 based on their responses. For example, the system 108 may compare the responses extracted from the OMR 107 with the correct answers to generate a score. The score may indicate the number of questions that the student 102 answered correctly. The system 108 may analyse the data, and identify topics and/or sub-topics in which the student 102 underperformed (for example, when the percentage of questions answered correctly for a sub-topic is below a threshold). The OMR 107 may also include identifiers associated with each student 102, to allow the system 108 to map the performance analytics to the corresponding student 102.
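The scoring and weak-topic identification described above can be illustrated with a short sketch. The data shapes (responses and answer key keyed by question id, a question-to-sub-topic map) and the 60% threshold are assumptions for illustration, not values fixed by the disclosure.

```python
# Illustrative sketch: score extracted OMR responses against an answer key
# and flag sub-topics where accuracy falls below a threshold.
from collections import defaultdict

def weak_subtopics(responses: dict, answer_key: dict,
                   subtopic_of: dict, threshold: float = 0.6) -> list:
    correct = defaultdict(int)
    total = defaultdict(int)
    for qid, answer in responses.items():
        topic = subtopic_of[qid]
        total[topic] += 1
        correct[topic] += int(answer == answer_key[qid])
    return [t for t in total if correct[t] / total[t] < threshold]
```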
[056] The system 108 may also generate reports to indicate the performance of one or more of the students 102. In some embodiments, the system 108 may provide analytics associated with the students 102 in the user interface, as shown in FIG. 3G. The user interface may be viewed by any one of the stakeholders, who may use corresponding computing devices (not shown). The reports may be either static or presented in an interactable dashboard. For example, as shown in FIG. 3H, the system 108 may display a leaderboard of performances of all students who attempted the question paper.
[057] Additionally, the system 108 may provide personalized learning to students. FIGs. 3I and 3J illustrate example user interfaces that allow teachers to select and cause the system 108 to generate personalized/curated offline education means 105 for the student 102. In some embodiments, the offline education means may be curated based on the performance of the student 102 in preceding offline evaluation means 106. For example, if the student 102 is underperforming in a sub-topic, the system 108 may be configured to automatically recommend learning resources, such as chapters of a textbook or practice problems with solutions, for the student 102. The system 108 may be configured to curate the recommended learning resources from the knowledge base and/or external sources. The teacher 104 may also view the recommendations, add or remove references provided by the system 108, and provide the learning materials to the student 102. For example, the teacher 104 may print the practice problems and provide them to the student 102. The student 102 may be periodically evaluated on the topics to track progress/learning.
[058] Further, the system 108 may provide personalized learning paths for students based on their performance. For example, students 102 may be classified into three categories, viz. those scoring below 50%, those scoring between 50% and 80%, and those scoring above 80% (as shown in FIG. 3J). Worksheets (and, more generally, the offline evaluation means 106) are individually reviewed and assigned by the teacher 104 according to each student's category. These worksheets are downloaded and distributed to students for completion during physical exams. Students who score above 80% receive more challenging worksheets aimed at extending their learning beyond basic requirements. Conversely, students scoring below 50% are provided with simplified worksheets, allowing them to build foundational skills before progressing to the next level.
[059] Providing personalized learning paths may allow the system 108 to tailor worksheets or materials suited to each student's needs and abilities. Further, the personalized approach enhances engagement and comprehension by addressing each student's unique learning pace and style. Additionally, targeted support can be provided to students 102 based on their performance. Categorizing the students 102 into different groups may allow teachers to provide support to students in the underperforming category. The students 102 struggling with certain concepts can receive simplified worksheets to strengthen their understanding, while high achievers can be challenged with more complex tasks to further develop their skills. The system 108, by providing the aforementioned interfaces, may allow teachers 104, students 102, and other stakeholders to track and monitor progress over time. Teachers 104 can track improvements as students move from one category to another, helping to identify areas of growth and areas that may require additional attention. Additionally, offering a range of worksheet difficulty levels allows for differentiation within the classroom. The teachers 104 can accommodate diverse learning needs by providing appropriate materials for students 102 at various skill levels, fostering a supportive learning environment where all students can succeed.
[060] Differentiated assessment and learning may also allow teachers 104 to maintain the motivation of students 102 by providing them with materials that are challenging, but not impossible to solve. Making the assessments too easy may cause the students 102 to lose interest out of boredom, while making assessments that are too difficult may cause the students 102 to give up too quickly. Since the system 108 adjusts the difficulty of the material according to the student's capabilities, the right balance of difficulty may be maintained to keep the student 102 motivated. Tailored worksheets can also serve as effective preparation for assessments, ensuring that students 102 are adequately equipped with the necessary skills and knowledge to succeed. This targeted approach helps students 102 feel more confident and competent when facing exams or evaluations. Overall, the personalized worksheet approach facilitates individualized learning experiences, supports student 102 progress, and promotes a positive and inclusive classroom environment.
[061] The system 108 may iterate the aforementioned steps until the student 102 learns the topics. The system 108 may operate until the student 102 learns all the topics provided thereto by teachers 104 or administrators as part of a curriculum. The teachers 104/administrators may determine the curriculum based on student needs and interests, scientific research on education, regulatory requirements, and the like. The student 102 may be evaluated, and further recommendations may be made for promoting the learning of the student 102.
[062] FIGs. 2 and 4 illustrate interaction between different components of the system 108 for personalized offline learning.
[063] As shown in FIG. 4, printed text 402, such as books, journals, newspapers, periodicals, and the like, may be provided as input to a digitization engine 211. The digitization engine 211 may be configured to digitize the printed text 402, for example, by using a scanner 112 to capture images of the printed text 402, and/or by using OCR engines to recognize and extract text therefrom. The OCR engines may be implemented locally within the system 108, or externally in another server accessible via APIs. The digitized printed text from the digitization engine 211 may be used to generate a vector database 404. The vector database 404 may store a first set of embeddings representing the digitized printed text.
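Before embedding, the digitized text is typically split into retrievable passages. The following is a minimal sketch of one such chunking step, assuming a fixed word window with overlap; the window and overlap sizes are illustrative choices not specified by the disclosure.

```python
# Sketch: split digitized text into overlapping passages ready for
# embedding and storage in the vector database 404.
def chunk_text(text: str, window: int = 200, overlap: int = 50) -> list:
    words = text.split()
    step = window - overlap
    return [" ".join(words[i:i + window]) for i in range(0, len(words), step)]
```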
[064] In one or more embodiments, the first set of embeddings may utilize a custom embeddings representation unique to the digitized printed text, which may reduce the computational complexity of the retrieval and generation operations, and thereby the computational resources and energy required to perform them. In one or more embodiments, techniques such as word2vec, count-based vectorization, term frequency-inverse document frequency, and the like, but not limited thereto, may be used for vectorizing or generating custom embedding representations unique to the knowledge base.
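One of the named techniques, term frequency-inverse document frequency, can be sketched with scikit-learn as follows. Fitting the vectorizer only on the digitized corpus yields a compact, corpus-specific embedding space; the dimensionality cap, the digitized_text variable, and the reuse of the chunk_text helper from the previous sketch are assumptions.

```python
# TF-IDF sketch for generating a custom, corpus-specific first set of
# embeddings (one option among those named above).
from sklearn.feature_extraction.text import TfidfVectorizer

passages = chunk_text(digitized_text)            # hypothetical corpus text
vectorizer = TfidfVectorizer(max_features=4096)  # cap vector dimensionality
first_set_of_embeddings = vectorizer.fit_transform(passages)  # sparse matrix
```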
[065] The system 108 may also include or have access to an external corpus 406. The external corpus 406 may be indicative of other documents or text not within the digitized printed text, i.e., a corpus maintained parallelly to that of the digitized printed text. In one or more embodiments, the external corpus 406 may include instructions on curating the text/data in the digitized printed text. For example, the external corpus 406 may be indicative of a curriculum, which may include the topics/chapters/concepts and order thereof, difficulty level, context, indications of constraints on time, and the like.
[066] The first set of embeddings from the vector database 404 and a second set of embeddings derived from the external corpus 406 may be provided to the cross-referencing engine 213. The cross-referencing engine 213 may be configured to combine the first set of embeddings and the second set of embeddings into a set of cross-referenced embeddings 408. In one or more embodiments, the external corpus 406 may be combined with the digitized printed text through inclusion of a cross-reference/cross-reference data, among other techniques.
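The disclosure leaves the exact form of the cross-reference data open; one plausible reading, sketched below under the assumption that both sets already share a vector space, is to tag each knowledge-base embedding with the identifier of its most similar curriculum embedding.

```python
# Sketch of cross-referencing by metadata: each knowledge-base vector is
# tagged with the id of its nearest curriculum vector (cosine similarity).
# This is an illustrative interpretation, not the claimed method itself.
import numpy as np

def cross_reference(first_set: np.ndarray, second_set: np.ndarray) -> list:
    a = first_set / np.linalg.norm(first_set, axis=1, keepdims=True)
    b = second_set / np.linalg.norm(second_set, axis=1, keepdims=True)
    nearest = (a @ b.T).argmax(axis=1)  # nearest curriculum vector per row
    return [{"vector": v, "curriculum_ref": int(r)}
            for v, r in zip(first_set, nearest)]
```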
[067] A user instruction set 410 may be provided to the retrieval engine 212. In one or more embodiments, the user instruction set 410 may include a query instruction set received from a computing device 110 operated by a teacher 104. The user instruction set 410 may further include information from the external corpus 406, such as curriculum details, topic selections, chapter specifications, and subject matter to be covered. In one or more embodiments, the user instruction set 410 may incorporate student data, student performance data, and/or entity data. The student data may include student learning profiles, past responses, and learning history. The student performance data may indicate areas of student strength and weakness, progress, and engagement levels. Entity data, in the user instruction set 410, may include institutional policies or specific educational guidelines. Furthermore, the user instruction set 410 may specify the type of content to be generated, such as questions, study materials, or assessments, and may also include parameters related to difficulty level or learning objectives.
[068] The retrieval engine 212 may be configured to extract a subset from the set of cross-referenced embeddings 408 based on the query instruction set from the user instruction set 410. In one or more embodiments, the retrieval engine 212 may extract the subset of cross-referenced embeddings 408 based on at least a semantic similarity value with the query instruction set. For the purposes of extracting the cross-referenced embeddings, techniques such as approximate nearest neighbour search algorithms may be used. Further, the extraction/retrieval of the cross-referenced embeddings may be optimized by use of indexing techniques. Examples of indexing techniques may include, but are not limited to, Hierarchical Navigable Small World (HNSW), Product Quantization (PQ), and Inverted File Index (IVF). The extraction of a subset of cross-referenced embeddings may advantageously reduce computational resources and improve the speed of response generation of the system 108.
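A retrieval sketch using an HNSW index, one of the indexing techniques named above, is shown below. The faiss library, the 384-dimensional embeddings, and the corpus_vectors/query_vector arrays are assumed for illustration.

```python
# Approximate nearest neighbour retrieval over the cross-referenced
# embeddings using an HNSW index (illustrative implementation choice).
import faiss
import numpy as np

d = 384                             # assumed embedding dimensionality
index = faiss.IndexHNSWFlat(d, 32)  # HNSW graph with 32 links per node
index.add(corpus_vectors.astype(np.float32))

query = query_vector.astype(np.float32).reshape(1, -1)
distances, ids = index.search(query, 10)      # top-10 nearest passages
top_passages = [passages[i] for i in ids[0]]  # context for generation
```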
[069] The extracted set of cross-referenced embeddings may be provided to a generation engine 214. The generation engine 214 may be configured to generate natural language output 412 based on the extracted set of cross-referenced embeddings using a language model (not shown). In one or more embodiments, the language model may be a neural network model. Examples of neural network models may include, but are not limited to, Recurrent Neural Network (RNN) models, Long Short-Term Memory (LSTM) models, transformer models, and autoregressive models. The language model may be pre-trained, and in some embodiments, may be further fine-tuned with curriculum-specific datasets to improve the domain relevance of the natural language output 412. The embeddings stored in the vector database 404 may be the same as or different from the embeddings used by the language model. In one or more embodiments, an embedding layer of the language model may be compatible with the custom embeddings utilized in the vector database 404. Such compatibility may obviate or minimize the need for conversions between embedding representations.
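In a retrieval-augmented reading of this step, the retrieved passages are folded into a prompt for the language model. The sketch below assumes the Hugging Face text-generation pipeline with a placeholder model; the prompt wording and the top_passages variable from the previous sketch are illustrative, not prescribed by the disclosure.

```python
# Generation sketch: retrieved passages condition the language model's
# output (an assumed retrieval-augmented setup, not a mandated one).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

context = "\n".join(top_passages)  # from the retrieval sketch above
prompt = ("Using only the material below, write five practice questions "
          "on the requested topic.\n\nMaterial:\n" + context + "\n\nQuestions:")
natural_language_output = generator(prompt, max_new_tokens=256)[0]["generated_text"]
```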
[070] While FIG. 4 primarily illustrates the generation of natural language output 412, in one or more embodiments, the system 108 may further include a validation step for the generated natural language output 412. The validation may be performed before the natural language output 412 is provided to the control engine 216 or subsequently. In one or more embodiments, a validation engine (not shown in FIG. 4) may be included in the system 108. The validation engine may be configured to validate the generated natural language output 412 based on one or more validation criteria. Examples of validation criteria may include, but are not limited to, user feedback, rule-based methods, and/or an evaluation Artificial Intelligence (AI) engine. User feedback may be obtained from teachers 104 or other stakeholders. Rule-based methods may involve pre-defined rules for evaluating the quality and relevance of the natural language output 412. An evaluation AI engine, which may be a separate AI model trained for this purpose, may also be used to assess the natural language output 412. Based on the validation, the language model within the generation engine 214 may be further trained to improve the quality and relevance of subsequent natural language outputs. The validation process may also incorporate difficulty level tuning and filtering to ensure that the natural language output 412 is appropriate for the intended students 102. Curriculum alignment verification may be performed to ensure that the output aligns with the specified curriculum. Redundancy elimination techniques may be employed to promote diversity and novelty in the generated content.
[071] The natural language output 412 may be provided to a control engine 216. In one or more embodiments, the control engine 216 may transmit the natural language output 412 in electronic control signals to the printer 114. The printer 114 may be configured to print the natural language output 412 into paper or any other physical form, thereby generating offline education means 105 and offline evaluation means 106. The offline education means 105 and offline evaluation means 106, once printed, may be provided to students 102. The students 102 may then attempt to engage with the offline education means 105 and provide responses to the offline evaluation means 106. The responses from the students 102, provided through the offline evaluation means 106, may then be digitized for further processing by the system 108.
[072] The response extraction engine 217 may be configured to receive and process natural language responses to the natural language output 412, for example, from offline evaluation means 106. The response extraction engine 217 may extract responses provided by the student 102 from the offline evaluation means 106. The extracted responses may be provided to an evaluation AI engine or the analytics engine(s) 218. The analytics engine(s) 218 may be configured to analyze the extracted responses and generate analytics related to student performance. The analytics engine 218 may be configured to generate at least one evaluation score based on the extracted responses.
[073] FIG. 5 illustrates a flowchart of a method 500 for personalized learning, in accordance with embodiments of the present disclosure. In some embodiments, the method 500 may be implemented by a system, such as the system 108 or the processor 202 of FIG. 2.
[074] At step 502, the method 500 includes digitizing, by a processor 202, one or more printed documents into a knowledge base. In some embodiments, the printed documents may include, but not be limited to, books, journals, newspapers, textbooks, study materials, booklets, periodicals, and the like. The printed documents may be digitized using the scanner 112, which may capture images of the printed documents, and/or OCR engines, which may be configured to recognize and extract text therefrom. In some embodiments, handwritten notes or documents, whiteboards, etc. (such as from students 102 or teachers 104), may also be digitized and OCR-ed, and included within the knowledge base.
[075] At step 504, the method 500 includes generating, by the processor 202, a first set of embeddings representing the knowledge base. In some embodiments, the first set of embeddings may utilize a custom embeddings representation unique to the knowledge base. In some embodiments, techniques such as word2vec, count-based vectorization, term frequency-inverse document frequency, and the like, but not limited thereto, may be used for vectorizing or generating custom embedding representations unique to the knowledge base. The use of custom embeddings may reduce the computational complexity of the method 500, and thereby the computational resources and energy required to perform the aforementioned operations.
[076] In some embodiments, the method 500 may include identifying a classification of the first set of embeddings by processing the first set of embeddings through a classifier. Classification may allow a set of related vectors/embeddings to be grouped and retrieved together when a query is executed. The grouped vectors/embeddings may be retrieved together, thereby reducing the computational complexity of the method 500.
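The disclosure does not fix the classifier; as one illustrative stand-in for grouping related embeddings, k-means clustering can assign each vector a group id so that a whole group is fetched when any member matches a query. The embeddings array and cluster count below are assumptions.

```python
# Grouping sketch using k-means as a stand-in for the classifier
# (illustrative only; any classifier producing group labels would do).
from sklearn.cluster import KMeans
import numpy as np

kmeans = KMeans(n_clusters=20).fit(embeddings)  # assumed number of groups
group_of = kmeans.labels_                       # group id per embedding

def retrieve_group(match_idx: int) -> np.ndarray:
    """Return indices of all embeddings in the same group as a matched one."""
    return np.where(group_of == group_of[match_idx])[0]
```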
[077] At step 506, the method 500 includes combining, by the processor 202, the first set of embeddings with a second set of embeddings derived from an external corpus into a set of cross-referenced embeddings. In some embodiments, the external corpus may be indicative of other documents or text not within the knowledge base, i.e., a corpus maintained parallelly to that of the knowledge base. In some embodiments, the external corpus may include instructions on curating the text/data in the knowledge base. For example, the external corpus may be indicative of a curriculum, which may include the topics/chapters/concepts and order thereof, difficulty level, context, indications of constraints on time (such as whether the curriculum is for a semester or a 1-week workshop), and the like, but not limited thereto. In some embodiments, a query instruction set received from a computing device (such as computing device 110 operated by the teacher 104) may include the external corpus.
[078] In some embodiments, the external corpus may be combined with the knowledge base through inclusion of a cross-reference/cross-reference data. In some embodiments, the external corpus may be converted into the second set of embeddings. In some embodiments, the second set of embeddings may have the same vector representations for individual terms/words as the first set of embeddings. In other embodiments, the second set of embeddings may use different vector representations in comparison to the first set of embeddings. In some embodiments, the cross-reference data may be added as a parameter to the first and the second sets of embeddings. In other embodiments, the first set of embeddings may be mapped or aligned with the second set of embeddings by converting the first and second sets into a common vector space. The combination of the first and second sets of embeddings may improve the relevance of outputs of the queries made to the knowledge base and/or the external corpus.
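The common-vector-space option mentioned above can be sketched as a least-squares linear map, assuming a small set of anchor pairs (the same concept embedded in both spaces) is available; the anchor arrays are hypothetical and the disclosure does not prescribe this alignment method.

```python
# Alignment sketch: learn a linear map W that projects the second
# embedding space onto the first, using paired anchor vectors.
import numpy as np

def fit_alignment(anchors_second: np.ndarray,
                  anchors_first: np.ndarray) -> np.ndarray:
    """Solve for W minimising ||anchors_second @ W - anchors_first||."""
    W, *_ = np.linalg.lstsq(anchors_second, anchors_first, rcond=None)
    return W

W = fit_alignment(curriculum_anchor_vecs, knowledge_base_anchor_vecs)
aligned_second_set = curriculum_vecs @ W  # now comparable to the first set
```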
[079] At step 508, the method 500 includes extracting, by the processor 202, a subset from the set of cross-referenced embeddings based on the query instruction set. In some embodiments, the method 500 may include extracting the set of cross-referenced embeddings based on at least a semantic similarity value with the query instruction set. In some embodiments, techniques such as approximate nearest neighbour search algorithms may be used for the purposes of extracting the cross-referenced embeddings. Further, the extraction/retrieval of the cross-referenced embeddings may be optimized by use of indexing techniques, such as HNSW, Product Quantization (PQ), and Inverted File Index (IVF), but not limited thereto. The extraction of a subset of cross-referenced embeddings may advantageously reduce computational resources and improve the speed of response generation of the method 500.
[080] At step 510, the method 500 includes generating, by the processor 202, natural language outputs based on the extracted set of cross-referenced embeddings using a language model. In some embodiments, the language model may include a pretrained LLM. In some embodiments, the pretrained LLM may be fine-tuned with curriculum-specific datasets (such as the knowledge base and/or the external corpus). Such fine-tuning may improve domain relevance of the natural language outputs. In other embodiments, the processor 202 (using the training engine 215) may extend the pretraining of the language model with the knowledge base and/or the external corpus. In some embodiments, external knowledge augmentation may be implemented via the cross-referenced embeddings to integrate curated educational corpora for more contextualized question generation by the language model.
[081] In some embodiments, the method 500 may include generating the natural language outputs by the language model based on the query instruction set and a system prompt. In some embodiments, the system prompt may include at least one of student data, student learning profile data, and/or entity data (such as policies of the institution operating the system 108), but not limited thereto. In some embodiments, the method 500 may include dynamically generating the system prompt, or receiving the system prompt from a computing device 110. In some embodiments, the student data may include student responses, student performance data (such as student strengths and weaknesses), student progress data, student accuracy, and/or student engagement levels, but not limited thereto. The student performance data may allow the language model to generate personalized outputs (such as personalized worksheets), thereby increasing the relevance thereof. In some embodiments, the system prompt may also include a difficulty level based on student accuracy and engagement levels.
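Dynamic generation of such a system prompt might look like the sketch below; the field names and the 1-10 difficulty scale are illustrative assumptions, since the disclosure only enumerates the kinds of data the prompt may carry.

```python
# Sketch: assemble a system prompt from student data, performance data,
# and entity data (field names are hypothetical).
def build_system_prompt(student: dict, entity_policy: str) -> str:
    weak = ", ".join(student["weak_topics"]) or "none identified"
    return ("You generate worksheets for a student at difficulty level "
            f"{student['difficulty_score']}/10. Weak topics: {weak}. "
            f"Recent accuracy: {student['accuracy']:.0%}. "
            f"Institutional guidelines: {entity_policy}")
```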
[082] In some embodiments, the method 500 may include validating the generated natural language outputs based on at least one of: user feedback, a rule-based method, and/or an evaluation AI engine, and training the language model based on the validation. In some embodiments, the validation of the natural language outputs may further include difficulty level tuning and filtering. For example, a classifier, such as a BERT-based or other transformer-based classifier, may be utilized to filter and tune the difficulty level of the generated natural language outputs to ensure appropriateness. In some embodiments, curriculum alignment verification may be performed as part of the validation. For example, rule-based matching and/or classifiers may be used to verify the alignment of the generated natural language outputs with a curriculum. In some embodiments, redundancy elimination may be implemented via semantic similarity techniques. For example, semantic similarity measures, such as cosine similarity and MinHash, may be employed to identify and eliminate redundant natural language outputs, thereby promoting diversity in the generated content.
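The redundancy-elimination step can be illustrated with cosine similarity, one of the measures named above. The 0.9 similarity cutoff is an assumed value; a candidate output is dropped when it is too close to an already-accepted one.

```python
# Redundancy-elimination sketch over embedding vectors of the generated
# outputs (cosine similarity with an assumed 0.9 cutoff).
import numpy as np

def deduplicate(candidate_vecs: np.ndarray, cutoff: float = 0.9) -> list:
    unit = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    kept = []
    for i, v in enumerate(unit):
        if all(float(v @ unit[j]) < cutoff for j in kept):
            kept.append(i)
    return kept  # indices of non-redundant outputs
```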
[083] In some embodiments, the method 500 may include receiving at least one natural language response to the natural language output from a computing device 110, determining an evaluation score based on the at least one natural language response, and adjusting a difficulty score based on the evaluation score. The language model may be configured to generate subsequent natural language outputs based on at least the difficulty score. In some embodiments, the difficulty level may be adjusted by including the determined difficulty score in the system prompt and/or by retraining the language model. In some embodiments, the retraining of the language model may be based on the evaluation score. In some embodiments, the retraining may utilize reinforcement learning, where a reward signal for the reinforcement learning may be based on the evaluation score. In some embodiments, the evaluation score may be determined based on user feedback, voting, validated teacher feedback, confidence scores determined by the AI evaluation engine, and/or other evaluation mechanisms.
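A minimal sketch of the difficulty-adjustment loop follows; the proportional update rule and its constants are assumptions chosen for illustration:

def adjust_difficulty(difficulty: float, evaluation: float,
                      target: float = 0.7, rate: float = 0.1) -> float:
    # Nudge difficulty up when the evaluation score exceeds the target,
    # down when it falls below, clamped to [0, 1]; the result may be
    # included in the system prompt for subsequent generation rounds.
    difficulty += rate * (evaluation - target)
    return max(0.0, min(1.0, difficulty))

# e.g. an evaluation score of 0.9 on a 0.5-difficulty worksheet:
# adjust_difficulty(0.5, 0.9) -> 0.52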
[084] While the foregoing describes various embodiments of the disclosure, other and further embodiments of the present disclosure may be devised without departing from the basic scope thereof. The scope of the disclosure is determined by the claims that follow. The disclosure is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary skill in the art to make and use the disclosure when combined with information and knowledge available to the person having ordinary skill in the art.

ADVANTAGES OF THE PRESENT DISCLOSURE
[085] The present disclosure provides a system and method that obviates the above-mentioned limitations of existing systems and methods efficiently.
[086] The present disclosure provides a system and a method that allows teachers to curate learning materials, curriculum, and tests/evaluations personalized for each student.
[087] The present disclosure provides a system and a method that provides assessments of students' learning to all stakeholders of education, including, but not limited to, students, teachers, parents, institutions, and the like.
[088] The present disclosure provides a system and a method that allows teachers to curate learning materials from a knowledge base based on different curricula, and generate tests/evaluations/assessments personalized for each student.
[089] The present disclosure provides a system and a method for integration of offline evaluation means with digital Learning Management Systems (LMS).
[090] The present disclosure provides a system and a method for retrieving texts from a knowledge base corpus (or first corpus), through cross-referencing of vectors with those of a curriculum corpus (or second corpus).
[091] The present disclosure develops customized embeddings to reduce computational expenditure for retrieval and/or generation tasks.
[092] The present disclosure develops a knowledge base/first corpus based on the digitization of physical documents.
CLAIMS:
1. A system (108) for personalized learning, comprising:
a processor (202);
a memory (204) operably coupled to the processor (202), the memory (204) including one or more processor-executable instructions configured to cause the processor (202) to:
digitize one or more printed documents into a knowledge base;
generate a first set of embeddings representing the knowledge base;
combine the first set of embeddings with a second set of embeddings derived from an external corpus into a set of cross-referenced embeddings;
extract a subset from the set of cross-referenced embeddings based on a query instruction set; and
generate one or more natural language outputs based on the extracted subset of cross-referenced embeddings using a language model.
2. The system (108) as claimed in claim 1, wherein the processor (202) is further configured to identify a classification of the first set of embeddings by processing the first set of embeddings through a classifier.
3. The system (108) as claimed in claim 1, wherein the subset is extracted from the set of cross-referenced embeddings based on at least a semantic similarity value with the query instruction set.
4. The system (108) as claimed in claim 1, wherein the processor (202) is further configured to:
validate the one or more natural language outputs based on at least one of: user feedback, a rule-based method, and/or an evaluation artificial intelligence (AI) engine, and
train the language model based on the validation.
5. The system (108) as claimed in claim 1, wherein the one or more natural language outputs are generated by the language model based on the query instruction set and a system prompt.
6. The system (108) as claimed in claim 5, wherein the system prompt comprises at least one of: student data, student learning profile data, and/or entity data.
7. The system (108) as claimed in claim 6, wherein the processor (202) is configured to:
dynamically generate the system prompt; or
receive the system prompt from a computing device (110).
8. The system (108) as claimed in claim 1, wherein the processor (202) is further configured to:
receive at least one natural language response to the one or more natural language outputs from a computing device (110);
determine an evaluation score based on the at least one natural language response; and
adjust a difficulty score based on the evaluation score, wherein the language model is configured to generate subsequent natural language outputs based on at least the difficulty score.
9. A system (108) for personalized learning, comprising:
a processor (202);
a memory (204) operably coupled to the processor (202), the memory (204) including one or more processor-executable instructions configured to cause the processor (202) to:
receive a query instruction set;
extract a subset from a set of cross-referenced embeddings based on the query instruction set, wherein the set of cross-referenced embeddings is obtained by combining a first set of embeddings derived from one or more digitized printed texts and a second set of embeddings derived from an external corpus; and
generate natural language outputs based on the extracted subset of cross-referenced embeddings using a language model.
10. A method (500) for personalized learning, comprising:
digitizing, by a processor (202), one or more printed documents into a knowledge base;
generating, by the processor (202), a first set of embeddings representing the knowledge base;
combining, by the processor (202), the first set of embeddings with a second set of embeddings derived from an external corpus into a set of cross-referenced embeddings;
extracting, by the processor (202), a subset from the set of cross-referenced embeddings based on a query instruction set; and
generating, by the processor (202), natural language outputs based on the extracted subset of cross-referenced embeddings using a language model.

Documents

Application Documents

# Name Date
1 202441022322-STATEMENT OF UNDERTAKING (FORM 3) [22-03-2024(online)].pdf 2024-03-22
2 202441022322-PROVISIONAL SPECIFICATION [22-03-2024(online)].pdf 2024-03-22
3 202441022322-POWER OF AUTHORITY [22-03-2024(online)].pdf 2024-03-22
4 202441022322-FORM FOR STARTUP [22-03-2024(online)].pdf 2024-03-22
5 202441022322-FORM FOR SMALL ENTITY(FORM-28) [22-03-2024(online)].pdf 2024-03-22
6 202441022322-FORM 1 [22-03-2024(online)].pdf 2024-03-22
7 202441022322-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-03-2024(online)].pdf 2024-03-22
8 202441022322-EVIDENCE FOR REGISTRATION UNDER SSI [22-03-2024(online)].pdf 2024-03-22
9 202441022322-DRAWINGS [22-03-2024(online)].pdf 2024-03-22
10 202441022322-DECLARATION OF INVENTORSHIP (FORM 5) [22-03-2024(online)].pdf 2024-03-22
11 202441022322-FORM-5 [22-03-2025(online)].pdf 2025-03-22
12 202441022322-DRAWING [22-03-2025(online)].pdf 2025-03-22
13 202441022322-CORRESPONDENCE-OTHERS [22-03-2025(online)].pdf 2025-03-22
14 202441022322-COMPLETE SPECIFICATION [22-03-2025(online)].pdf 2025-03-22