Abstract: The present disclosure relates to a system (100) and a method (300) that present an advanced AI solution to the complex task of evaluating handwritten or online subjective exams, providing a comprehensive AI-driven framework to revolutionize the assessment process. The system (100) operates an AI-based platform for grading student answer submissions. The method (300) begins with answer submission through a digital portal, utilizing scanning devices or digital interfaces. Responses are stored in a cloud database and preprocessed using sophisticated computer vision libraries and cloud-based Optical Character Recognition (OCR) technologies. The refined responses undergo further processing with Natural Language Processing (NLP) techniques and are enhanced using custom-trained Sentence Transformer based models (LLMs). Regression models are then employed for accurate grading and feedback generation, culminating in a comprehensive evaluation summary for the student. Continuous learning is ensured through an MLOps framework, emphasizing ongoing model optimization and a commitment to grading accuracy.
Description: TECHNICAL FIELD
[0001] The present disclosure relates generally to artificial-intelligence-based auto-score generation. In particular, the present disclosure relates to an AI-based system to automate the generation of scores and personalized evaluation reports for subjective answers.
BACKGROUND
[0002] Background description includes information that may be useful in understanding the present disclosure. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed disclosure, or that any publication specifically or implicitly referenced is prior art.
[0003] The genesis of this innovation arises from the enduring obstacles faced in educational assessment, especially in the appraisal of handwritten or online subjective examinations. Conventional methods reliant on manual grading have long struggled with issues of subjectivity, inconsistency, and inefficiency. These challenges are compounded by the diverse linguistic contexts prevalent in academic and civil service domains, particularly prominent in the educational landscape of India. For instance, in the case of the UPSC Civil Services Examination, which encompasses Prelims, Mains, and Interview stages, the Mains Exam comprises descriptive papers across nine subjects. The evaluation process entails assessing candidates’ knowledge, writing skills, language proficiency, and speed.
[0004] Current technologies often exhibit subjectivity and inconsistency due to reliance on manual marking or rudimentary automated grading algorithms. Automated systems struggle to decode handwritten or non-standardized text precisely, resulting in errors and inaccuracies in grading. Many existing solutions may lack scalability and adaptability, especially when dealing with huge test volumes across a range of subjects and languages. Traditional Natural Language Processing (NLP) methods are unable to deliver meaningful feedback due to their limited understanding of context, subtleties, and semantics within textual answers. Additionally, the need for manual intervention in ongoing system training and verification disrupts efficiency and full automation, impeding the broad adoption of these automated grading technologies.
[0005] Existing solutions primarily rely on pattern recognition algorithms for grading handwritten or online responses. These systems may struggle with accurately interpreting non-standardized text, leading to errors and inaccuracies in assessment. While some solutions incorporate basic NLP techniques for text analysis, they may lack the sophistication, afforded by modern LLM platforms, to understand context and semantics within textual responses.
[0006] There is, therefore, a need for an end-to-end AI platform that provides students with a prompt and efficient evaluation experience. The proposed system processes and delivers evaluations within approximately five minutes after answers are submitted through the specialized online platform.
OBJECTS OF THE PRESENT DISCLOSURE
[0007] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
[0008] An object of the present disclosure is to streamline the evaluation process of subjective exams by significantly reducing the time required for assessment, thus providing prompt feedback to students.
[0009] Another object of the present disclosure is to minimize subjective biases inherent in manual grading methods by implementing an automated assessment system based on standardized criteria and algorithms.
[0010] Another object of the present disclosure is to ensure the accuracy and reliability of assessment outcomes by leveraging advanced AI and NLP techniques such as Sentence Transformer models, together with ongoing model update technologies such as MLOps, for comprehensive analysis of student responses.
[0011] Another object of the present disclosure is to offer tailored feedback to individual students based on the strengths, weaknesses, and areas for improvement identified in their responses, thereby promoting personalized learning experiences.
[0012] Yet another object of the present disclosure is to design a system capable of handling large volumes of student submissions across diverse educational contexts while maintaining high performance and reliability.
SUMMARY
[0013] This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0014] An aspect of the present disclosure relates to a system to automate generation of a score and personalized evaluation report for subjective answers, the system may be configured to include a server configured to generate the score and personalized evaluation report for subjective answers, the server including one or more processors; and a memory operatively coupled to the server, wherein the memory includes processor-executable instructions, which on execution, cause the one or more processors to: receive one or more images for image preprocessing using an image processing unit; employ cloud-based Optical Character Recognition (OCR) to digitize and extract text from the preprocessed one or more images; and convert student answers of the one or more images into a data frame and apply text pre-processing techniques.
[0015] Furthermore, the one or more processors are caused to: utilize Sentence Transformer models to generate embeddings for the student answers and model answers upon the pre-processed text and compute cosine similarity, as well as other similarity measures such as Jaccard similarity, on the introduction and conclusion of the student and model answers; employ regression models to generate final assessment scores after utilizing the Sentence Transformer models; utilize Large Language Models (LLMs) to generate personalized feedback comments for students based on the answers of the one or more images; and generate a comprehensive automated assessment report to enhance the learning experience and facilitate self-assessment.
[0016] In an aspect, the image pre-processing technique includes any or a combination of cropping images, removing headers and footers, and enhancing image quality.
[0017] In an aspect, the text pre-processing technique includes any or a combination of stop word removal, stemming, lemmatization, and auto spell-checking using NLP libraries, wherein further the NLP libraries comprise any or a combination of NLTK and regex.
[0018] In an aspect, the regression model comprises any or a combination of Support Vector Regression (SVR), Linear Regression, and Ridge Regression.
[0019] In an aspect, the system can further include the regression models designed to calculate final assessment scores based on factors that include similarities from one or more answer sections (any or a combination of the introduction, body, and conclusion) and the length of the answer.
[0020] In an aspect, the specialized trained Sentence Transformer models (trained independently on historical answer copies for different subjects) are utilized to establish similarity between student answers and model answers for one or more sections of the answer comprising an introduction, a body, and a conclusion, and the regression models are configured to calculate final scores based on similarity values extracted from the one or more answer sections.
[0021] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF DRAWINGS
[0022] The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in, and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure, and together with the description, serve to explain the principles of the present disclosure.
[0023] In the figures, similar components, and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[0024] FIG. 1 illustrates an exemplary block diagram of an AI-powered system to automate the generation of score and personalized evaluation report, in accordance with an embodiment of the present disclosure.
[0025] FIG. 2 illustrates an exemplary module diagram of a proposed AI-powered system to automate the generation of score and personalized evaluation report, in accordance with an embodiment of the present disclosure.
[0026] FIG. 3 illustrates an exemplary view of a flow diagram of the proposed method for generating a score and personalized evaluation report for subjective answers, in accordance with some embodiments of the present disclosure.
[0027] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0028] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0029] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0030] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0031] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0032] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0033] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0034] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0035] In an embodiment, the present disclosure relates to a system to automate generation of a score and personalized evaluation report for subjective answers, the system may be configured to include a server configured to generate the score and personalized evaluation report for subjective answers, the server including one or more processors; and a memory operatively coupled to the server, wherein the memory includes processor-executable instructions, which on execution, cause the one or more processors to: receive one or more images for image preprocessing using an image processing unit; employ cloud-based Optical Character Recognition (OCR) to digitize and extract text from the preprocessed one or more images; and convert student answers of the one or more images into a data frame and apply text pre-processing techniques.
[0036] Furthermore, the one or more processors are caused to: utilize Sentence Transformer models to generate embeddings for the student answers and model answers upon the pre-processed text and compute cosine similarity between them; employ regression models to generate final assessment scores after utilizing the Sentence Transformer models; utilize Large Language Models (LLMs) to generate personalized feedback comments for students based on the answers of the one or more images; and generate a comprehensive automated assessment report to enhance the learning experience and facilitate self-assessment.
[0037] FIG. 1 illustrates an exemplary block diagram of an AI-powered system to automate the generation of score and personalized evaluation report, in accordance with an embodiment of the present disclosure.
[0038] As illustrated in FIG. 1, the auto-score generation system 100 (interchangeably referred to as a system 100, hereafter) can represent a cutting-edge AI solution designed to address the complex task of evaluating handwritten or online subjective exams. Existing automated assessment technologies primarily center on AI-driven systems, computer vision, and fundamental NLP techniques. While proficient in automating grading for multiple-choice questions and objective assessments, these technologies may encounter challenges when tasked with evaluating lengthy subjective exams. This is especially evident in diverse linguistic settings and when providing section-wise marks or scores, generating auto-generated comments, and compiling comprehensive evaluation reports for students.
[0039] In an exemplary embodiment, a system 100 may be configured to include a server 102, where the server 102 may be configured to include one or more processors 104 (interchangeably referred to as a processor 104, hereinafter) and a memory 106 storing a set of instructions, which, upon being executed, cause the processor 104 to perform the operations described herein. The processor 104 includes any or a combination of suitable logic, circuitry, and/or interfaces that are operable to execute one or more instructions stored in the memory 106 to perform pre-determined operations. The memory 106 may be operable to store the one or more instructions. The processor 104 may be implemented using one or more processor technologies known in the art. Examples of the processor 104 include, but are not limited to, an x86 processor, a RISC processor, an ASIC processor, a CISC processor, or any other processor.
[0040] In an embodiment, the system 100 may be configured to include a processor 104, either singular or multiple, interconnected with a memory 106 component. The memory 106 can serve as a storage repository for sets of instructions that dictate the system’s 100 operations and behaviors. When these instructions are activated and processed by the one or more processors 104 (interchangeably referred to as a processor 104, hereinafter), they initiate specific actions within the system 100, guiding its functionality and behavior. The arrangement facilitates the system’s 100 ability to perform tasks and execute processes according to the programmed instructions stored in memory 106.
[0041] In an exemplary embodiment, the system 100 may be configured to include a network 108, which can include, but is not limited to, a Wireless Fidelity (Wi-Fi) network, a Wide Area Network (WAN), a Local Area Network (LAN), or a Metropolitan Area Network (MAN). Various devices in the system 100 can connect to the network 108 in accordance with various wired and wireless communication protocols such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), and 2G, 3G, and 4G communication protocols.
[0042] In an exemplary embodiment, the system 100 may be configured to include a mobile computing device 110 (interchangeably referred to as a computing device 110, hereinafter) for producing a detailed assessment report directly on the computing device 110. This means that the process of creating the report, which typically includes gathering and analyzing assessment data, generating scores or grades, and compiling feedback, comments, and suggestions, occurs entirely on a smartphone, tablet, or similar portable computing device 110.
[0043] FIG. 2 illustrates an exemplary module diagram of a proposed AI-powered system to automate the generation of score and personalized evaluation report, in accordance with an embodiment of the present disclosure.
[0044] In an exemplary embodiment, referring to FIG. 2, a module diagram 200 of the system 100 may comprise one or more processor(s) 104 (interchangeably referred to as a processor 104, hereinafter). The processor 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the processor 104 may be configured to fetch and execute computer-readable instructions stored in a memory 106 of the system 100. The memory 106 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 106 may comprise any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0045] The system 100 may include an interface(s) 208. The interface(s) 208 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 208 may facilitate communication to/from the system 100. The interface(s) 208 may also provide a communication pathway for one or more components of the system 100. Examples of such components include but are not limited to, processing unit/engine(s) 210 and a database 202.
[0046] In an embodiment, the processing unit/engine(s) 210 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 210. In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 210 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 210 may comprise a processing resource (for example, one or more processors), to execute such instructions.
[0047] In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 210. In such examples, the system 100 may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system 100 and the processing resource. In other examples, the processing engine(s) 210 may be implemented by electronic circuitry.
[0048] In an embodiment, the database 202 may include data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor 104 or the processing engine 210. In an embodiment, the database 202 may be separate from the system 100. The database 202 may be configured as a cloud-based database, and the stored responses may undergo preprocessing utilizing advanced computer vision libraries, any or a combination of, but not limited to, OpenCV, Pillow, and others. The preprocessing may be facilitated by cloud-based Optical Character Recognition (OCR) technologies deployed on the cloud server.
[0049] In an exemplary embodiment, the processing engine 210 may include one or more engines selected from any of an image preprocessing engine 212, a cloud cognitive optical character recognition (OCR) engine 214, a custom-trained LLM engine 216, a regression engine 218, a feedback generation engine 220, and an MLOps engine 222.
[0050] In an exemplary embodiment, the image preprocessing engine 212 pertains to the enhancement and refinement of handwritten or online responses prior to analysis. The process encompasses any or a combination of tasks including, but not limited to, cropping images, eliminating headers and footers, and enhancing image quality. The image pre-processing engine 212 may be configured to ensure that the textual content within the images is clear and standardized, thereby facilitating subsequent analysis.
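By way of a non-limiting illustration, the following sketch shows the kind of pre-processing the image preprocessing engine 212 may perform. The use of OpenCV, the crop margins, and the threshold values are assumptions chosen for illustration only, not the claimed implementation.

```python
# Illustrative sketch of image pre-processing (engine 212); OpenCV and the
# crop/threshold values below are assumptions, not the claimed implementation.
import cv2
import numpy as np

def preprocess_answer_image(path: str, header_px: int = 120, footer_px: int = 80) -> np.ndarray:
    """Crop headers/footers and enhance a scanned answer sheet before OCR."""
    image = cv2.imread(path)
    if image is None:
        raise FileNotFoundError(path)

    # Remove header and footer bands (pixel heights are illustrative).
    h, _ = image.shape[:2]
    cropped = image[header_px:h - footer_px, :]

    # Convert to grayscale and reduce noise before binarisation.
    gray = cv2.cvtColor(cropped, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(gray, 3)

    # Adaptive thresholding copes better with uneven scan lighting.
    binary = cv2.adaptiveThreshold(
        denoised, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, blockSize=31, C=15)
    return binary
```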
[0051] In an exemplary embodiment, the optical character recognition (OCR) engine 214 may be configured to utilize cloud-based OCR technology to digitize and extract text from the preprocessed images. The OCR engine 214 precisely converts the processed images into machine-readable text, thereby facilitating automated assessment processes. Additional post-processing steps may be implemented to enhance OCR accuracy. These steps include the removal of extraneous or irrelevant text, commonly referred to as "garbage text," from the OCR output.
[0052] Furthermore, following the Optical Character Recognition (OCR) process, responses are transformed into a structured data format, specifically a data frame. Subsequently, various text pre-processing techniques, including stop word removal, stemming, and lemmatization, are applied to the text data using Natural Language Processing (NLP) libraries such as NLTK (Natural Language Toolkit) and regex (regular expressions). Additionally, to ensure accurate word representation, an automated spell checker library may be employed to rectify any spelling errors present in the OCR-generated text.
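A minimal sketch of the text clean-up described above is given below, assuming NLTK and the standard `re` module; the garbage-text pattern is a placeholder assumption rather than the specific rules used by the system.

```python
# Illustrative text pre-processing sketch; the regex pattern for OCR noise is an
# assumption, and any equivalent NLP library could be substituted.
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

STOP_WORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def clean_ocr_text(text: str) -> str:
    """Strip OCR noise, remove stop words, and lemmatize the remaining tokens."""
    # Drop non-alphanumeric runs that typically show up as OCR "garbage text".
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text).lower()
    tokens = [t for t in text.split() if t not in STOP_WORDS]
    return " ".join(LEMMATIZER.lemmatize(t) for t in tokens)
```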
[0053] In an exemplary embodiment, the custom-trained LLM engine 216 focuses on generating similarity values between student answers and model answers utilizing various Sentence Transformer models trained on historical answer copies. Sentence Transformers is a Python framework that provides state-of-the-art sentence and text embeddings for a vast array of languages, which can then be compared using cosine similarity to discern similar meanings. After examining and testing a multitude of these trained models on different data subsets with adjusted hyperparameters, text correction tools were used to clean the student answer texts. The models were then further refined to compute the embeddings of both the student and model answers and calculate cosine similarity. The refined models achieved a notable level of accuracy and a strong correlation between the similarity value and the actual marks.
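The following sketch shows, in hedged form, how section-wise similarity may be computed with the Sentence Transformers framework; the model name is a stand-in for the subject-specific fine-tuned models described above, and the sample sentences are illustrative.

```python
# Section-wise similarity sketch using the sentence-transformers framework;
# "all-MiniLM-L6-v2" is a placeholder for a custom fine-tuned, subject-specific model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def section_similarity(student_section: str, model_section: str) -> float:
    """Cosine similarity between embeddings of a student and a model answer section."""
    emb = model.encode([student_section, model_section], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

# Example: similarity computed independently for a section (here, the introduction).
intro_sim = section_similarity(
    "Federalism divides power between the union and the states.",
    "Federalism is the distribution of power between central and state governments.")
```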
[0054] In an exemplary embodiment, the regression engine 218, involving any or a combination of SVR (Support Vector Regression), Linear Regression, and Ridge Regression, is utilized to compute final assessment scores. The scores may be derived from a combination of factors including similarities observed in different sections of the answer (for example, the introduction, body, and conclusion), as well as the length of the response.
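A minimal sketch of such a regressor is shown below, assuming scikit-learn; the feature layout (section similarities plus answer length) follows the description above, while the training rows and marks shown are synthetic placeholders.

```python
# Regression engine sketch; scikit-learn is an assumed library choice and the
# training data below is synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

# Each row: [intro_similarity, body_similarity, conclusion_similarity, answer_length]
X_train = np.array([[0.82, 0.75, 0.79, 310],
                    [0.40, 0.35, 0.42, 150],
                    [0.65, 0.60, 0.58, 240]])
y_train = np.array([8.5, 3.0, 6.0])  # marks awarded by human evaluators (synthetic)

scorer = Ridge(alpha=1.0).fit(X_train, y_train)   # SVR(kernel="rbf") is an alternative
predicted_mark = scorer.predict([[0.70, 0.66, 0.72, 280]])[0]
```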
[0055] In an exemplary embodiment, the feedback generation engine 220 may be configured to employ language models to produce tailored feedback comments for students based on the submitted responses. These comments mimic instructor feedback, offering constructive criticism and guidance to aid student learning. The language models analyze the semantic content of student responses, identifying strengths, weaknesses, and areas for improvement. By providing personalized feedback, the learning experience is enhanced, and students are empowered to engage in self-assessment, ultimately promoting academic growth and improvement.
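One way the feedback generation engine 220 could be realised is by prompting a hosted LLM, as sketched below. The client library, model name, and prompt wording are assumptions; any chat-completion style LLM API could be substituted, and the prompt is only indicative of the kind of instructor-style feedback described above.

```python
# Prompt-based feedback generation sketch; the OpenAI client and model name are
# assumed placeholders, not the claimed implementation.
from openai import OpenAI

client = OpenAI()

def generate_feedback(question: str, student_answer: str, score: float) -> str:
    """Ask an LLM for instructor-style feedback on a graded answer."""
    prompt = (
        f"Question: {question}\n"
        f"Student answer: {student_answer}\n"
        f"Awarded score: {score}\n"
        "Write brief, constructive feedback highlighting strengths, weaknesses, "
        "and concrete suggestions for improvement, in the tone of an examiner.")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}])
    return response.choices[0].message.content
```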
[0056] In an exemplary embodiment, the MLOps engine 222 is crucial in streamlining processes, especially by incorporating automated training and validation. The use of a cloud-based MLOps platform significantly enhances efficiency, facilitating a smoother workflow. This approach not only optimizes performance but also paves the way for the widespread adoption of the solution.
[0057] FIG. 3 illustrates an exemplary view of a flow diagram of the proposed method for generating a score and personalized evaluation report for subjective answers, in accordance with some embodiments of the present disclosure.
[0058] As illustrated, a method 300 is provided for generating a score and personalized evaluation report for subjective answers. At step 302, the method 300 may involve receiving one or more images, this step handling the initial intake of image data. The purpose of step 302 is to prepare the images for further processing or analysis by performing various preprocessing tasks, such as adjusting image quality, removing noise, or extracting relevant features.
[0059] Continuing further, at step 304, the method 300 may involve leveraging OCR algorithms and techniques hosted on cloud servers to analyze the content within the preprocessed images and extract the textual information present in them. The extracted text can then be further processed or analyzed as needed for various applications or purposes within the system.
[0060] Continuing further, at step 306, following the conversion of the extracted text into a data frame, various text pre-processing techniques are applied to the textual content. These techniques may include tasks such as removing stop words, stemming, lemmatization, and spell checking, among others. The objective of text pre-processing is to prepare the textual data for further analysis or processing by cleaning, normalizing, and standardizing it to improve the accuracy and effectiveness of subsequent tasks, such as similarity analysis or sentiment classification.
[0061] Continuing further, at step 308, embeddings are generated for the student and model answers, and various similarity values are calculated using the trained LLM models. Sentence Transformer models are sophisticated neural-network-based architectures designed to convert text into dense vector representations that capture semantic similarities and relationships between sentences. Once the embeddings are generated for both the student and model answers, the cosine similarity between these representations is computed. Cosine similarity is a metric used to measure the similarity between two vectors by calculating the cosine of the angle between them.
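For reference, the cosine similarity between a student-answer embedding u and a model-answer embedding v referred to in this step is the standard definition:

```latex
\cos(\mathbf{u}, \mathbf{v}) \;=\; \frac{\mathbf{u} \cdot \mathbf{v}}{\lVert \mathbf{u} \rVert \, \lVert \mathbf{v} \rVert}
\;=\; \frac{\sum_{i=1}^{n} u_i v_i}{\sqrt{\sum_{i=1}^{n} u_i^{2}}\,\sqrt{\sum_{i=1}^{n} v_i^{2}}}
```

where n is the embedding dimension; values close to 1 indicate that the student and model answers are semantically close.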
[0062] Continuing further, at step 310, the method 300 includes, after utilizing the Sentence Transformer models to generate numerical representations for the student answers and model answers, together with one or more similarity values for the introduction and conclusion of the student and model answers, employing the regression models to analyze these representations and compute the final assessment scores. Regression models are statistical techniques used to model the relationship between independent variables and dependent variables.
[0063] Furthermore, the regression models may include any or a combination of Support Vector Regression (SVR), Linear Regression, and Ridge Regression. They analyze the embeddings generated by the Sentence Transformer models and determine the final assessment scores based on factors such as the similarity between the student and model answers, the length of the responses, or other relevant criteria.
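A small sketch of how these factors might be assembled into a single feature row for the regressor is given below; the section names and the simple concatenation are assumptions about one possible arrangement, not the specific feature engineering of the claimed system.

```python
# Feature-assembly sketch: section-wise similarities plus answer length are combined
# into one row for a trained SVR / Linear / Ridge regressor (arrangement is assumed).
def build_feature_vector(similarities: dict, answer_text: str) -> list:
    """Combine section-wise similarities with answer length into one feature row."""
    return [
        similarities.get("introduction", 0.0),
        similarities.get("body", 0.0),
        similarities.get("conclusion", 0.0),
        len(answer_text.split()),  # answer length in words
    ]

features = build_feature_vector(
    {"introduction": 0.78, "body": 0.69, "conclusion": 0.74},
    "Sample student answer text for illustration.")
# `features` can then be passed to a trained regressor's predict() to obtain the score.
```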
[0064] Continuing further, at step 312, the LLMs analyze the textual content of the student answers extracted from the images and generate feedback comments tailored to each student. These comments may provide constructive criticism, highlight areas of strength and weakness, offer guidance for improvement, and emulate instructor remarks. The goal is to provide personalized and meaningful feedback to students to aid their learning and development.
[0065] Continuing further, at step 312, the method 300 may involve the generation of a comprehensive automated assessment report by the processors 104 within a computing system. The report is intended to significantly enhance the learning experience and promote self-assessment for the individuals undergoing evaluation. Through this automated process, various aspects of the assessment are systematically compiled and analyzed to provide a holistic view of the individual's performance. The report typically includes detailed information such as evaluation results, personalized feedback comments, performance analysis across different criteria, and comparative data against benchmarks or previous performance.
[0066] Continuing further, at step 314, the MLOps engine is essential for streamlining various processes related to machine learning by leveraging automated training and validation. By incorporating these capabilities into a cloud-based MLOps platform, efficiency is significantly improved, leading to a smoother workflow. This technology allows for the easy recalibration of models when new data becomes available and enables the re-training of models with new features and algorithms. Overall, MLOps platforms enhance the agility and responsiveness of machine learning operations, making it easier to maintain up-to-date, effective models in a dynamic data environment.
[0067] In summary, the present disclosure integrates a variety of technologies and methodologies, including image preprocessing, Optical Character Recognition (OCR), Natural Language Processing (NLP) techniques, advanced Artificial Intelligence (AI) models (also referred to as LLM models), regression analysis, and feedback generation, to develop a comprehensive automated assessment system. By harnessing the capabilities of these components, the system is capable of accurately evaluating subjective exams, offering meaningful feedback to students, and improving the efficiency and objectivity of the assessment process.
[0068] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are comprised to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE INVENTION
[0069] The present disclosure is to provide a system and a method to incorporate advanced techniques such as image pre-processing, cloud OCR, and custom-trainable LLM techniques, ensuring accurate extraction and analysis of student answers.
[0070] The present disclosure is to provide distinct models dedicated to each subject, enhancing precision in grading and marks calculation and catering to the unique characteristics and requirements of different subjects.
[0071] The present disclosure is to provide multilingual support that ensures inclusiveness by catering to a diverse demographic pool, enabling students from different linguistic backgrounds to benefit from the assessment platform.
[0072] The present disclosure is to provide an AI/ML-based ongoing model monitoring system (MLOps) that provides dynamic adjustments based on student responses, ensuring continuous optimization and maintaining assessment precision over time.
[0073] The present disclosure is to provide a system and a method that offer a comprehensive view of performance, strengths, and areas of improvement, providing valuable insights for both students and educators to enhance learning outcomes.
Claims:
1. A system (100) to automate generation of score and personalized evaluation report for subjective answers, the system (100) comprising:
a server (102) configured to generate the score and personalized evaluation report for subjective answers; comprising:
one or more processors (104); and
a memory (106) operatively coupled to the server (102), wherein the memory (106) comprises processor-executable instructions, which on execution, cause the one or more processors (104) to:
receive, one or more images for image preprocessing (comprising any or a combination of cropping images, removing headers and footers, and enhancing image quality) using an image processing unit;
employ, cloud based Optical Character Recognition (OCR) to digitize and extract text from the preprocessed one or more images;
convert, student answers of the one or more images into a data frame, and apply text pre-processing techniques;
utilize, sentence transformer models for generating embeddings for the student answers and model answers upon the pre-processed text and compute cosine similarity between them;
employ, regression models (comprising any or a combination of Support Vector Regression (SVR), Linear Regression, and Ridge Regression) for generating final assessment scores based on factors that include similarities from one or more answer sections, any or a combination of the introduction, body, and conclusion, and the length of the answer;
utilize, Large Language Models (LLMs) to generate personalized feedback comments for students based on the answers of the one or more images; and
generate an automated assessment report to enhance the learning experience and facilitate self-assessment.
2. The system (100) as claimed in claim 1, wherein the specialized trained Sentence Transformer models are utilized to establish similarity between student answers and model answers, and the regression models are configured to calculate final scores based on similarity values extracted from one or more answer sections.
3. The system (100) as claimed in claim 1, wherein a grading mechanism assesses each part of the response (introduction, body, conclusion) on its own before accumulating all points for the final score, heightening the accuracy of the assessment.
4. The system (100) as claimed in claim 1, wherein distinct models are dedicated to each subject, further enhancing the precision of grading and marks calculation.
5. The system (100) as claimed in claim 1, wherein a language-model-driven feedback cycle offers personalized comments to aid learners' comprehension and enable self-remediation.
6. The system (100) as claimed in claim 1, wherein multilingual support caters to a wide demographic pool, allowing more inclusiveness in education.
7. The system (100) as claimed in claim 1, wherein an AI/ML-based ongoing model monitoring system (MLOps) provides dynamic adjustments calibrated to students' response copies to maintain the assessment precision.
8. The system (100) as claimed in claim 1, wherein a detailed student answer evaluation report gives a comprehensive view of the student's performance, strengths, and areas of improvement.
9. The system (100) as claimed in claim 1, wherein the combined features take student answer grading and feedback generation to a new level of sophistication, offering precision, efficiency, and ML-driven personalization that caters to modern educational needs.