
An Integrated Assistive System For Teaching And Learning

Abstract: AN INTEGRATED ASSISTIVE SYSTEM FOR TEACHING AND LEARNING. The present invention discloses an integrated assistive system (100) for teaching and learning that increases the attention of the learner and enables the expert to deliver content effectively. The assistive system (100) comprises a camera, a microphone, a speaker, a screen capturer and a processor. The processor is operatively coupled with an AI engine/module. The system also comprises a learner state assessment module, a dynamic co-creation and optimization module, a content creation module and an exam proctoring module. The learner state assessment module (102) checks the learner's mental condition before the session starts. The dynamic co-creation and optimization module (103) generates content and modifies its flow based on the user's condition, rather than merely selecting it from a library or memory. The content creation module (104) helps the subject matter expert (SME) create content using the multi-sensor data. The SME support module (104B) also helps to enhance the skills of the expert. The exam proctoring module (105) proctors the students based on real-time analysis by the AI module. The present invention improves the quality of education in real-time applications in an effective manner. Fig 1


Patent Information

Application #
202541093285
Filing Date
29 September 2025
Publication Number
44/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

32Mins Digital Consultancy Services Private Limited
No 228, 232/5A1, S2, Phase 3, 2nd floor, VIP Flats, Anugraha Gated Community, Vanniyar Street, Kolapakkam, Poonamallee, Tiruvallur-600128, Tamil Nadu, India.

Inventors

1. Sribalaji Ravi
32Mins Digital Consultancy Services Private Limited, No 228, 232/5A1, S2, Phase 3, 2nd floor, VIP Flats, Anugraha Gated Community, Vanniyar Street, Kolapakkam, Poonamallee, Tiruvallur-600128, Tamil Nadu, India.

Specification

Description:

FORM 2

The Patents Act 1970
(39 of 1970)
&
The Patent Rules, 2003

COMPLETE SPECIFICATION
(Section 10 and Rule 13)

Title: AN INTEGRATED ASSISTIVE SYSTEM FOR TEACHING AND LEARNING

Applicant:
32Mins Digital Consultancy Services Private Limited
No 228, 232/5A1, S2, Phase 3, 2nd floor, VIP Flats, Anugraha Gated Community, Vanniyar Street, Kolapakkam, Poonamallee, Tiruvallur-600128, Tamil Nadu, India.

The following specification particularly describes the invention and the manner in which it is to be performed.
AN INTEGRATED ASSISTIVE SYSTEM FOR TEACHING AND LEARNING
TECHNICAL FIELD
The present invention relates to the field of digital learning systems. More specifically, the present invention relates to an integrated assistive system for teaching and learning of concepts that uses an AI engine to dynamically generate and customize content.
BACKGROUND
The evolution of digital learning technologies has created new opportunities for accessibility, scalability, and convenience in education. However, despite rapid advancements, existing online teaching and learning platforms suffer from several critical limitations that reduce the overall effectiveness, inclusiveness, and credibility of digital education. The conventional systems primarily focus on delivering pre-recorded or live-streamed content, without adequately considering the learner’s cognitive and emotional state during the learning process. The learners may experience cognitive overload, boredom, frustration, disengagement, or confusion at various points in the session. Presently, there are no reliable mechanisms within the existing platforms to monitor and profile such complex states using multimodal data, such as facial expressions, vocal tonality, and eye-tracking. As a result, educators remain unaware of the learner’s real-time challenges, and the system fails to dynamically intervene with appropriate instructional strategies.
Furthermore, the existing platforms typically lack adaptive content generation or customization capabilities. Most digital systems present static instructional material that does not change according to learner engagement, comprehension level, or emotional condition. This absence of personalized optimization reduces effectiveness, particularly for learners who require differentiated support or alternative modes of explanation.
Another drawback is the insufficient support provided to subject matter experts (SMEs) in the content creation process. The conventional systems do not provide analytical tools to evaluate SME teaching quality or presentation style across dimensions such as pacing, clarity, vocal engagement, or audience comprehension indicators. Nor do they offer real-time, non-intrusive feedback to help SMEs adjust their teaching approaches during content creation. Similarly, there is no structured mechanism to recommend the inclusion of interactive elements, alternative explanations, supplementary resources, or segmentation into micro-learning modules. This lack of intelligent support contributes to inconsistencies in content delivery and learning outcomes. The problem further extends to the evaluation process.
The current digital platforms employ rudimentary proctoring mechanisms that rely on limited video monitoring or manual invigilation. These systems are unable to analyse subtle behavioural cues, such as micro-expressions, eye-gaze patterns, or anomalous audio events, that could indicate external assistance, unauthorized resource use, or impersonation during exams. The lack of advanced, AI-assisted proctoring significantly undermines assessment integrity and diminishes trust in online certifications. In addition, existing solutions often provide these functionalities in isolation. For example, some platforms focus on adaptive learning, others on proctoring, and a few on SME training. However, there is no integrated solution that combines learner state assessment, dynamic content optimization, SME support, and robust proctoring into a single cohesive system. The fragmented nature of available tools compels institutions and educators to rely on multiple, disconnected platforms, which creates inefficiency and reduces adoption.
Thus, there is a need for an integrated assistive system for teaching and learning that uses Artificial Intelligence to analyse the learner state, customize the contents, create the contents, assist the SMEs and provide real-time proctoring. The present invention overcomes the aforementioned problems, limitations and disadvantages in an effective manner.
OBJECTIVE OF THE INVENTION
The primary object of the present invention is to provide an integrated assistive system for teaching and learning by utilizing the AI engine to improve the quality of education.
Another object of the present invention is to deliver the contents based on the learner state assessment using multi-sensor data.
Another object of the present invention is to dynamically generate as well as customize the contents based on the learner state, rather than merely selecting them from the library or memory.
Another object of the present invention is to help the subject matter experts create the contents using the multi-sensor data.
Another object of the present invention is to provide feedback on the SMEs’ contents using the AI engine.
Another object of the present invention is to provide an SME support module for assisting them in creating clear and focused content based on the learner requirement.
Yet another object of the present invention is to provide an exam proctoring module for providing real-time, situation-based proctoring results using the AI engine/module.
These and other objects and advantages of the present invention will become readily apparent from the following detailed description taken in conjunction with the accompanying drawings.
SUMMARY
The various embodiments of the present invention disclose an integrated assistive system for teaching and learning to increase the attention of the learner as well as to enable effective delivery of the contents by the expert. The assistive system comprises a camera, a microphone, a speaker, a screen capturer and a processor. The processor is operatively coupled with an AI engine/module. The system also comprises a learner state assessment module. The learner state assessment module is configured to receive the multi-sensor data for profiling of a learner’s complex cognitive and emotional states by using one or more combinations of facial expressions, vocal tonality and eye-tracking patterns. The said learner state assessment module predictively identifies the points of learning friction or disengagement by using one or more indications. The said one or more emotional state indications may comprise engaged flow, constructive struggle, cognitive overwhelm, boredom, frustration, a eureka moment or the like.
The integrated assistive system also comprises a dynamic co-creation and optimization module operatively coupled with the processor for dynamically generating or customizing the contents in one or more categories, wherein the said one or more categories may comprise simplified analogies, targeted quizzes, shifts in content modality, contextual hints or prompts, or positive reinforcement. The one or more categories will be injected into the learning pathway by the said module based on the personalized information identified by the learner state assessment module.
The integrated assistive system also comprises a content creation module configured to enable the subject matter experts to create the education-related contents using the multi-sensor data. The content creation module comprises an analytical feedback module configured to analyse the subject matter expert’s presentation style in one or more dimensions. The said one or more dimensions may comprise the clarity of explanation, pacing and flow, vocal engagement, screen presence and visual communication, content structuring and organization, audience comprehension indicators or the like.
The content creation module also comprises a personalized SME support module configured to: provide subtle, non-intrusive cues or visual indicators to the SME regarding aspects like pacing, vocal engagement, or areas where they might elaborate further, during the content recording process; receive the data-driven feedback report including specific timestamps and examples for each piece of feedback; suggest optimal points within the video content for the inclusion of interactive elements such as quizzes, polls, discussion prompts, or simulations; suggest alternative ways of explaining complex concepts, offering different analogies, examples, or even alternative teaching methodologies that might resonate better; analyse longer video segments and recommend logical breakpoints for creating shorter, more digestible micro-learning modules, improving flexibility for learners; suggest integrating external resources such as research papers, supplementary readings, or relevant online tools to enrich the learning experience; and provide targeted skill development, interactive practice sessions, best practice resources, and progress tracking or the like for the subject matter experts.
The integrated assistive system also comprises an exam proctoring module configured to: analyse the real-time data to identify potential academic integrity breaches by using one or more combinations of facial cues, sustained eye-gaze shifts away from the exam interface, anomalous audio events indicating high stress potentially unrelated to exam content, or the like; differentiate between normal exam stress/cognitive load and patterns indicative of external assistance, use of unauthorised materials, or impersonation, wherein the process identifies subtle cues like micro-expressions associated with deception or unusual interaction patterns with the device/environment; provide detailed information, including time-stamps, specific data points from all relevant sensors (video, audio and screen activity), and the AI's confidence level in the anomaly; provide real-time audio or text alerts to the user if unfavourable activity is detected; execute a temporary exam pause or advise the human proctor based on a high-confidence anomaly; provide a comprehensive integrity report comprising an organized summary of all detected anomalies, precise time-stamps for each event, the assessment result, and the specific type of anomaly identified; and provide access to the relevant video clips, audio recordings, and screen activity logs directly linked to the anomaly.
These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
BRIEF DESCRIPTION OF THE DRAWINGS
The other objects, features and advantages will occur to those skilled in the art from the following description of the preferred embodiment and the accompanying drawings in which:
Fig 1 illustrates the schematic view of the integrated assistive system for teaching and learning, according to an embodiment of the present invention.
Although the specific features of the present invention are shown in some drawings and not in others, this is done for convenience only, as each feature may be combined with any or all of the other features in accordance with the present invention.
100- An Integrated Assistive System for Teaching and Learning
101- A Multi-Sensor Data Module (a camera, a microphone, a speaker and a screen capturer)
102- A Learner State Assessment Module
103- A Dynamic Co-Creation and Optimization Module
104- A Content Creation Module
104A- An Analytical Feedback Module
104B- A Personalized SME Support Module
105- An Exam Proctoring Module
106- A Processor
DETAILED DESCRIPTION
The various embodiments and the other advancements and features are illustrated with reference to the non-limiting details in the following detailed description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The various embodiments of the present invention disclose an integrated assistive system for teaching and learning to increase the attention of the learner as well as to enable effective delivery of the contents by the expert. The assistive system (100) comprises a camera, a microphone, a speaker, a screen capturer and a processor. The processor (106) is operatively coupled with an AI engine/module. The said camera, microphone, speaker and screen capturer are operatively coupled with the processor. The entire system is operable by a power source. The integrated assistive system can be operated through one or more user handheld devices, such as a mobile phone, laptop, PC, tablet and the like, for teaching and learning the contents. The AI engine/tool/module executes the operations dynamically based on the conditions of the learner/expert. One or more forms of internet connectivity support the system in using AI for analysing the learner state, generating the contents, customizing the delivery, creating the contents, supporting the SMEs, providing online proctoring and generating feedback.
The system also comprises a learner state assessment module, a dynamic co-creation and optimization module, a content creation module and an exam proctoring module. The learner state assessment module (102) is configured to receive the multi-sensor data (101) for profiling of a learner’s complex cognitive and emotional states by using one or more combinations of facial expressions, vocal tonality and eye-tracking patterns. The said learner state assessment module predictively identifies the points of learning friction or disengagement by using one or more indications. The said one or more emotional state indications may comprise engaged flow, constructive struggle, cognitive overwhelm, boredom, frustration, a eureka moment or the like.
Fig 1 illustrates the schematic view of the integrated assistive system for teaching and learning, according to an embodiment of the present invention. The said assistive system (100) has an AI engine to fuse multi-sensor data (including facial expressions, vocal tonality, eye-tracking patterns) to perform nuanced, real-time profiling of a learner’s complex cognitive and emotional states (e.g., "engaged flow," "constructive struggle," "cognitive overwhelm," "boredom"). Based on this deep understanding, the system predictively identifies points of learning friction or disengagement. It then dynamically co-creates and injects personalized micro-learning interventions (such as simplified analogies, targeted quizzes, or shifts in content modality) and optimises the learner's overall learning pathway in real-time. This moves beyond merely adjusting pre-existing content difficulty. This innovative system redefines adaptive learning by moving beyond static content adjustments to offer a truly dynamic and responsive educational experience. At its core, the system leverages a sophisticated AI engine designed to fuse and interpret multi-sensor data streams. This goes beyond simple performance metrics, incorporating rich, nuanced insights from:
Facial Expressions: Analyzing subtle cues to gauge engagement, confusion, frustration, or triumph.
Vocal Tonality: Detecting shifts in pitch, pace, and volume that indicate cognitive load, confidence, or boredom.
Eye-Tracking Patterns: Monitoring gaze fixation, saccades, and pupil dilation to understand attention, processing effort, and areas of interest or difficulty.
This continuous data fusion allows for real-time profiling of a learner's complex cognitive and emotional states. Rather than broad categorizations, the AI pinpoints specific, actionable states such as those listed below (an illustrative fusion sketch follows the list):
"Engaged Flow": Characterized by deep concentration and optimal learning.
"Constructive Struggle": Indicating a challenging but productive learning phase.
"Cognitive Overwhelm": Suggesting information overload or confusion.
"Boredom": Signifying a lack of challenge or engagement.
"Frustration": Highlighting a potential barrier to progress.
"Eureka Moment": Identifying successful breakthroughs in understanding.
The dynamic co-creation and optimization module is operatively coupled with the processor for dynamically generating or customizing the contents in one or more categories, wherein the said one or more categories may comprise simplified analogies, targeted quizzes, shifts in content modality, contextual hints or prompts, or positive reinforcement. The one or more categories will be injected into the learning pathway by the said module based on the personalized information identified by the learner state assessment module.
Based on this profound, real-time understanding of the learner's state, the system proactively and predictively identifies critical points of learning friction, disengagement, or even optimal receptivity. It then moves beyond pre-defined responses to:

Dynamically Co-create Micro-Learning Interventions: The AI doesn't just select from a library of existing content; it intelligently generates or customizes interventions in real-time. Examples include:
Simplified Analogies: Tailored explanations that connect new concepts to familiar ideas.
Targeted Quizzes: Brief, diagnostic assessments designed to pinpoint specific misunderstandings or reinforce learning.
Shifts in Content Modality: Automatically switching from text to video, interactive simulations, or audio explanations based on the learner's current cognitive state and preferred learning style.
Contextual Hints or Prompts: Providing just enough guidance to overcome a hurdle without giving away the answer.
Positive Reinforcement: Delivering timely encouragement when progress is detected.

Inject Interventions Seamlessly:
These personalized interventions are injected into the learning pathway at the precise moment they are most effective, ensuring minimal disruption and maximum impact. The system constantly adapts the sequence, difficulty, and presentation of content, not just by adjusting pre-existing content difficulty, but by fundamentally reshaping the learning journey itself. This ensures that each learner receives a truly bespoke and continuously optimized educational experience that maximizes understanding, retention, and engagement, preventing cognitive overload or disengagement before they become significant barriers.
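By way of illustration only, the generate-and-inject behaviour described above can be skeletonized as follows. The generate_analogy stub and the intervention table are hypothetical stand-ins for calls into the AI engine, not the claimed implementation.

```python
# Illustrative stand-ins for AI-engine calls; a real system would generate
# these interventions with a model rather than format strings.
from typing import Callable

def generate_analogy(topic: str) -> str:
    return f"[freshly generated analogy relating '{topic}' to a familiar idea]"

INTERVENTIONS: dict[str, Callable[[str], str]] = {
    "cognitive_overwhelm": generate_analogy,
    "boredom":             lambda t: f"[targeted quiz on '{t}']",
    "frustration":         lambda t: f"[contextual hint for '{t}']",
    "engaged_flow":        lambda t: "",  # never interrupt optimal flow
}

def inject_intervention(state: str, topic: str, pathway: list[str]) -> None:
    """Append a personalized intervention at the current point of the pathway."""
    content = INTERVENTIONS.get(state, lambda t: "")(topic)
    if content:
        pathway.append(content)

pathway = ["lesson segment: recursion basics"]
inject_intervention("cognitive_overwhelm", "recursion", pathway)
print(pathway)  # the generated analogy now follows the lesson segment
```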
The content creation module (104) is configured to enable the subject matter experts to create the education-related contents using the multi-sensor data. The content creation module comprises an analytical feedback module (104A) configured to analyse the subject matter expert’s presentation style in one or more dimensions. The said one or more dimensions may comprise the clarity of explanation, pacing and flow, vocal engagement, screen presence and visual communication, content structuring and organization, audience comprehension indicators or the like.
The content creation module (104) also comprises a personalized SME support module (104B) configured to: provide subtle, non-intrusive cues or visual indicators to the SME regarding aspects like pacing, vocal engagement, or areas where they might elaborate further, during the content recording process; receive the data-driven feedback report including specific timestamps and examples for each piece of feedback; suggest optimal points within the video content for the inclusion of interactive elements such as quizzes, polls, discussion prompts, or simulations; suggest alternative ways of explaining complex concepts, offering different analogies, examples, or even alternative teaching methodologies that might resonate better; analyse longer video segments and recommend logical breakpoints for creating shorter, more digestible micro-learning modules, improving flexibility for learners; suggest integrating external resources such as research papers, supplementary readings, or relevant online tools to enrich the learning experience; and provide targeted skill development, interactive practice sessions, best practice resources, and progress tracking or the like for the subject matter experts.
The system's hardware serves as an intelligent recording and analysis tool for Subject Matter Experts (SMEs/faculty). While an SME creates instructional video content, the AI analyses their presentation style (e.g., clarity, pacing, vocal engagement, screen presence).
The system provides SMEs with:
Real-time and post-hoc actionable feedback on their delivery.
AI-driven suggestions for content enhancement (e.g., where to insert interactive elements, alternative explanations).
Personalized coaching modules to refine their instructional skills.

SME-Centric Intelligent Content Creation & Pedagogical Development Suite: Empowering Educators with AI-Driven Insights
This innovative suite is designed to revolutionize the way Subject Matter Experts (SMEs) and faculty develop and deliver instructional video content, fostering a more engaging and effective learning experience. At its core, the system leverages advanced AI capabilities to provide comprehensive support throughout the content creation lifecycle, from initial recording to continuous pedagogical refinement.
The system's hardware functions as a sophisticated, intelligent recording and analysis tool, seamlessly integrating into the SME's content creation process. As an SME delivers instructional video content, the AI meticulously analyzes their presentation style across multiple dimensions (an illustrative analysis sketch follows the list below). This includes, but is not limited to:
Clarity of Explanation: Evaluating the coherence and understandability of the presented concepts. The AI identifies instances of jargon, convoluted sentences, or areas where further simplification might be beneficial.
Pacing and Flow: Assessing the delivery speed, identifying segments that might be too fast or too slow, and suggesting optimal transitions between topics to maintain learner engagement.
Vocal Engagement: Analyzing vocal parameters such as tone, inflection, volume, and modulation to gauge the SME's enthusiasm and ability to hold the audience's attention. This also includes detecting monotonous delivery or lack of vocal variation.
Screen Presence and Visual Communication: Observing eye contact (if applicable), gestures, and overall body language to ensure effective non-verbal communication. It can also analyze the use of visual aids, their clarity, and how well they complement the verbal explanation.
Content Structuring and Organization: Providing insights into the logical flow of the content, identifying potential disjunctions, and suggesting improved sequencing of information.
Audience Comprehension Indicators: While not directly measuring learner comprehension in real-time during content creation, the AI can analyze speech patterns and presentation cues that typically correlate with higher or lower audience engagement and understanding.
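As a non-limiting illustration of two of these dimensions, pacing and vocal engagement might be scored as in the sketch below. It assumes word-onset timestamps and pitch samples have already been extracted from the recording by upstream speech processing; the comfortable ranges and thresholds are illustrative assumptions.

```python
# Hedged sketch covering only the pacing and vocal-engagement dimensions.
import statistics

def analyse_pacing(word_onsets_s: list[float],
                   target_wpm: tuple[float, float] = (110.0, 160.0)) -> dict:
    """Estimate words per minute and judge it against a comfortable range."""
    duration_min = (word_onsets_s[-1] - word_onsets_s[0]) / 60.0
    wpm = len(word_onsets_s) / duration_min
    low, high = target_wpm
    verdict = "ok" if low <= wpm <= high else ("too fast" if wpm > high else "too slow")
    return {"words_per_minute": round(wpm, 1), "verdict": verdict}

def analyse_vocal_variation(pitch_samples_hz: list[float]) -> dict:
    """Low pitch spread is a crude proxy for monotonous delivery."""
    spread = statistics.pstdev(pitch_samples_hz)
    return {"pitch_stdev_hz": round(spread, 1),
            "verdict": "monotone" if spread < 10.0 else "varied"}

print(analyse_pacing([0.5 * i for i in range(90)]))        # ~121 wpm -> ok
print(analyse_vocal_variation([118, 121, 119, 120, 122]))  # -> monotone
```

A per-segment report built from such scores is what would be surfaced to the SME, with timestamps, in the feedback described below.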
Beyond real-time analysis, the suite offers a robust set of tools and resources for ongoing development and enhancement of instructional skills and content quality. These include:
Real-time and Post-hoc Actionable Feedback on Delivery:
Real-time Feedback: During the recording process, the system can provide subtle, non-intrusive cues or visual indicators to the SME regarding aspects like pacing, vocal engagement, or areas where they might elaborate further. This allows for immediate adjustments and improvements.
Post-hoc Comprehensive Reports: After the recording session, the system generates detailed, data-driven reports highlighting strengths and areas for improvement. These reports are granular, providing specific timestamps and examples for each piece of feedback, making it easy for SMEs to understand and address identified issues. For instance, it might pinpoint a specific minute mark where the pacing was too fast, or where a concept was explained with insufficient clarity.
AI-Driven Suggestions for Content Enhancement:
Interactive Element Insertion: The AI intelligently suggests optimal points within the video content for the inclusion of interactive elements such as quizzes, polls, discussion prompts, or simulations. These suggestions are based on the content's complexity, potential learning bottlenecks, and established pedagogical principles to maximize learner engagement and knowledge retention.
Alternative Explanations and Analogies: Leveraging its vast knowledge base, the AI can propose alternative ways of explaining complex concepts, offering different analogies, examples, or even alternative teaching methodologies that might resonate better with diverse learning styles.
Content Segmentation and Micro-Learning Opportunities: The system can analyze longer video segments and recommend logical breakpoints for creating shorter, more digestible micro-learning modules, improving flexibility for learners (a segmentation sketch follows this list).
Resource Integration Suggestions: Based on the topic, the AI might suggest integrating external resources such as research papers, supplementary readings, or relevant online tools to enrich the learning experience.
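One plausible, assumption-laden way to recommend such breakpoints is to cut at long inter-sentence pauses once a minimum segment length has accrued, as sketched below; the pause and length thresholds are illustrative, and sentence timestamps are assumed to come from a transcript aligner.

```python
# Illustrative segmentation sketch: cut at long pauses between sentences.
def recommend_breakpoints(sentence_ends_s: list[float],
                          sentence_starts_s: list[float],
                          min_segment_s: float = 180.0,
                          min_pause_s: float = 1.5) -> list[float]:
    """Return timestamps (s) where a micro-learning boundary is recommended."""
    breakpoints: list[float] = []
    last_cut = 0.0
    for end, next_start in zip(sentence_ends_s, sentence_starts_s[1:]):
        pause = next_start - end
        if pause >= min_pause_s and end - last_cut >= min_segment_s:
            breakpoints.append(end)
            last_cut = end
    return breakpoints

ends   = [175.0, 200.0, 390.0, 555.0]
starts = [0.0, 177.5, 201.0, 392.0]
print(recommend_breakpoints(ends, starts))  # -> [390.0]
```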
The exam proctoring module (105) is configured to: analyse the real-time data to identify potential academic integrity breaches by using one or more combinations of facial cues, sustained eye-gaze shifts away from the exam interface, anomalous audio events indicating high stress potentially unrelated to exam content, or the like; differentiate between normal exam stress/cognitive load and patterns indicative of external assistance, use of unauthorised materials, or impersonation, wherein the process identifies subtle cues like micro-expressions associated with deception or unusual interaction patterns with the device/environment; provide detailed information, including time-stamps, specific data points from all relevant sensors (video, audio and screen activity), and the AI's confidence level in the anomaly; provide real-time audio or text alerts to the user if unfavourable activity is detected; execute a temporary exam pause or advise the human proctor based on a high-confidence anomaly; provide a comprehensive integrity report comprising an organized summary of all detected anomalies, precise time-stamps for each event, the assessment result, and the specific type of anomaly identified; and provide access to the relevant video clips, audio recordings, and screen activity logs directly linked to the anomaly.
The AI analyses real-time data (facial cues, sustained eye gaze shifts away from the exam interface, anomalous audio events indicating high stress potentially unrelated to exam content) to identify potential academic integrity breaches.
Nuanced Anomaly Detection: Instead of simple rule-based flagging, the AI differentiates between normal exam stress/cognitive load and patterns indicative of external assistance, use of unauthorised materials, or impersonation. It can identify subtle cues like micro-expressions associated with deception or unusual interaction patterns with the device/environment.
Adaptive Proctoring & Intervention: If suspicious patterns are detected, the system can:
Log events with rich data for human review.
Issue contextual, non-intrusive prompts to the test-taker (e.g., "Please ensure your attention remains on the screen").
In high-confidence anomaly scenarios, it can temporarily pause the exam or escalate to a human proctor if integrated.
Post-Exam Integrity Analysis: Provides a comprehensive integrity report, highlighting and time-stamping suspicious events with supporting multi-sensor data, offering a more robust and evidence-based review process than traditional methods.
AI-Powered Exam Proctoring & Integrity Assurance: A Comprehensive Overview
The module continuously analyses real-time multi-sensor data for potential integrity breaches. This data includes, but is not limited to (an illustrative detection sketch follows the list):
Facial Cues: Analyzing subtle shifts in facial expressions, micro-expressions, and overall demeanor for indicators of stress, deception, or unusual cognitive load.
Sustained Eye Gaze Shifts: Detecting prolonged periods where the test-taker's eyes deviate significantly from the exam interface, potentially indicating the use of external materials or looking for assistance off-screen.
Anomalous Audio Events: Monitoring for unusual or suspicious audio cues, such as whispering, multiple voices, or sounds that could suggest the presence of unauthorized individuals or devices. This includes identifying high-stress vocalizations that might be unrelated to the exam content itself, but rather to an attempt to cheat.
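For illustration, turning one such signal, sustained off-screen gaze, into time-stamped anomaly records might look like the following sketch. The per-second flags are assumed to come from an upstream gaze tracker, and the run-length threshold is an assumption; audio and facial cues would feed parallel detectors of the same shape.

```python
# Minimal sketch: convert per-second off-screen gaze flags into time-stamped
# anomaly records for later fusion with audio and facial detectors.
from dataclasses import dataclass

@dataclass
class Anomaly:
    kind: str
    start_s: int
    end_s: int

def detect_gaze_anomalies(gaze_off_screen: list[bool],
                          min_run_s: int = 8) -> list[Anomaly]:
    """Flag sustained runs of off-screen gaze of at least min_run_s seconds."""
    anomalies: list[Anomaly] = []
    run_start = None
    for t, off in enumerate(gaze_off_screen + [False]):  # sentinel closes a final run
        if off and run_start is None:
            run_start = t
        elif not off and run_start is not None:
            if t - run_start >= min_run_s:
                anomalies.append(Anomaly("sustained_gaze_shift", run_start, t))
            run_start = None
    return anomalies

stream = [False] * 5 + [True] * 10 + [False] * 5   # 10 s off-screen from t=5
print(detect_gaze_anomalies(stream))               # -> one anomaly, t=5..15
```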
Nuanced Anomaly Detection: Beyond Simple Rules:
Unlike rudimentary proctoring systems that rely on simplistic, rule-based flagging (e.g., "if eyes move, flag"), this AI employs a sophisticated approach to differentiate between normal human behaviors during an exam and genuine integrity violations. It's designed to understand context and nuance, distinguishing between:
Normal Exam Stress/Cognitive Load: The AI can recognize common psychological and behavioral responses to the pressure of an exam, such as momentary glances away to think, fidgeting, or natural variations in expression due to complex problem-solving.
Patterns Indicative of External Assistance: Identifying behaviors that strongly suggest a test-taker is receiving help from another person, such as looking off-camera at a specific angle, mouthing words, or specific patterns of interaction with an external device.
Use of Unauthorized Materials: Detecting instances where a test-taker is attempting to consult notes, books, or electronic devices not permitted during the exam. This might involve subtle hand movements, specific focal points, or changes in posture.
Impersonation: Analyzing biometric data and behavioral patterns to flag potential instances where the person taking the exam is not the registered individual. This could involve discrepancies in facial recognition, voice patterns, or unusual engagement with the system.
Subtle Cues: The system is trained to identify highly subtle indicators often missed by human proctors, such as micro-expressions associated with deception (e.g., fleeting signs of contempt, fear, or surprise when confronted with a challenging question they might be attempting to bypass). It also analyzes unusual interaction patterns with the device or the surrounding environment that might suggest a circumvention attempt.
Adaptive Proctoring & Intervention: Real-time Response:
A key strength of this system is its ability to adapt and intervene in real-time when suspicious patterns are detected. The intervention strategies are designed to be both effective and minimally intrusive, escalating only when confidence in an anomaly is high (a minimal escalation sketch follows this list):
Logging Events with Rich Data: For every suspicious event, the system automatically logs detailed information, including time-stamps, specific data points from all relevant sensors (video, audio, screen activity), and the AI's confidence level in the anomaly. This rich data serves as crucial evidence for subsequent human review.
Contextual, Non-Intrusive Prompts: In cases of moderate suspicion, the system can issue gentle, on-screen prompts to the test-taker. These prompts are designed to be non-accusatory and serve as a subtle reminder to adhere to exam rules (e.g., "Please ensure your attention remains on the screen," "Maintain focus on the exam interface," or "Ensure your environment is clear"). This often helps deter minor deviations without causing undue stress or disruption.
Temporary Exam Pause or Escalation to Human Proctor: In high-confidence anomaly scenarios – where the AI's analysis strongly indicates a clear integrity breach (e.g., detected external voice providing answers, clear view of unauthorized material) – the system has the capability to:
Temporarily Pause the Exam: This can be used to interrupt ongoing cheating attempts and provide a clear warning.
Escalate to a Human Proctor: If integrated with a live proctoring service, the system can immediately alert a human proctor, providing them with the detailed anomaly data and allowing them to take over real-time monitoring and intervention.
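The tiered responses above might reduce to a confidence-keyed policy like the sketch below; the thresholds are purely illustrative assumptions, not values fixed by the specification.

```python
# Hypothetical escalation policy keyed on the AI's anomaly confidence.
def respond_to_anomaly(confidence: float) -> str:
    if confidence < 0.3:
        return "log_only"               # keep evidence for post-exam review
    if confidence < 0.7:
        return "on_screen_prompt"       # gentle, non-accusatory reminder
    if confidence < 0.9:
        return "pause_exam"             # interrupt a likely breach, warn clearly
    return "escalate_to_human_proctor"  # hand over with the full anomaly data

for c in (0.2, 0.5, 0.8, 0.95):
    print(f"confidence {c:.2f} -> {respond_to_anomaly(c)}")
```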
Post-Exam Integrity Analysis: Evidence-Based Review:
Beyond real-time proctoring, the system offers invaluable capabilities for post-exam integrity analysis. It generates a comprehensive integrity report that significantly enhances the review process compared to traditional methods (a report-structure sketch follows this list):
Highlighting and Time-Stamping Suspicious Events: The report provides a clear, organized summary of all detected anomalies, precise time-stamps for each event, and the specific type of anomaly identified.
Supporting Multi-Sensor Data: Crucially, the report includes supporting multi-sensor data for each flagged event. This means reviewers can access relevant video clips, audio recordings, and screen activity logs directly linked to the anomaly. This provides irrefutable evidence and context.
Robust and Evidence-Based Review: By offering a detailed, data-rich report, the system enables a far more robust, objective, and evidence-based review process for educational institutions. This reduces ambiguity, streamlines investigations, and supports fair and accurate decisions regarding academic integrity.
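The report described above could be represented with a plain data structure such as the sketch below; the field names and the evidence-link scheme are illustrative assumptions, not a schema defined by the specification.

```python
# Sketch of the integrity report as plain dataclasses; `evidence` holds links
# to the video, audio and screen-activity clips tied to each flagged event.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class FlaggedEvent:
    timestamp_s: float
    anomaly_type: str
    confidence: float
    evidence: dict = field(default_factory=dict)

@dataclass
class IntegrityReport:
    exam_id: str
    assessment_result: str
    events: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

report = IntegrityReport(exam_id="EXAM-001", assessment_result="review_required")
report.events.append(FlaggedEvent(
    timestamp_s=612.4, anomaly_type="anomalous_audio", confidence=0.83,
    evidence={"audio_clip": "evidence/audio_0612.wav",
              "screen_log": "evidence/screen_0612.json"}))
print(report.to_json())
```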
In an embodiment, the multi-sensor data (101) helps to extract one or more details from the learner as well as the expert using the processor instructions.
The method of operation of the system (100) comprises the steps of (an end-to-end sketch follows these steps):
Receiving, by a processor, the multi-sensor data from the user end;
Analysing, by a learner state assessment module, the multi-sensor data for profiling of a learner’s complex cognitive and emotional states by using one or more combinations of facial expressions, vocal tonality and eye-tracking patterns;
Identifying, by a learner state assessment module, the points of learning friction or disengagement by using one or more indications, wherein the said one or more emotional state indications may comprise engaged flow, constructive struggle, cognitive overwhelm, boredom, frustration, a eureka moment or the like;
Generating, by a dynamic co-creation and optimization module, the contents in one or more categories based on the condition of the user, wherein the said one or more categories may comprise simplified analogies, targeted quizzes, shifts in content modality, contextual hints or prompts, or positive reinforcement, wherein the one or more categories will be injected continuously into the learning pathway by the said module based on the personalized information identified by the learner state assessment module;
Creating, by a content creation module, the education-related contents by the subject matter experts using the multi-sensor data;
Analysing, by an analytical module, the subject matter expert’s presentation style in one or more dimensions, wherein the said one or more dimensions may comprise the clarity of explanation, pacing and flow, vocal engagement, screen presence and visual communication, content structuring and organization, audience comprehension indicators or the like;
Providing, by an SME support module, the personalized support to the subject matter expert for content recording, dynamic feedback, skill enhancement and content optimization; and,
Proctoring, by an exam proctoring module, the user using the multi-sensor data for analysing the breaches, differentiating the stress/load, storing of logs, alerting the user, generating report and providing feedback.
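By way of illustration, the learner-facing steps of this method can be sketched as one control loop. The helper stubs below stand in for the module behaviours sketched earlier and are assumptions, not the claimed implementation.

```python
# End-to-end skeleton of the method's learner-facing steps. The stubs stand in
# for the learner state assessment (102) and co-creation (103) modules.
def classify_state(frame: dict) -> str:
    # Stub for steps 2-3: profile the learner and detect friction.
    return "cognitive_overwhelm" if frame.get("load", 0.0) > 0.8 else "engaged_flow"

def inject_intervention(state: str, topic: str, pathway: list) -> None:
    # Stub for step 4: generate and inject a personalized intervention.
    if state != "engaged_flow":
        pathway.append(f"[generated intervention for '{topic}' in state {state}]")

def run_learning_session(sensor_stream, topic: str) -> list:
    pathway = [f"lesson: {topic}"]
    for frame in sensor_stream:                      # step 1: receive multi-sensor data (101)
        state = classify_state(frame)                # steps 2-3
        inject_intervention(state, topic, pathway)   # step 4
    return pathway
    # Content creation (104), SME support (104B) and proctoring (105) would run
    # as parallel loops of the same receive -> analyse -> act shape (steps 5-8).

print(run_learning_session([{"load": 0.2}, {"load": 0.9}], "recursion"))
```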
In an embodiment, the interactive assistive system is configured to teach and learn the concepts in an effective manner by using the AI module. The present invention dynamically generates the contents based on the assessment of the learner as well as the SME. The user can operate the system using the user handheld devices. The user first needs to switch on the multi-sensor device to initiate the teaching and learning process. The user interface or display helps to project the details to the user. The users may operate the system using the said display. The user needs to switch ON the system. The learner’s state is assessed by the processor using multi-sensor data combined with AI knowledge. Based on the learner state assessment, the system dynamically adjusts the content and its flow. The AI engine creates and modifies the method of delivery or flow of content according to the learner’s condition. The user can generate content using multi-sensor data. The AI tool analyses the content quality and flow. The AI tool suggests the best option for instant implementation or similar actions. The AI tool generates feedback on the content to enhance the skills of the SME. The AI tool proctors/monitors exams based on real-time analysis. The AI tool provides a detailed feedback report with timestamps.
It is noted that the above-described examples of the present invention are for the purpose of illustration only. Although the present invention has been described in conjunction with a specific example thereof, numerous modifications may be possible without materially departing from the teachings and advantages of the subject matter described herein. Other substitutions, modifications and changes may be made without departing from the spirit of the present solution. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Although the embodiments herein are described with various specific embodiments, it will be obvious for a person skilled in the art to practice the embodiments herein with modifications.
Claims
We claim,
1. An integrated assistive system (100) for teaching and learning, comprising:
a camera;
a microphone;
a speaker;
a screen capturer;
a processor (106) operatively coupled with an AI engine;
a learner state assessment module (102) configured to receive the multi-sensor data for profiling of a learner’s complex cognitive and emotional states by using one or more combinations of facial expressions, vocal tonality and eye-tracking patterns,
wherein the said learner state assessment module predictively identifies the points of learning friction or disengagement by using one or more indications, wherein the said one or more emotional state indications may comprise engaged flow, constructive struggle, cognitive overwhelm, boredom, frustration, a eureka moment or the like;
a dynamic co-creation and optimization module (103) operatively coupled with the processor for dynamically generating or customizing the contents in one or more categories, wherein the said one or more categories may comprise simplified analogies, targeted quizzes, shifts in content modality, contextual hints or prompts, or positive reinforcement, wherein the one or more categories will be injected into the learning pathway by the said module based on the personalized information identified by the learner state assessment module;
a content creation module (104) configured to enable the subject matter experts to create the education-related contents using the multi-sensor data, wherein the content creation module comprises:
an analytical feedback module (104A) configured to analyse the subject matter expert’s presentation style in one or more dimensions, wherein the said one or more dimensions may comprise the clarity of explanation, pacing and flow, vocal engagement, screen presence and visual communication, content structuring and organization, audience comprehension indicators or the like; and,
a personalized SME support module (104B) configured to:
- provide subtle, non-intrusive cues or visual indicators to the SME regarding aspects like pacing, vocal engagement, or areas where they might elaborate further, during the content recording process;
- receive the data-driven feedback report including specific timestamps and examples for each piece of feedback;
- suggest optimal points within the video content for the inclusion of interactive elements such as quizzes, polls, discussion prompts, or simulations;
- suggest alternative ways of explaining complex concepts, offering different analogies, examples, or even alternative teaching methodologies that might resonate better;
- analyse longer video segments and recommend logical breakpoints for creating shorter, more digestible micro-learning modules, improving flexibility for learners;
- suggest integrating external resources such as research papers, supplementary readings, or relevant online tools to enrich the learning experience; and,
- provide targeted skill development, interactive practice sessions, best practice resources, and progress tracking or the like for the subject matter experts; and,
an exam proctoring module (105) configured to:
- analyse the real-time data to identify potential academic integrity breaches by using one or more combinations of facial cues, sustained eye-gaze shifts away from the exam interface, anomalous audio events indicating high stress potentially unrelated to exam content, or the like;
- differentiate between normal exam stress/cognitive load and patterns indicative of external assistance, use of unauthorised materials, or impersonation, wherein the process identifies subtle cues like micro-expressions associated with deception or unusual interaction patterns with the device/environment;
- provide detailed information, including time-stamps, specific data points from all relevant sensors (video, audio and screen activity), and the AI's confidence level in the anomaly;
- provide real-time audio or text alerts to the user if unfavourable activity is detected;
- execute a temporary exam pause or advise the human proctor based on a high-confidence anomaly;
- provide a comprehensive integrity report comprising an organized summary of all detected anomalies, precise time-stamps for each event, the assessment result, and the specific type of anomaly identified; and,
- provide access to the relevant video clips, audio recordings, and screen activity logs directly linked to the anomaly.
2. The integrated assistive system for teaching and learning as claimed in claim 1, wherein the system (100) provides a dynamic response using the AI engine with respect to the user's teaching and learning conditions, rather than merely selecting a response from the library/memory.
3. The integrated assistive system for teaching and learning as claimed in claim 1, wherein the learner state assessment module (102) measures the facial expressions by analysing subtle cues to gauge engagement, confusion, frustration, or triumph, wherein the said module also measures the vocal tonality by detecting shifts in pitch, pace, and volume that indicate cognitive load, confidence, or boredom, wherein the said module also tracks the eye pattern by monitoring gaze fixation, saccades, and pupil dilation to understand attention, processing effort, and areas of interest or difficulty.
4. The integrated assistive system for teaching and learning as claimed in claim 1, wherein the subject matter expert support module is also configured to improve vocal modulation and enhance screen presence using AI-engine-powered immediate feedback and guidance features.
5. The integrated assistive system for teaching and learning as claimed in claim 1, wherein the said audio or text alert may include the term “please ensure your attention remains on the screen” or the like.
6. The integrated assistive system for teaching and learning as claimed in claim 1, wherein the said processor (106) is operatively coupled with one or more internet connections in wired or wireless mode for getting assistance from the AI engine to generate dynamic and customized results.
7. A method of operation of the assistive system (100) for teaching and learning, comprising:
Receiving, by a processor, the multi-sensor data (101) from the user end;
Analysing, by a learner state assessment module (102), the multi-sensor data for profiling of a learner’s complex cognitive and emotional states by using one or more combinations of facial expressions, vocal tonality and eye-tracking patterns;
Identifying, by a learner state assessment module, the points of learning friction or disengagement by using one or more indications, wherein the said one or more emotional state indications may comprise engaged flow, constructive struggle, cognitive overwhelm, boredom, frustration, a eureka moment or the like;
Generating, by a dynamic co-creation and optimization module (103), the contents in one or more categories based on the condition of the user, wherein the said one or more categories may comprise simplified analogies, targeted quizzes, shifts in content modality, contextual hints or prompts, or positive reinforcement, wherein the one or more categories will be injected continuously into the learning pathway by the said module based on the personalized information identified by the learner state assessment module;
Creating, by a content creation module (104), the education-related contents by the subject matter experts using the multi-sensor data;
Analysing, by an analytical module (104A), the subject matter expert’s presentation style in one or more dimensions, wherein the said one or more dimensions may comprise the clarity of explanation, pacing and flow, vocal engagement, screen presence and visual communication, content structuring and organization, audience comprehension indicators or the like;
Providing, by an SME support module (104B), the personalized support to the subject matter expert for content recording, dynamic feedback, skill enhancement and content optimization; and,
Proctoring, by an exam proctoring module (105), the user using the multi-sensor data for analysing the breaches, differentiating the stress/load, storing of logs, alerting the user, generating report and providing feedback.

Documents

Application Documents

# Name Date
1 202541093285-STATEMENT OF UNDERTAKING (FORM 3) [29-09-2025(online)].pdf 2025-09-29
2 202541093285-REQUEST FOR EXAMINATION (FORM-18) [29-09-2025(online)].pdf 2025-09-29
3 202541093285-REQUEST FOR EARLY PUBLICATION(FORM-9) [29-09-2025(online)].pdf 2025-09-29
4 202541093285-FORM-9 [29-09-2025(online)].pdf 2025-09-29
5 202541093285-FORM FOR STARTUP [29-09-2025(online)].pdf 2025-09-29
6 202541093285-FORM FOR SMALL ENTITY(FORM-28) [29-09-2025(online)].pdf 2025-09-29
7 202541093285-FORM 18 [29-09-2025(online)].pdf 2025-09-29
8 202541093285-FORM 1 [29-09-2025(online)].pdf 2025-09-29
9 202541093285-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [29-09-2025(online)].pdf 2025-09-29
10 202541093285-DRAWINGS [29-09-2025(online)].pdf 2025-09-29
11 202541093285-DECLARATION OF INVENTORSHIP (FORM 5) [29-09-2025(online)].pdf 2025-09-29
12 202541093285-COMPLETE SPECIFICATION [29-09-2025(online)].pdf 2025-09-29
13 202541093285-Proof of Right [16-10-2025(online)].pdf 2025-10-16