
Teaching Assistive Device

Abstract: A teaching assistive device comprising a housing 101 that incorporates four motorized omnidirectional wheels 102 on telescopic rods 103 for locomotion; a touch-enabled display 104 that shows a dynamic teacher schedule, updated by a scheduler using an attendance database to assign substitutes via a communication unit; an artificial intelligence based imaging unit 105 on a telescopic link 106, synchronized with a microphone 107, that records classes; an articulated artificial intelligence imaging sensor 108 with facial recognition that detects student inattentiveness, triggering a speaker 110 for teacher alerts; an assessment unit 111 that analyzes answer sheets; an artificial intelligence-based camera 112 that detects obstacles; an STT module that converts speech to text for memory storage; a chamber 111a with an OCR sensor 111c on a dual-axis lead screw arrangement 111d and a pair of articulated telescopic grippers 111e for scanning; and a holographic projection unit 109 that generates 3D teaching visuals via teacher voice commands.


Patent Information

Application #
Filing Date
27 May 2025
Publication Number
25/2025
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

Marwadi University
Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.

Inventors

1. Tvisha Gami
Department of Information and Communication Technology, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.
2. Chandrasinh D Parmar
Department of Information and Communication Technology, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.
3. Nishith Kotak
Department of Information and Communication Technology, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.

Specification

Description:

FIELD OF THE INVENTION

[0001] The present invention relates to a teaching assistive device that is capable of implementing comprehensive recording capabilities for classroom sessions by capturing both visual and auditory information for subsequent review and course improvement.

BACKGROUND OF THE INVENTION

[0002] Teaching assistance in the classroom addresses several key needs. It automates tasks like schedule management and assessment, freeing up teacher time. The device enhances student engagement through interactive elements and monitors attentiveness, allowing for timely intervention. By recording lessons and offering 3D visualizations, it caters to diverse learning styles and provides valuable resources for review and course improvement, ultimately aiming for a more efficient and effective learning environment. Teaching presents numerous challenges, including managing diverse learning needs and maintaining student engagement. Classroom management and discipline are demanding, alongside the heavy workload of grading and administrative tasks. Effectively communicating with parents and adapting to changing educational trends and technologies also poses ongoing difficulties for educators.

[0003] Traditionally used devices for teaching include blackboards/whiteboards, textbooks, projectors, and physical models. Blackboards and whiteboards require manual writing and drawing, making dynamic content delivery and automated recording difficult. Textbooks, while providing structured information, lack interactivity and adaptability for automated personalized learning. Projectors, often used for presentations, require manual setup and control, and integrating them into automated means for dynamic content based on real-time student interaction is complex. Physical models, while helpful for visualization, are static and not easily adaptable or controllable by automated means for varied demonstrations or simulations. These traditional tools generally lack the inherent digital interfaces and responsiveness necessary for seamless integration into automated teaching environments.

[0004] CN201638405U discloses a teaching robot, which includes a camera, a touch screen microcomputer controller arranged under the camera, and a speaker built in the touch screen controller. The camera is connected with the touch screen microcomputer controller through an external USB cable. The touch screen microcomputer controller is connected with the power lead. The teaching robot of the utility model is simple, convenient and intelligent in operation, can solve puzzles for students in time, saves a lot of spare time for teachers, and reduces the teaching burden of teachers.

[0005] CN105390038A relates to a classroom multifunctional timely feedback system. The device comprises a computer equipped with teaching interaction software for allowing teaching interaction to be carried out between teachers and students, a host for receiving signals emitted by a teacher terminal or a student terminal and inputting the signals to the computer, the student terminal for finishing communication with the teachers through information interaction with the host, the teacher terminal for finishing communication with the students through information interaction with the host, and a cloud service platform realizing data exchange with the computer through the internet, and used for storing the teaching data and study information of the students to allow the teachers and the students to download and use the teaching data. The classroom multifunctional timely feedback device realizes interaction of real-time teaching tests and the like between the teachers and the students in class so as to enable teacher-student interaction to be involved by everybody instead of a minority of people in the past, thereby stimulating learning enthusiasm of all students and improving quality of the teacher-student classroom interaction; and the device can enable the teachers to know the learning and absorption information of all students comprehensively and truly in real time.

[0006] Conventionally, many devices have been available in the market for assisting in teaching. However, these existing devices lack comprehensive integration of autonomous navigation, dynamic scheduling with automated substitution, synchronized multimedia recording, student inattentiveness detection with real-time alerts, automated assessment with physical document handling, and interactive holographic projections within a single, mobile platform. While existing solutions may address individual aspects such as classroom recording or interactive whiteboards, they do not offer the synergistic combination of these advanced features for a truly intelligent and versatile teaching assistant.

[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a device capable of providing a truly integrated and autonomous teaching assistance solution. The developed device needs to seamlessly combine functionalities such as self-navigating mobility within the classroom, dynamic and automated management of teacher schedules and substitutions, comprehensive recording and analysis of classroom sessions, real-time monitoring of student engagement with timely feedback, and efficient, automated assessment of physical student work.

OBJECTS OF THE INVENTION

[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.

[0009] An object of the present invention is to develop a device that achieves autonomous locomotion within a classroom setting, thus enabling itself to navigate and position itself as required without direct manual control.

[0010] Another object of the present invention is to develop a device that is capable of managing teacher schedules, including dynamic updates and the seamless assignment of substitute teachers to prevent class disruptions.

[0011] Another object of the present invention is to develop a device that is capable of implementing comprehensive recording capabilities for classroom sessions by capturing both visual and auditory information for subsequent review and course improvement.

[0012] Yet another object of the present invention is to develop a device that is capable of analyzing and scoring student-submitted answer sheets for streamlining assessment process for educators.

[0013] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.

SUMMARY OF THE INVENTION

[0014] The present invention is directed towards a teaching assistive device that is capable of autonomously navigating classrooms, intelligently managing teacher schedules and substitutions, recording and analyzing lesson content, monitoring student attentiveness with automated feedback, facilitating efficient assessment of student work, and providing interactive holographic learning aids, thereby revolutionizing the educational experience for both teachers and students.

[0015] According to an embodiment of the present invention, a teaching assistive device comprises: a housing having four motorised omnidirectional wheels installed underneath the housing by means of telescopic rods for locomotion of the housing; a touch-enabled display attached to the housing to display a dynamic schedule of teachers; a scheduler that updates the schedule in accordance with a database containing attendance data of the teachers, to prevent cancellation of classes by assigning substitute teachers; a communication unit provided in the housing that generates notifications for substitute teachers regarding classes assigned to them, in accordance with the scheduler; an artificial intelligence based imaging unit mounted over the housing by means of a telescopic link, in synchronisation with a microphone mounted on the housing, that records classes, wherein visual data and spoken data are saved onto a memory connected with a microcontroller, to enable planning and tracking of the course; an artificial intelligence based imaging sensor configured with a facial recognition module, mounted over the housing in an articulated manner, that records faces of the students to determine inattentiveness; a speaker mounted on the housing to provide an audio alert to the teacher; and an assessment unit provided in the housing for analysing answer sheets submitted by students.

[0016] According to another embodiment of the present invention, the device further comprises an artificial intelligence based camera mounted on the housing and integrated with a processor to capture visuals in the vicinity of the housing, to determine the presence of obstacles; the database is accessed by means of a communication unit installed with the microcontroller; an STT (speech to text) module configured with the microcontroller converts the spoken matter into text to be saved in the memory; the scheduler, based on the completed course, generates a schedule for the remaining coursework as per the remaining teaching days; the scheduler analyses factors responsible for delay in completion of the course to generate suggestions for completing the course within time; the assessment unit comprises a chamber disposed within the housing, with an opening carved along a surface of the housing to input answer sheets, wherein an OCR (optical character recognition) sensor installed within the chamber by means of a dual axis lead screw arrangement extracts text from the submitted answer sheets; a pair of articulated telescopic grippers are provided within the chamber for turning the answer sheets while scanning; the assessment unit scores the answer sheets; and a holographic projection unit is mounted on the housing to generate 3D visualisations to assist with teaching, based on voice commands from the teacher.

[0017] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates an isometric view of a teaching assistive device.

DETAILED DESCRIPTION OF THE INVENTION

[0019] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.

[0020] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.

[0021] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.

[0022] The present invention relates to a teaching assistive device that is capable of autonomously navigating educational environments and dynamically managing teacher schedules with automated substitution. In addition, the device records and archives classroom sessions for enhanced course planning and tracking, monitors student engagement through facial recognition with real-time feedback, and automates the analysis and scoring of student assessments.

[0023] Referring to Figure 1, an isometric view of a teaching assistive device is illustrated, comprising, a housing 101 having four motorised omnidirectional wheels 102 installed underneath the housing 101 by means of telescopic rods 103, a touch enabled display 104 attached with the housing 101, an artificial intelligence based imaging unit 105 mounted over the housing 101 by means of a telescopic link 106, a microphone 107 mounted on the housing 101, an artificial intelligence based imaging sensor 108 mounted over the housing 101, a holographic projection unit 109 is mounted on the housing 101.

[0024] Figure 1 further illustrates a speaker 110 mounted on the housing 101, an assessment unit 111 provided in the housing 101, the assessment unit 111 comprises a chamber 111a disposed within the housing 101, an opening 111b carved along a surface of the housing 101, an OCR (optical character recognition) sensor 111c installed within the chamber 111a by means of a dual axis lead screw arrangement 111d, a pair of articulated telescopic grippers 111e are provided within the chamber 111a, and an artificial intelligence based camera 112 is mounted with the housing 101.

[0025] The device disclosed herein includes a housing 101 developed to be positioned over a ground surface in proximity to users such as students and teachers, for providing teaching assistance to students. The housing 101 is rectangular in shape, thus ensuring stability during use. The housing 101 incorporates all the components of the device required for assisting the users with their teaching, including a touch-enabled display 104 arranged on the housing 101 to display a dynamic schedule of teachers.

[0026] The touch-enabled display 104, also referred to as a touchscreen, is an electronic visual display that detects the presence and location of a touch within the display area. In a preferred embodiment of the present invention, the touch-enabled display 104 typically consists of several layers, including a display panel, a touch sensor, and a controller.

[0027] The display panel is the outermost layer and is responsible for displaying the visual information, which is a liquid-crystal display (LCD). Beneath the display panel lies the touch sensor, which is usually a transparent conductive material, such as indium tin oxide (ITO). The touch sensor is divided into a grid of rows and columns, with each intersection forming a unique touch point.

[0028] When the user touches the display 104, the user's finger acts as a conductor, allowing a small current to flow at the point of touch. This causes a change in the electrical signal at that point, which is detected by the touch sensor. The touch sensor sends this information to the controller, which interprets the data and determines the exact location of the touch. It also filters out any noise or interference that may be present in the signal. Once the controller has determined the touch location, it sends this information to a microcontroller of the device. The microcontroller functions as a central processing unit of the device, executing programmed instructions to control its operations, manage inputs and outputs, and coordinate various components for seamless functionality.
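By way of a non-limiting illustration, the controller's row/column scan described above may be sketched as follows (an illustrative Python sketch only; the function name, signal values, and noise threshold are assumptions for exposition, not part of the claimed device):

```python
def locate_touch(rows, cols, threshold=0.5):
    """Toy version of the controller's job: scan per-row and per-column
    sensor readings (e.g. capacitance deltas) and report the grid
    intersection with the strongest signal, or None when every reading
    is below the noise threshold (noise filtering)."""
    r = max(range(len(rows)), key=lambda i: rows[i])
    c = max(range(len(cols)), key=lambda j: cols[j])
    if rows[r] < threshold or cols[c] < threshold:
        return None  # treat weak signals as noise, not a touch
    return (r, c)   # (row index, column index) of the touch point
```

A touch near row 1, column 2 would dominate those readings, so the controller reports that intersection to the microcontroller.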

[0029] The dynamic schedule is managed by a scheduler that updates based on teacher attendance data stored in a database linked with the microcontroller, assigning substitutes to prevent class cancellations when the assigned teacher is absent. The scheduler, a component within the microcontroller, operates to maintain and update the teacher schedule dynamically. The scheduler involves data acquisition, where it continuously accesses the database containing teacher attendance data and the existing schedule; rule-based processing, where it applies predefined rules to this data, such as identifying absent teachers based on the attendance records; conflict detection, where it identifies classes affected by teacher absences; substitution logic, where it employs protocols to find suitable substitute teachers based on their availability, subject expertise, and potentially their existing workload; schedule modification, where it updates the schedule by assigning the substitute teacher to the affected class; and notification triggering, where it signals a communication unit to inform the assigned substitute teacher about their new class. The scheduler also incorporates optimization protocols to minimize disruptions and ensure equitable distribution of substitute assignments. Furthermore, it analyzes completed coursework against the allocated teaching days to generate schedules for remaining topics and identify factors causing delays, offering suggestions for timely completion.
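By way of a non-limiting illustration, the substitution logic described above (conflict detection, candidate selection by availability and subject expertise, equitable workload, schedule modification, notification triggering) may be sketched as follows; the data model and names are illustrative assumptions, not part of the claimed device:

```python
from dataclasses import dataclass

@dataclass
class Teacher:
    name: str
    subjects: set       # subjects this teacher can cover
    present: bool       # from the attendance database
    load: int = 0       # substitute classes already assigned today

def assign_substitutes(schedule, teachers):
    """For every class whose teacher is absent, pick an available
    substitute who knows the subject and has the lightest substitute
    workload (equitable distribution); mutate the schedule and return
    the notifications to be pushed via the communication unit."""
    notifications = []
    for cls in schedule:  # cls: {"slot": ..., "subject": ..., "teacher": ...}
        if teachers[cls["teacher"]].present:
            continue  # no conflict for this class
        candidates = [t for t in teachers.values()
                      if t.present and cls["subject"] in t.subjects]
        if not candidates:
            continue  # leave the gap for manual resolution
        sub = min(candidates, key=lambda t: t.load)
        sub.load += 1
        cls["teacher"] = sub.name  # schedule modification
        notifications.append((sub.name, cls["slot"], cls["subject"]))
    return notifications
```

Each returned tuple corresponds to one notification the communication unit would deliver to the substitute's computing unit.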

[0030] The communication unit that notifies substitute teachers is activated by the microcontroller. In an embodiment of the present invention, a wireless connection is established between the microcontroller and a computing unit (including, but not limited to, a smartphone, tablet, or laptop) with an inbuilt user interface that is accessed by the substitute teachers to receive notifications regarding the classes assigned to them.

[0031] The communication module used herein includes, but is not limited to, a Wi-Fi (Wireless Fidelity) module, a Bluetooth module, and a GSM (Global System for Mobile Communications) module. The communication module used herein is preferably a Wi-Fi module, a hardware component that enables the microcontroller to connect wirelessly with the computing unit. The Wi-Fi module works by utilizing radio waves to transmit and receive data over short distances. The core functionality relies on the IEEE 802.11 standards, which define the protocols for wireless local area networking (WLAN). Once connected, the module allows the microcontroller to send and receive data through data packets.

[0032] During the classes, the microcontroller activates an artificial intelligence-based imaging unit 105 installed on the housing 101 via a telescopic link 106 to record visuals of sessions held nearby. The imaging unit 105 includes an image-capturing module comprising a set of lenses that capture visuals of the classes conducted in proximity to the housing 101. The recorded footage is stored in the memory of the imaging unit 105 as optical data. Additionally, the imaging unit 105 features a processor embedded with artificial intelligence protocols. This processor analyzes the optical data, extracts relevant information from the captured images, and converts the extracted data into digital pulses and bits, which are then transmitted to the microcontroller.

[0033] For efficient recording, the microcontroller actuates the telescopic link 106 to adjust the position of the imaging unit 105 for recording from the different angles. The extension/retraction of the link 106 is powered pneumatically by the microcontroller by employing a pneumatic unit associated with the link 106, including an air compressor, air cylinders, air valves and piston which works in collaboration to aid in extension and retraction of the link 106. The pneumatic unit is operated by the microcontroller, such that the microcontroller actuates valve to allow passage of compressed air from the compressor within the cylinder, the compressed air further develops pressure against the piston and results in pushing and extending the piston. The piston is connected with the link 106 and due to applied pressure, the link 106 extends and similarly, the microcontroller retracts the link 106 by closing the valve resulting in retraction of the piston. Thus, the microcontroller regulates the extension/retraction of the link 106 in order to position the imaging unit 105 efficiently for enabling quality recording of the sessions.

[0034] In synchronization with the imaging unit 105, the microcontroller activates a microphone 107 installed on the housing 101 to record the audio of the sessions. This microphone 107 contains a small diaphragm connected to a moving coil. When the sound waves of the teacher's voice hit the diaphragm, the coil vibrates, moving back and forth within a magnetic field and generating an electrical current. This electrical signal is then sent to the microcontroller. The visual data from the imaging unit 105, together with the corresponding spoken data converted to text via an STT (speech to text) module configured with the microcontroller, is then saved onto a memory connected to the microcontroller, enabling course planning and tracking.

[0035] The Speech-to-Text (STT) module works through a multi-stage process to transcribe spoken audio into written text. The module first receives an audio signal captured by the microphone 107. This analog signal is then converted into a digital format through a process called acoustic encoding, where the continuous sound waves are sampled at regular intervals and represented as a sequence of numerical values. These digital representations are then fed into an acoustic model, which is trained on vast amounts of speech data and their corresponding phonetic transcriptions. The acoustic model breaks down the audio into smaller phonetic units (phonemes) and determines the probability of each unit occurring at specific points in the audio stream. Following this, a lexicon or pronunciation dictionary is consulted, which contains the likely pronunciations of words. Finally, a language model, also trained on extensive text corpora, analyzes the sequence of likely phonemes and words, considering grammatical rules, contextual probabilities, and common word sequences to determine the most probable and coherent textual output. This output, representing the transcribed speech, is then made available for further processing or storage.
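By way of a non-limiting illustration, the final decoding stage described above (acoustic candidates rescored by a language model) may be sketched as follows; this greedy toy decoder, its data shapes, and its probabilities are illustrative assumptions only, far simpler than a production STT system:

```python
import math

def decode(acoustic, bigram, start="<s>"):
    """Toy final decoding stage of an STT pipeline: at each time step the
    acoustic model proposes candidate words with probabilities; a bigram
    language model rescores them given the previous word, and the jointly
    most probable word is kept (greedy, not full Viterbi search)."""
    out, prev = [], start
    for candidates in acoustic:  # e.g. [{"write": 0.4, "right": 0.6}, ...]
        best = max(candidates,
                   key=lambda w: math.log(candidates[w])
                               + math.log(bigram.get((prev, w), 1e-6)))
        out.append(best)
        prev = best
    return " ".join(out)
```

Even when the acoustic model slightly prefers a homophone, the language-model term can tip the choice toward the contextually likely word.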

[0036] An artificial intelligence-based imaging sensor 108 is installed over the housing 101 in an articulated manner and is activated by the microcontroller to record the faces of the students in order to determine inattentiveness via a facial recognition module configured with the imaging sensor 108. The imaging sensor 108, typically a camera (visible light or infrared), captures real-time video frames of the classroom. These frames are then processed by face detection protocols within the module to locate and isolate the faces of students present in the scene. Once faces are detected, a facial landmark detection component identifies key points on each face, such as the corners of the eyes, the tip of the nose, and the contours of the mouth and jawline. These landmarks are then used by a feature extraction component to create a unique numerical representation or "faceprint" for each student.

To determine inattentiveness, the facial recognition module analyzes these faceprints and the movement of the identified facial landmarks over time. Specific patterns of movement, or the lack thereof, such as prolonged gaze aversion (not looking towards the teacher or the front), excessive head tilting or drooping, closed eyes, or repeated fidgeting without visual engagement, are indicative of inattentiveness. The artificial intelligence, trained on data correlating facial cues with attentiveness levels, processes these patterns. If a student's facial cues and movement patterns match the learned characteristics of inattentiveness for a certain duration or intensity, the facial recognition module sends a signal to the microcontroller. This signal then triggers a speaker 110 mounted on the housing 101 to emit an audio alert to the teacher, drawing their attention to the potentially inattentive student.
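By way of a non-limiting illustration, the duration-based triggering described above (a cue must persist for a certain number of frames before an alert fires) may be sketched as follows; the cue names and frame threshold are illustrative assumptions, not part of the claimed device:

```python
def update_inattentive(states, frame_cues, threshold=30):
    """Track, per student, the count of consecutive frames showing an
    inattentive cue (eyes closed, gaze averted, head drooping).  When a
    student's count reaches the threshold, flag them so the speaker alert
    can be triggered, then reset that student's counter."""
    alerts = []
    for student, cues in frame_cues.items():
        inattentive = (cues.get("eyes_closed") or cues.get("gaze_averted")
                       or cues.get("head_drooping"))
        # consecutive-frame counter: increment while the cue persists,
        # reset to zero the moment the student re-engages
        states[student] = states.get(student, 0) + 1 if inattentive else 0
        if states[student] >= threshold:
            alerts.append(student)
            states[student] = 0  # avoid repeated alerts every frame
    return alerts
```

Requiring the cue to persist across frames filters out momentary glances away, so the speaker is not triggered by a single blink.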

[0037] The speaker 110 works by converting the electrical signal into an audio signal. The speaker 110 consists of a cone known as a diaphragm attached to a coil of wire placed between two magnets. When the electric signal is passed through the voice coil, the coil generates a varying magnetic field that interacts with the magnets, causing the diaphragm to move back and forth. The movement of the diaphragm pushes and pulls air, creating sound waves corresponding to the electrical signal received, which are used to provide the audio alert to the teacher.

[0038] For example, this device assesses if the teacher is teaching slowly, if students are having trouble understanding the topic, or if there are too many class disruptions (like noise, students talking, etc.). After analyzing the cause of any identified problems, the robot makes smart suggestions to help fix them. For instance, if the teacher's pace is too fast or too slow, it suggests adjusting the teaching speed. If students are not understanding the material, it recommends using more visual tools like 3D projections, educational videos, and interactive quizzes. The robot then prepares a detailed report showing what topics were covered, which subjects are ahead or behind schedule, student attention patterns, teaching pace and quality, and suggested improvements.

[0039] In an embodiment of the present invention, a ball and socket joint is arranged with the imaging sensor 108 and controlled by the microcontroller to allow the imaging sensor 108 to capture all the students efficiently. The ball and socket joint provides rotation to the imaging sensor 108, aiding it in turning at a required angle. The ball and socket joint is a coupling consisting of a ball joint securely locked within a socket joint, where the ball joint is able to move in 360-degree rotation within the socket, thus providing the required rotational motion to the imaging sensor 108. The ball and socket joint is powered by a DC (direct current) motor that is actuated by the microcontroller, thus providing multidirectional movement to the imaging sensor 108.

[0040] If the teacher provides input commands regarding 3D (three-dimensional) visualisations via the computing unit, the microcontroller actuates a holographic projection unit 109 mounted on the housing 101 to generate 3D visualisations to assist with teaching. The holographic projection unit 109 generates 3D visualizations for teaching by employing principles of light diffraction and interference. The unit 109 typically contains a light source, often a laser or an array of LEDs, which illuminates a spatial light modulator (SLM). The SLM is a dynamic optical element that manipulates the phase and amplitude of the incoming light waves according to a digitally encoded holographic pattern. This pattern, calculated from the desired 3D object, encodes how the light is bent and interfered to reconstruct the image. The modulated light then passes through a series of lenses and mirrors that further shape and direct the light waves into space. When these manipulated light waves interfere with each other in the viewing area, they reconstruct the light field that would have been emitted by a real 3D object, creating the illusion of a three-dimensional image floating in mid-air. The viewer perceives this image from different angles, experiencing a sense of depth and spatial presence without the need for special glasses. The holographic pattern displayed on the SLM is rapidly updated, allowing for dynamic 3D visualizations that respond to voice commands or input commands via the computing unit.

[0041] For assessments, the students submit their answer sheets into an assessment unit 111 provided in the housing 101, which comprises a chamber 111a arranged within the housing 101 with an opening 111b carved along a surface of the housing 101 for inputting the answer sheets. As soon as a student submits an answer sheet into the assessment unit 111, as detected via the imaging unit 105, the microcontroller actuates a pair of articulated telescopic grippers 111e installed within the chamber 111a to hold the submitted sheets and turn the answer sheets while scanning.

[0042] The extension/retraction of the telescopic grippers 111e is regulated by the microcontroller by employing the pneumatic unit associated with the grippers, in the same manner as the telescopic link 106 works, in order to grip the answer sheets and turn them during scanning via an OCR (optical character recognition) sensor 111c arranged within the chamber 111a by means of a dual axis lead screw arrangement 111d. The grippers 111e and the OCR sensor 111c are activated by the microcontroller in synchronization to extract text from the submitted answer sheet.

[0043] The OCR (Optical Character Recognition) sensor 111c works to extract text from the submitted answer sheet through a sequence of operational components. First, an illumination source within the sensor 111c shines light onto the answer sheet. A scanner or camera then captures a digital image of the sheet. This image undergoes pre-processing, which involves steps like noise reduction, deskewing (correcting for tilt), contrast adjustment, and binarization (converting to black and white) to enhance the clarity of the text.

[0044] Next, a text localization or segmentation component identifies regions within the image that contain text and separates individual lines, words, and characters. The core of the process is character recognition, which typically employs one or both of the following techniques: pattern matching, where isolated character images (glyphs) are compared to a database of known fonts and characters, or feature extraction, where distinctive features of each character (lines, curves, loops, intersections) are identified and compared to stored abstract representations.

[0045] The OCR sensor 111c utilizes artificial intelligence and machine learning models, particularly neural networks, trained on vast datasets of text and images to improve accuracy and handle variations in fonts, handwriting, and image quality. After recognition, a post-processing stage applies techniques like spell-checking, contextual analysis using a language model, and formatting to improve the accuracy and readability of the extracted text. Finally, the recognized text is output as digital data, ready for analysis by the assessment unit 111. The OCR sensor 111c is moved by the dual axis lead screw arrangement 111d, which is controlled by the microcontroller in sync with the OCR sensor 111c to scan different parts of the answer sheet; the extracted text is then scored and saved to student profiles created by the teacher via the computing unit.
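By way of a non-limiting illustration, the scoring of the OCR-extracted text may be sketched as follows; the keyword-coverage rubric, data shapes, and names here are illustrative assumptions only, since the specification does not define the scoring method:

```python
import re

def score_answer(extracted_text, answer_key):
    """Toy rubric applied after OCR post-processing: for each question,
    award marks proportional to how many of its expected keywords appear
    in the recognized answer text."""
    scores = {}
    # tokenize the OCR output into lowercase words
    words = set(re.findall(r"[a-z]+", extracted_text.lower()))
    for question, (keywords, marks) in answer_key.items():
        hits = sum(1 for k in keywords if k in words)
        scores[question] = round(marks * hits / len(keywords), 1)
    return scores
```

The resulting per-question scores could then be written to the student profile maintained via the computing unit.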

[0046] The dual axis lead screw arrangement 111d utilizes two lead screws to control the movement and positioning of the OCR sensor 111c in two axes. The arrangement 111d comprises a pair of lead screws positioned perpendicular to each other, each with its own corresponding nut assembly. Each lead screw is driven by a dedicated motor, allowing independent control and movement of the OCR sensor 111c for scanning different parts of the answer sheet.
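The positioning arithmetic for such an arrangement is simple: the nut's linear travel equals the number of screw revolutions times the screw lead. The step counts and lead value below are assumed figures for illustration only.

```python
def carriage_position_mm(steps: int, steps_per_rev: int, lead_mm: float) -> float:
    """Linear travel of a lead-screw nut: revolutions times screw lead."""
    return steps / steps_per_rev * lead_mm

# Assumed example: a 200-step/rev stepper motor on a 2 mm lead screw.
x = carriage_position_mm(400, 200, 2.0)  # first axis: 2 revolutions -> 4 mm
y = carriage_position_mm(100, 200, 2.0)  # perpendicular axis: half a revolution
```

Driving the two motors independently thus places the sensor at any (x, y) position over the answer sheet.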

[0047] In addition, if the teacher needs to relocate the housing 101 in the classroom, the user provides commands regarding relocation of the housing 101 via the computing unit. Upon processing the user input command via the computing unit, the microcontroller actuates four motorised omnidirectional wheels 102 arranged underneath the housing 101 to move the housing 101 to the user-desired location. Each omnidirectional wheel 102 consists of a wheel connected to a motor via a shaft and is engineered to allow movement in any direction without altering the housing's orientation, providing exceptional maneuverability. When the microcontroller actuates the wheels 102, each motor rotates either clockwise or counter-clockwise, transferring motion through the shaft to the wheel. This enables the housing 101 to move smoothly in any direction, making the wheels 102 highly effective for relocating and precisely positioning the housing 101.
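Motion in any direction without changing orientation is achieved by mixing the commanded body velocities into individual wheel speeds. The sketch below uses one common sign convention for four mecanum-style omnidirectional wheels; conventions and the geometry constant k vary between platforms, so treat the signs and values as assumptions.

```python
def omni_wheel_speeds(vx: float, vy: float, wz: float, k: float = 0.3):
    """Inverse kinematics for four omnidirectional wheels.

    vx: forward speed, vy: sideways (strafe) speed, wz: rotation rate,
    k: half the wheelbase plus half the track width (assumed, in metres).
    Returns (front_left, front_right, rear_left, rear_right) wheel speeds.
    """
    return (vx - vy - wz * k,
            vx + vy + wz * k,
            vx + vy - wz * k,
            vx - vy + wz * k)

# Pure sideways translation: the housing strafes without changing heading.
speeds = omni_wheel_speeds(0.0, 0.5, 0.0)
```

With a pure strafe command the left and right wheel pairs spin in opposite senses, so the housing translates sideways while its orientation is preserved.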

[0048] The telescopic rods 103 (preferably four to six in number) are arranged between the wheels 102 and the housing 101 and are controlled by the microcontroller, on the command of the user via the computing unit, to extend and retract as needed. Each rod is powered by the pneumatic unit associated with the device and extends and retracts in the same manner as the telescopic link 106 described earlier.

[0049] While moving, the microcontroller activates an artificial intelligence based camera 112 installed with the housing 101 and integrated with a processor to capture visuals in the vicinity of the housing 101 for determining the presence of obstacles. The artificial intelligence-based camera 112 works in the same manner as the imaging unit 105 disclosed above for detecting the presence of obstacles. Upon detection of an obstacle, the microcontroller regulates the actuation of the wheels 102 to translate the housing 101 while avoiding collision.

[0050] Lastly, a battery (not shown in figure) is associated with the device to supply power to the electrically powered components employed herein. The battery comprises a pair of electrodes, a cathode and an anode. The battery uses oxidation/reduction chemical reactions to do work on charge and produce a voltage between its anode and cathode, thereby producing the electrical energy used to do work in the device.

[0051] The present invention works best in the following manner. The housing 101, equipped with four motorized omnidirectional wheels 102 on telescopic rods 103, achieves locomotion. The touch-enabled display 104 shows the dynamic schedule of teachers, which is updated by the scheduler using the database containing teacher attendance data to assign substitute teachers and prevent class cancellations. The communication unit generates notifications for substitute teachers based on the scheduler. The artificial intelligence based imaging unit 105, mounted over the housing 101 via the telescopic link 106 and synchronized with the microphone 107, records classes and saves visual and spoken data onto the memory connected with the microcontroller for course planning and tracking. The artificial intelligence based imaging sensor 108, configured with the facial recognition module and mounted over the housing 101 via a ball and socket joint in an articulated manner, records student faces to determine inattentiveness and actuates the speaker 110 for audio alerts to the teacher.

[0052] In continuation, the assessment unit 111, featuring the chamber 111a with the opening 111b, analyzes submitted answer sheets using the OCR sensor 111c moved by the dual axis lead screw arrangement 111d. The pair of articulated telescopic grippers 111e within the chamber 111a facilitates turning of the answer sheets during scanning. The assessment unit 111 scores answer sheets and saves the scores onto individual student profiles for progress tracking. The holographic projection unit 109, mounted on the housing 101, generates 3D visualizations based on voice commands from the teacher to aid instruction. The artificial intelligence based camera 112, integrated with the processor, captures visuals to detect obstacles and triggers the microcontroller to actuate the wheels 102 and the telescopic rods 103 for collision avoidance. The STT module configured with the microcontroller converts spoken matter into text for saving in the memory. The scheduler generates schedules for remaining coursework based on completed material and remaining teaching days and analyzes delay factors to suggest completion strategies. The database is accessed via the communication unit installed with the microcontroller.

[0053] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.
Claims:

1) A teaching assistive device, comprising:

i) a housing 101 having four motorised omnidirectional wheels 102 installed underneath the housing 101 by means of telescopic rods 103 for a locomotion of the housing 101;
ii) a touch enabled display 104 attached with the housing 101, to display a dynamic schedule of teachers, wherein a scheduler updates the schedule in accordance with a database containing attendance data of the teachers, to prevent cancellation of classes by assigning substitute teachers;
iii) a communication unit provided in the housing 101 generates notifications for substitute teachers regarding classes assigned to them, in accordance with the scheduler;
iv) an artificial intelligence-based imaging unit 105 mounted over the housing 101 by means of a telescopic link 106, in synchronisation with a microphone 107 mounted on the housing 101, records classes wherein visual data and spoken data are saved onto a memory connected with a microcontroller, to enable planning and tracking of the course;
v) an artificial intelligence-based imaging sensor 108 configured with a facial recognition module, mounted over the housing 101 in an articulated manner, records faces of the students to determine inattentiveness and accordingly actuate a speaker 110 mounted on the housing 101 to provide an audio alert to the teacher; and
vi) an assessment unit 111 provided in the housing 101, for analysing answer sheets submitted by students.

2) The device as claimed in claim 1, wherein an artificial intelligence-based camera 112 is mounted with the housing 101 and integrated with a processor to capture visuals in vicinity of the housing 101, to determine presence of obstacles to trigger the microcontroller to actuate the wheels 102 to translate the housing 101 while avoiding collision.

3) The device as claimed in claim 1, wherein the database is accessed by means of a communication unit installed with the microcontroller.

4) The device as claimed in claim 1, wherein an STT (speech to text) module configured with the microcontroller converts the spoken matter into text to be saved in the memory.

5) The device as claimed in claim 1, wherein the scheduler, based on completed course, generates schedule for remaining coursework as per remaining teaching days.

6) The device as claimed in claim 1, wherein the scheduler analyses factors responsible for delay in completion of course to generate suggestions for completing course within time.

7) The device as claimed in claim 1, wherein the assessment unit 111 comprises a chamber 111a disposed within the housing 101, with an opening 111b carved along a surface of the housing 101 to input answer sheets, wherein an OCR (optical character recognition) sensor 111c is installed within the chamber 111a by means of a dual axis lead screw arrangement 111d, for extraction of text from the submitted answer sheet.

8) The device as claimed in claim 1, wherein a pair of articulated telescopic grippers 111e are provided within the chamber 111a for turning of the answer sheets while scanning, wherein the assessment unit 111 scores the answer sheets and saves the scores onto individual profiles of students for tracking of progress.

9) The device as claimed in claim 1, wherein a holographic projection unit 109 is mounted on the housing 101 to generate 3D visualisations to assist with teaching, based on voice command from the teacher.

Documents

Application Documents

# Name Date
1 202521050672-STATEMENT OF UNDERTAKING (FORM 3) [27-05-2025(online)].pdf 2025-05-27
2 202521050672-REQUEST FOR EXAMINATION (FORM-18) [27-05-2025(online)].pdf 2025-05-27
3 202521050672-REQUEST FOR EARLY PUBLICATION(FORM-9) [27-05-2025(online)].pdf 2025-05-27
4 202521050672-PROOF OF RIGHT [27-05-2025(online)].pdf 2025-05-27
5 202521050672-POWER OF AUTHORITY [27-05-2025(online)].pdf 2025-05-27
6 202521050672-FORM-9 [27-05-2025(online)].pdf 2025-05-27
7 202521050672-FORM FOR SMALL ENTITY(FORM-28) [27-05-2025(online)].pdf 2025-05-27
8 202521050672-FORM 18 [27-05-2025(online)].pdf 2025-05-27
9 202521050672-FORM 1 [27-05-2025(online)].pdf 2025-05-27
10 202521050672-FIGURE OF ABSTRACT [27-05-2025(online)].pdf 2025-05-27
11 202521050672-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [27-05-2025(online)].pdf 2025-05-27
12 202521050672-EVIDENCE FOR REGISTRATION UNDER SSI [27-05-2025(online)].pdf 2025-05-27
13 202521050672-EDUCATIONAL INSTITUTION(S) [27-05-2025(online)].pdf 2025-05-27
14 202521050672-DRAWINGS [27-05-2025(online)].pdf 2025-05-27
15 202521050672-DECLARATION OF INVENTORSHIP (FORM 5) [27-05-2025(online)].pdf 2025-05-27
16 202521050672-COMPLETE SPECIFICATION [27-05-2025(online)].pdf 2025-05-27
17 Abstract.jpg 2025-06-12