
A System To Provide Real Time Classroom Events In Remote Classrooms And Method Thereof

Abstract: A system for providing real-time classroom events in remote classrooms. The system comprises at least one Classroom having at least one Instructor and at least one Remote Classroom having at least one Student. The Classroom comprises a Gesture Recognition Unit to capture gestures of the Instructor, an Audio Capturing Unit to capture the Instructor’s voice and a Control Unit, wherein the Control Unit comprises a Processing Unit, a Projection Unit capable of projecting Presentation Slides and a Memory Unit capable of storing the Processing Module and information including the Presentation Slides and the captured audio packets. The Remote Classroom comprises at least one Device capable of displaying the classroom events on the Device to the Student(s), such that the real-time events in the Classroom, comprising gestures of the Instructor with audio packets, are processed to overlay on the Presentation Slides for real-time display on the Device in the Remote Classroom. Figure [1]


Patent Information

Application #:
Filing Date: 07 October 2015
Publication Number: 15/2017
Publication Type: INA
Invention Field: MECHANICAL ENGINEERING
Status:
Email: sunita@skslaw.org
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2024-01-12
Renewal Date:

Applicants

AMRITA VISHWA VIDYAPEETHAM
Amritapuri Campus, Clappana PO 690 525, Kollam, Kerala, India

Inventors

1. NARAYANAN, Ramkumar
AmritaSindhu 001, Amritapuri Ashram, Kollam, Kerala 690 546
2. RANGAN, P.Venkat
VC Quarters, Amrita Vishwa Vidyapeetham, Amrita Nagar PO, Ettimadai, Coimbatore, Tamil Nadu 641 112
3. HARIHARAN, Balaji
MA Math, Kollam, Kerala 690 546

Specification

Claims:
1. A system for providing real-time classroom events in remote classrooms, wherein said system comprises at least one Classroom (C) having at least one Instructor (I) and at least one Remote Classroom (RC) having at least one Student (S), wherein
• said Classroom (C) comprises at least one Gesture Recognition Unit (11000) to capture gestures of said Instructor (I) as sensor feed, at least one Audio Capturing Unit (12000) to capture said Instructor’s (I) voice as audio packets and at least one Control Unit (13000), wherein
• said Control Unit (13000) comprises at least one each of Processing Unit (13200), Projection Unit (13300) capable of projecting Presentation Slides and Memory Unit (13100) capable of storing Processing Module (PM) and information including said Presentation Slides (P) and said captured audio packets,
• said Remote Classroom (RC) comprises at least one Device (D) capable of displaying the classroom events on said Device (D) to said Student (S), such that said real-time events in Classroom (C), comprising gestures of said Instructor (I) with audio packets, are processed to overlay on said Presentation Slides (P) for real-time display on said Device (D) in said Remote Classroom (RC).

2. A system for providing real-time classroom events in remote classrooms as claimed in claim 1, wherein said Processing Module (PM) comprises
• Means of Homography Calibration (13210) to provide calibration of homography of said sensor feed and said board feed in said Processing Unit (13200) to yield a homographic matrix (13214),
• Means of Contour Extraction (13220) to extract the contour of said Instructor (I) from said sensor feed in said Processing Unit (13200) to yield the extracted contour (EC) of said Instructor (I),
• Means of Warping (13230) to overlay said extracted contour (EC) on said board feed to yield the Overlayed Image (OI), and
• Means of Streaming (13240) to stream the series of Overlayed Images (OI1, OI2, OI3 ... OIn) to said Device of said Remote Classroom.

3. A system for providing real-time classroom events in remote classrooms as claimed in Claim 1 wherein said Presentation Slides (P) are input to said Processing Unit (13200) as board feed.

4. A method for providing real-time classroom events in remote classrooms as claimed in claim 1, wherein said method comprises the steps of
a) storing said Presentation Slides (P) in said Memory Unit (13100),
b) projecting said Presentation Slides (P) on said Presentation Screen (10000),
c) providing said Presentation Slides (P) as board feed to said Processing Unit (13200),
d) capturing gestures of said Instructor (I) that overlay on said Presentation Screen as sensor feed,
e) capturing said Instructor’s voice as audio packets and storing the same in said Memory Unit (13100),
f) overlaying said sensor feed on said board feed by said Processing Module (PM) in said Processing Unit (13200), along with the corresponding audio packets retrieved from said Memory Unit (13100), to yield an Overlayed Image (OI),
g) iterating steps (c) to (f) to yield a series of consecutive Overlayed Images (OI1, OI2, OI3 ... OIn),
h) compiling said series of consecutive Overlayed Images to yield an Overlayed Image Feed (OIF) by said Processing Module (PM) in said Processing Unit (13200), and
i) streaming said Overlayed Image Feed (OIF) to said Students (S) on said Device in said Remote Classroom (RC).

5. A method for providing real-time classroom events in remote classrooms as claimed in claim 2, wherein said method of operating said Processing Module (PM) comprises the steps of
• calibrating homography of said sensor feed and said board feed in said Processing Unit (13200) by Means of Homography Calibration (13210) to yield a homographic matrix (13214),
• extracting the contour of said Instructor (I) from said sensor feed in said Processing Unit (13200) by Means of Contour Extraction (13220) to yield the extracted contour (EC) of said Instructor (I),
• overlaying said extracted contour (EC) on said board feed by said Means of Warping (13230) to yield the Overlayed Image (OI), and
• streaming the series of Overlayed Images (OI1, OI2, OI3 ... OIn) to said Device of said Remote Classroom by said Means of Streaming (13240).

6. A system for providing real-time classroom events in remote classrooms, wherein said system comprises at least one Classroom (CC) having at least one Instructor (I) and at least one Remote Classroom (CRC) having at least one Student (CS), wherein
• said Classroom (CC) comprises at least one each of Presentation Screen (CI), Gesture Recognition Unit (C11000), Audio Capturing Unit (C12000), Control Unit (C13000) and Board Recognition Unit (C14000), and
• said Remote Classroom (CRC) comprises at least one Device (CD) and Students (CS), wherein said Device (CD) is capable of displaying the classroom events on its screen.

7. A system for providing real-time classroom events in remote classrooms, wherein said system comprises at least one Classroom (C) having at least one Instructor (I), wherein
• said Classroom (C) comprises at least one Gesture Recognition Unit (11000) to capture gestures of said Instructor (I) as sensor feed, at least one Audio Capturing Unit (12000) to capture said Instructor’s (I) voice as audio packets and at least one Control Unit (13000), wherein
• said Control Unit (13000) comprises at least one each of Processing Unit (13200), Projection Unit (13300) capable of projecting Presentation Slides and Memory Unit (13100) capable of storing Processing Module Storage (PMS) and information including said Presentation Slides (P) and said captured audio packets.
Description:
FIELD OF THE INVENTION:
The present invention relates to a system to provide real-time classroom events to students at distant locations. More specifically, the present invention relates to a system and method to provide students in remote classrooms with a real-time classroom experience, where the instructor’s contour is extracted and overlaid on the presentation slides, enabling students to see the gesture-based cues of the instructor, along with the audio, on the presentation slides in the remote classrooms.

BACKGROUND OF THE INVENTION:
The concept of computer-based education has existed since the early 1980s. During the initial phase, computer-based education was largely limited to imparting education/training through data storage devices, for example CD-ROMs and floppy disks. With the evolution of the internet and portable computing devices, computer-based education has changed completely. Electronic learning (eLearning) commonly means any learning/training outside of a traditional classroom that utilizes electronic devices, e.g. a tablet, computer, PDA or mobile phone, and a network. The basic idea is to facilitate students by replicating the actual classroom at distant locations in the form of remote classrooms. Today, eLearning has become a widely used tool and has found importance across various universities and educational/training institutions.

Classrooms are generally dynamic environments where the instructor utilizes a smart board or presentation screen along with various gestures and voice-based cues in order to teach. The gestures an instructor makes in the classroom are as pervasive as the smart board/presentation screen.
For example, when an instructor/teacher is teaching primary-grade students basic mathematical concepts such as counting numbers of objects, the instructor frequently uses nonverbal behaviors such as pointing, counting on fingers, circling objects with a finger, etc.

Conventional methods replicate the dynamic environment of the classroom in the remote classroom using camera or screen-share methods. The use of a camera, as in video conferencing, allows students in the remote classroom to view the instructor along with his gestures and voice, but the presentation screen on which the instructor is focusing in the classroom is not clearly visible to the students in the remote classroom. Other techniques, such as screen-share methods, only facilitate the presentation of the screen in the remote classroom. A lot of important gesture-based information that the instructor conveys in the classroom is therefore missed by the students in the remote classrooms. Students in the remote classroom have to guess the instructor’s area of focus on the board from audio sources and other contextual means.

An IEEE research paper titled “Robust spatio-temporal matching of electronic slides to presentation videos” discloses a method to automatically align the electronic slides of the presentation screen to the captured video of the instructor. Accordingly, students are able to view the instructor along with the presentation slide but still miss out on the important gestures made by the instructor, for example pointing to specific areas of the board, as the method merely facilitates viewing the instructor on the presentation screen in the remote classroom.

Therefore, there is a need for a system that is able to replicate not only the teaching entity, i.e. the smart board/presentation screen, but also the instructor’s gesture-based information, to provide a real-time classroom experience at distant locations.
OBJECT AND SUMMARY OF THE INVENTION:
The main object of the present invention is to provide a system for real-time classroom events to students at distant locations in remote classrooms.

Another object of the present invention is to provide a system to extract the instructor’s contour, overlay said contour on the presentation slides and stream the same to students in remote classrooms, enabling them to see the gesture-based cues of the instructor.

Yet another object of the present invention is to provide a system that enables students in remote classrooms to view instructor’s gesture aligned on the presentation slides such that the instructor’s gestures are in direct consonance with the gestures made by the instructor with respect to the presentation screen in the classroom.

Yet another object of the present invention is to provide a system where the students in the remote classroom are also provided with the instructor’s audio.

Yet another object of the present invention is to provide a method to extract the instructor’s contour, overlay said contour on the presentation screen and stream the same to students in remote classrooms, enabling them to see the gesture-based cues of the instructor.

Accordingly, the present invention relates to a system to provide a real-time classroom experience to students at distant locations in remote classrooms. More specifically, the present invention provides a system and method to extract the instructor’s contour, overlay it on the presentation slide and stream it to the students in remote classrooms.
The system comprises at least one Classroom having at least one Instructor and at least one Remote Classroom having at least one Student. The Classroom comprises at least one Gesture Recognition Unit to capture gestures of the Instructor as sensor feed, at least one Audio Capturing Unit to capture the Instructor’s voice as audio packets and at least one Control Unit, wherein the Control Unit comprises at least one each of a Processing Unit, a Projection Unit capable of projecting Presentation Slides and a Memory Unit capable of storing the Processing Module and information including the Presentation Slides and the captured audio packets. The Presentation Slides are input to the Processing Unit as board feed. The Remote Classroom comprises at least one Device capable of displaying the classroom events to said Student(s), such that the real-time events in the Classroom, comprising gestures of the Instructor with audio packets, are processed to overlay on the Presentation Slides for real-time display on the Device in the Remote Classroom.

The Processing Module stored in the Memory Unit comprises Means of Homography Calibration, Means of Contour Extraction, Means of Warping and Means of Streaming. The Means of Homography Calibration provides calibration of the homography of the sensor feed and the board feed in the Processing Unit to yield a homographic matrix. The Means of Contour Extraction extracts the contour of the Instructor from the sensor feed. The Means of Warping overlays the extracted contour on the board feed to yield the Overlayed Image. The Means of Streaming streams the series of Overlayed Images to the Device of the Remote Classroom.

The present invention also provides a method for providing real-time Classroom events in Remote Classrooms. The method comprises the steps of storing the Presentation Slides in the Memory Unit, projecting the stored Presentation Slides on the Presentation Screen, providing the Presentation Slides as board feed to the Processing Unit and capturing gestures of the Instructor that overlay on the Presentation Screen as sensor feed. The method further comprises the steps of capturing the Instructor’s voice as audio packets and storing them in the Memory Unit, and overlaying the sensor feed on the board feed by the Processing Module in the Processing Unit, along with the corresponding audio packets retrieved from the Memory Unit, to yield an overlaid image. The method further provides steps to obtain a series of consecutive overlaid images by repeating the preceding steps, and then compiling the consecutive overlaid images by the Processing Module to yield the Overlayed Image Feed (OIF). The Overlayed Image Feed is streamed to the Students on the Device(s) in the Remote Classroom.

The method of operating the Processing Module comprises the steps of calibrating the homography of the sensor feed and the board feed in the Processing Unit by the Means of Homography Calibration to yield a homographic matrix, extracting the contour of the Instructor from the sensor feed in the Processing Unit by the Means of Contour Extraction to yield the extracted contour of the Instructor, and overlaying the extracted contour on the board feed by the Means of Warping to yield the Overlayed Image. The method further comprises the step of streaming the series of Overlayed Images to the Device of the Remote Classroom by the Means of Streaming.

In another aspect of the invention, the present invention discloses a system for providing real-time classroom events in remote classrooms wherein the system comprises at least one Classroom having at least one Instructor and at least one Remote Classroom having at least one Student. The Classroom comprises at least one each of a Presentation Screen, Gesture Recognition Unit, Audio Capturing Unit, Control Unit and Board Recognition Unit. The Remote Classroom comprises at least one Device and Students, wherein said Device is capable of displaying the classroom events on its screen.

In yet another aspect of the invention, the present invention discloses a system for providing real-time classroom events in remote classrooms wherein the system comprises at least one Classroom having at least one Instructor. The Classroom comprises at least one Gesture Recognition Unit to capture gestures of the Instructor as sensor feed, at least one Audio Capturing Unit to capture the Instructor’s voice as audio packets and at least one Control Unit. The Control Unit comprises at least one each of a Processing Unit, a Projection Unit capable of projecting Presentation Slides and a Memory Unit capable of storing the Processing Module Storage and information including the Presentation Slides and the captured audio packets.

BRIEF DESCRIPTION OF THE DRAWINGS
Fig 1 is a block diagram representation of the system of the present invention, depicting the Instructor (I), the Classroom (C) and its components comprising the Presentation Screen (10000), Gesture Recognition Unit (11000), Audio Capturing Unit (12000) and a Control Unit (13000). The Control Unit comprises a Memory Unit (13100), a Processing Unit (13200) and a Projection Unit (13300). It further depicts the Remote Classroom (RC) along with its components, a Device (D) and Students (S).
Fig 2(a) illustrates the process flowchart of the Processing Module (PM) processed by the Processing Unit (13200), which comprises the Means of Homography Calibration (13210), Means of Contour Extraction (13220), Means of Warping (13230) and Means of Streaming (13240).
Fig 2(b) illustrates the process flowchart of the Means of Homography Calibration (13210).
Fig 2(c) illustrates the process flowchart of the Means of Warping (13230).
Fig 3 is a block diagram representation of another embodiment of the present invention
Fig 4(a) depicts an example Classroom where the Instructor is teaching.
Fig 4(b) depicts the extracted contour of the Instructor.
Fig 4(c) depicts the Instructor’s contour image after Warping.
Fig 4(d) depicts the overlayed image that is streamed to the Remote Classroom.

DETAILED DESCRIPTION OF THE INVENTION WITH ILLUSTRATIVE EXAMPLES:

The present invention relates to a system to provide real-time classroom events to students at distant locations in remote classrooms. More specifically, the present invention provides a system and method to extract the Instructor’s contour, overlay it on the presentation slide and stream it to the students in Remote Classrooms.

While teaching, Instructors generally make a lot of gestures for the purpose of explaining, including pointing to different sections of the smart board/presentation screen. These gesture-based cues are important as they convey a lot of information in a dynamic environment such as a classroom. The students are able to correlate the areas of focus on the board with the aid of the gestures.

The present invention provides a real-time system and method to replicate the dynamic environment of the Classroom in the Remote Classrooms by overlaying the Instructor’s contour over the presentation slide along with the Instructor’s voice, thereby enabling the students in the remote classroom to view the Instructor and his gestures, along with the voice, on the Presentation Slides.

In a preferred embodiment, the Classroom (C) where the Instructor (I) is physically present comprises a Presentation Screen (10000), a Gesture Recognition Unit (11000) which consists of a motion sensor and a camera, an Audio Capturing Unit (12000) and a Control Unit (13000). The Control Unit (13000) comprises a Memory Unit (13100), a Processing Unit (13200) and a Projection Unit (13300).

The Remote Classroom (RC), where the Instructor (I) is not physically present, comprises a Device (D) and Students (S), as depicted in Fig. 1.

The Instructor (I) in the Classroom (C) uses Presentation Slides (P) as a teaching aid. The Presentation Slides (P) are stored in the Memory Unit (13100) and projected by the Projection Unit (13300). The projected Presentation Slides (P) form the Presentation Screen (10000) on which the Instructor (I) teaches. The Gesture Recognition Unit (11000) captures the gestures of the Instructor (I) as sensor feed for input to the Processing Unit (13200) of the Control Unit (13000). The Presentation Slides (P) stored in the Memory Unit (13100) are input to the Processing Unit (13200) as board feed. The Audio Capturing Unit (12000) captures the Instructor’s voice as audio packets and stores them in the Memory Unit (13100). The Processing Unit (13200) processes the Processing Module (PM), which provides the means to overlay the sensor feed over the board feed and stream it to the Devices (D) in the Remote Classrooms (RC).

The Devices (D) in the Remote Classrooms (RC) are capable of displaying the audio-visual events of the Classroom (C). The Students (S) in the Remote Classroom are able to view the Instructor’s contour overlayed on the Presentation Slide on their Devices (D), along with the Instructor’s voice. The process of extracting the Instructor’s contour, overlaying it on the Presentation Slides (P) and streaming it to the Remote Classroom (RC) is described in detail below.

The process flowchart of the Processing Module (PM) is depicted in Fig 2(a). The Processing Module (PM) overlays the Instructor’s contour on the Presentation Slides and streams it to the Remote Classroom (RC). The Processing Module (PM) is stored in the Memory Unit (13100) and is processed by the Processing Unit (13200). The Processing Module (PM) comprises the steps explained in detail below:
Step 1: Means of Homography Calibration (13210)
Step 2: Means of Contour Extraction (13220)
Step 3: Means of Warping (13230)
Step 4: Means of Streaming (13240)

Step 1. Means of Homography Calibration
The Means of Homography Calibration (13210), as depicted in Fig 2(b), calculates the homography matrix between the sensor feed and the board feed. The Instructor’s extracted contour cannot be directly overlayed on the board feed as the two feeds are in different perspectives. First, image registration is performed on both feeds, i.e. the sensor feed and the board feed. The initial images are from the sensor feed and the board feed (13211). SIFT keypoints are detected, and descriptors are computed for each keypoint, for both images separately (13212). The Scale Invariant Feature Transform (SIFT) algorithm by David Lowe is applied to determine the SIFT keypoints. The descriptors of all keypoints of both images are compared with each other and strong matches are found (13213).

The Random Sample Consensus (RANSAC) algorithm is used to determine the homography matrix (13214). The homography matrix is a 3x3 matrix, as depicted in the equation below. It maps the corresponding points between the two images, i.e. the sensor feed and the board feed.

$$\text{Board Image Perspective} = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{pmatrix} \times \text{Sensor Image Perspective}$$
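For illustration only (this sketch is not part of the specification), the SIFT-plus-RANSAC calibration described above could be implemented along the following lines with OpenCV in Python; the function name, the Lowe ratio threshold of 0.75 and the minimum-match count are assumptions made for the example.

```python
import cv2
import numpy as np

def calibrate_homography(sensor_frame, board_frame, min_matches=10):
    """Estimate the 3x3 homography that maps sensor-feed points onto board-feed points."""
    sift = cv2.SIFT_create()
    gray_s = cv2.cvtColor(sensor_frame, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(board_frame, cv2.COLOR_BGR2GRAY)

    # SIFT keypoints and descriptors for both images separately (13212)
    kp_s, des_s = sift.detectAndCompute(gray_s, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)

    # Compare descriptors and keep only strong matches (Lowe ratio test) (13213)
    matches = cv2.BFMatcher().knnMatch(des_s, des_b, k=2)
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    if len(good) < min_matches:
        raise RuntimeError("not enough keypoint matches to calibrate")

    src = np.float32([kp_s[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches and yields the 3x3 homography matrix (13214)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```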

Step 2. Means of Contour Extraction: The depth and color streams of the image from the Gesture Recognition Unit, i.e. the sensor feed, are extracted using appropriate software. A mask image containing the Instructor’s contour is calculated using appropriate software and then compared with the sensor’s color stream. Background subtraction is performed on the color image to yield the extracted contour (EC).
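Purely as an illustrative sketch (the specification only says "appropriate software"), the masking and background subtraction might be done as follows, assuming the sensor feed provides a color frame plus a 16-bit depth frame in millimetres; the depth band and kernel size are arbitrary example values.

```python
import cv2
import numpy as np

def extract_contour(color_frame, depth_frame, near_mm=500, far_mm=2500):
    """Mask the Instructor out of the color stream using the depth stream,
    i.e. background subtraction on the sensor feed, yielding the extracted contour (EC)."""
    # Keep only pixels whose depth falls within the band where the Instructor stands
    mask = np.where((depth_frame > near_mm) & (depth_frame < far_mm), 255, 0).astype(np.uint8)

    # Remove speckle noise from the mask
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Background subtraction: everything outside the mask becomes black
    extracted_contour = cv2.bitwise_and(color_frame, color_frame, mask=mask)
    return extracted_contour, mask
```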

Step 3. Means of Warping: After extracting the Instructor’s contour from the sensor feed, i.e. the extracted contour (EC), the extracted contour (EC) is multiplied with the homography matrix (13231). The purpose is to bring the extracted contour into alignment, i.e. into the same perspective as the board feed, so that when overlayed, the extracted contour of the Instructor points to the exact point on the board that the Instructor is pointing to in the Classroom (C). The holes created by said perspective transform are filled using standard interpolation techniques (13232). The obtained image is the warped image (13233). The warped image is overlayed on the board feed (13234) to yield the Overlayed Image (OI).
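A minimal sketch of the warping and overlay step, again assuming OpenCV; here warpPerspective's built-in interpolation stands in for the hole-filling step (13232), and the helper names are illustrative rather than taken from the specification.

```python
import cv2

def warp_and_overlay(extracted_contour, mask, board_frame, H):
    """Warp the extracted contour (EC) into the board-feed perspective and overlay it
    on the board feed to yield the Overlayed Image (OI)."""
    h, w = board_frame.shape[:2]

    # Perspective transform of the contour and its mask into board coordinates (13231);
    # linear interpolation fills the holes created by the transform (13232)
    warped = cv2.warpPerspective(extracted_contour, H, (w, h), flags=cv2.INTER_LINEAR)
    warped_mask = cv2.warpPerspective(mask, H, (w, h), flags=cv2.INTER_NEAREST)

    # Overlay the warped contour on the board feed (13234)
    overlayed = board_frame.copy()
    overlayed[warped_mask > 0] = warped[warped_mask > 0]
    return overlayed
```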

Step 4. Means of Streaming: The Overlayed Image (OI) is then encoded with the corresponding audio packets stored in the Memory Unit (13100) and streamed as a series of Overlayed Images (OI1, OI2 ... OIn) by the Processing Unit of the Control Unit, by any known means including wirelessly, to the Devices (D) in the Remote Classrooms (RC).
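For illustration, a bare-bones sketch of pushing the series of Overlayed Images to a remote Device as length-prefixed JPEG frames over TCP; the host address and port are placeholders, and the audio multiplexing that the specification performs with the stored audio packets is omitted here (in practice a standard A/V container or streaming protocol would carry both).

```python
import socket
import struct
import cv2

def stream_overlayed_images(frames, host="192.0.2.10", port=5000, quality=80):
    """Send Overlayed Images (OI1, OI2, ... OIn) to a Device in the Remote Classroom."""
    with socket.create_connection((host, port)) as sock:
        for frame in frames:
            ok, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
            if not ok:
                continue
            payload = jpg.tobytes()
            # Length-prefixed frame so the receiver knows where each image ends
            sock.sendall(struct.pack("!I", len(payload)) + payload)
```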

In another embodiment, the Instructor (CI) is teaching on a Black Board (C10000) in the Classroom (CC), as depicted in Fig 3. The Classroom, where the Instructor (CI) is physically present, comprises at least one each of a Gesture Recognition Unit (C11000), an Audio Capturing Unit (C12000), a Control Unit (C13000) and a Board Recognition Unit (C14000). The Gesture Recognition Unit (C11000) comprises a camera and a motion sensor, and the Control Unit (C13000) comprises a Memory Unit (C13100) and a Processing Unit (C13200). The Board Recognition Unit (C14000) comprises a camera.

The Gesture Recognition Unit (C11000) captures the Instructor’s gestures and provides them as an input image to the Processing Unit (C13200), i.e. the sensor feed. The Board Recognition Unit (C14000) captures the Black Board (C10000) and provides it as an input image to the Processing Unit (C13200), i.e. the board feed. The Processing Module (PM), as illustrated, provides the means to overlay the sensor feed over the board feed and stream it to the Devices (CD) of the Students (CS) in the Remote Classrooms (CRC).

In another embodiment, the Classroom (C) where the Instructor (I) is physically present comprises a Presentation Screen (10000), a Gesture Recognition Unit (11000) which consists of a motion sensor and a camera, an Audio Capturing Unit (12000) and a Control Unit (13000). The Control Unit (13000) comprises a Memory Unit (13100), a Processing Unit (13200) and a Projection Unit (13300). The Remote Classroom (RC), where the Instructor (I) is not physically present, comprises a Device (D) and Students (S), as depicted in Fig. 1.

The Instructor (I) in the Classroom (C) uses Presentation Slides (P) as a teaching aid. The Presentation Slides (P) are stored in the Memory Unit (13100) and projected by the Projection Unit (13300). The projected Presentation Slides (P) form the Presentation Screen (10000) on which the Instructor (I) teaches. The Gesture Recognition Unit (11000) captures the gestures of the Instructor (I) as sensor feed for input to the Processing Unit (13200) of the Control Unit (13000). The Presentation Slides (P) stored in the Memory Unit (13100) are input to the Processing Unit (13200) as board feed. The Audio Capturing Unit (12000) captures the Instructor’s voice as audio packets and stores them in the Memory Unit (13100). The Processing Unit (13200) processes the Processing Module Storage (PMS), which provides the means to overlay the sensor feed over the board feed and stores the series of Overlayed Images for future use at any location.

The invention is now explained with the help of non-limiting examples.

EXAMPLE:
Fig 4(a) illustrates an example setup of a Classroom (C) where the Instructor (I) is teaching on the Presentation Screen (10000). The Presentation Slides (P) stored in the Memory Unit (13100) are projected by the Projection Unit (13300), which forms the Presentation Screen on which the Instructor teaches. The Gesture Recognition Unit (11000) captures the Instructor’s gestures and inputs them to the Processing Unit (13200) as sensor feed. The Presentation Slides (P) stored in the Memory Unit (13100) of the Control Unit (13000) are input to the Processing Unit (13200) as board feed. The Audio Capturing Unit captures the voice of the Instructor as audio packets and stores them in the Memory Unit (13100).

The Processing Unit processes the Processing Module (PM) and computes the homography between the sensor feed and the board feed (13210). The Instructor’s contour is extracted from the sensor feed (13220). Fig 4(b) illustrates the extracted contour of the Instructor. The Instructor is referring to the base of the cylinder of the piston at that particular moment in the Classroom (C). If the extracted contour of the Instructor as shown in Fig 4(b) were overlayed directly on the board feed, the overlayed feed of the Instructor would no longer be pointing at the same content. Image warping is therefore performed to bring the extracted contour into the same perspective as the board feed. The extracted contour of the Instructor is multiplied with the homography matrix obtained by the Means of Homography Calibration (13210) to obtain the warped image. Fig 4(c) depicts the extracted contour image after warping, obtained by the Means of Warping (13230). The warped image is overlayed on the board feed along with the audio packets and streamed to the Remote Classrooms (RC). Fig 4(d) depicts the overlayed image after warping; the extracted contour of the Instructor is pointing at the same content as is being pointed to by the Instructor in the Classroom (C).

The examples described herein are not limited to distance learning. A person skilled in the art will appreciate that the concepts presented above are equally applicable to executive and board meetings, training, project management, etc.
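Tying the illustrative sketches above together, a hypothetical per-frame driver for this example could look as follows; every function name refers to the example sketches given earlier, not to the specification itself, and a single Presentation Slide is assumed as the board feed.

```python
def run_example(sensor_frames, depth_frames, board_slide):
    """Yield Overlayed Images (OI1..OIn) for a sequence of color/depth sensor frames
    against one Presentation Slide used as the board feed."""
    # One-time homography calibration between sensor feed and board feed (13210)
    H = calibrate_homography(sensor_frames[0], board_slide)
    for color, depth in zip(sensor_frames, depth_frames):
        ec, mask = extract_contour(color, depth)          # contour extraction (13220)
        yield warp_and_overlay(ec, mask, board_slide, H)  # warping and overlay (13230)

# Example usage (streaming the result to the Remote Classroom):
# stream_overlayed_images(run_example(sensor_frames, depth_frames, board_slide))
```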

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 Form 5 [07-10-2015(online)].pdf 2015-10-07
2 Form 3 [07-10-2015(online)].pdf 2015-10-07
3 Form 20 [07-10-2015(online)].pdf 2015-10-07
4 Drawing [07-10-2015(online)].pdf 2015-10-07
5 Description(Complete) [07-10-2015(online)].pdf 2015-10-07
6 abstract 5370-CHE-2015.jpg 2015-11-16
7 Petition Under Rule 137 [18-02-2016(online)].pdf 2016-02-18
8 Other Document [18-02-2016(online)].pdf 2016-02-18
9 5370-che-2015 POWER OF ATTORENEY 2222016.pdf 2016-07-01
10 5370-che-2015 CORRESPONDENCE-PA-F5-F1 2222016.pdf 2016-07-01
11 5370-che-2015 FORM-5 2222016.pdf 2016-07-01
12 5370-che-2015 FORM-1 2222016.pdf 2016-07-01
13 5370-CHE-2015-FORM 18 [22-03-2018(online)].pdf 2018-03-22
14 5370-CHE-2015-FER.pdf 2020-05-11
15 5370-CHE-2015-FER_SER_REPLY [09-11-2020(online)].pdf 2020-11-09
16 5370-CHE-2015-AMMENDED DOCUMENTS [09-11-2020(online)].pdf 2020-11-09
17 5370-CHE-2015-MARKED COPIES OF AMENDEMENTS [09-11-2020(online)].pdf 2020-11-09
18 5370-CHE-2015-FORM 13 [09-11-2020(online)].pdf 2020-11-09
19 5370-CHE-2015-FORM-8 [24-02-2021(online)].pdf 2021-02-24
20 5370-CHE-2015-Correspondence, No Objection Certificate, Marked Copy of Form-1, Form-5, Clean Copy of Form-1 Form-5 And Form-8_16-03-2021.pdf 2021-03-16
21 5370-CHE-2015-US(14)-HearingNotice-(HearingDate-01-11-2023).pdf 2023-09-14
22 5370-CHE-2015-Correspondence to notify the Controller [27-10-2023(online)].pdf 2023-10-27
23 5370-CHE-2015-FORM-26 [30-10-2023(online)].pdf 2023-10-30
24 5370-CHE-2015-Response to office action [31-10-2023(online)].pdf 2023-10-31
25 5370-CHE-2015-Response to office action [16-11-2023(online)].pdf 2023-11-16
26 5370-CHE-2015-FORM 13 [16-11-2023(online)].pdf 2023-11-16
27 5370-CHE-2015-MARKED COPIES OF AMENDEMENTS [16-11-2023(online)].pdf 2023-11-16
28 5370-CHE-2015-AMMENDED DOCUMENTS [16-11-2023(online)].pdf 2023-11-16
29 5370-CHE-2015-RELEVANT DOCUMENTS [16-11-2023(online)].pdf 2023-11-16
30 5370-CHE-2015-EVIDENCE FOR REGISTRATION UNDER SSI [16-11-2023(online)].pdf 2023-11-16
31 5370-CHE-2015-EDUCATIONAL INSTITUTION(S) [16-11-2023(online)].pdf 2023-11-16
32 5370-CHE-2015-PatentCertificate12-01-2024.pdf 2024-01-12
33 5370-CHE-2015-IntimationOfGrant12-01-2024.pdf 2024-01-12

Search Strategy

1 2020-01-2816-56-43_28-01-2020.pdf

ERegister / Renewals

3rd: 10 Apr 2024

From 07/10/2017 - To 07/10/2018

4th: 10 Apr 2024

From 07/10/2018 - To 07/10/2019

5th: 10 Apr 2024

From 07/10/2019 - To 07/10/2020

6th: 10 Apr 2024

From 07/10/2020 - To 07/10/2021

7th: 10 Apr 2024

From 07/10/2021 - To 07/10/2022

8th: 10 Apr 2024

From 07/10/2022 - To 07/10/2023

9th: 10 Apr 2024

From 07/10/2023 - To 07/10/2024

10th: 10 Apr 2024

From 07/10/2024 - To 07/10/2025