
A System For Providing Real Time Interpretation Of Sign Language And Method Thereof

Abstract: A System For Providing Real-Time Interpretation Of Sign Language And Method Thereof. A system for providing real-time interpretation of the sign language of a Person (X) to a Person (Y) in a User Selected Language. The system comprises at least one each of a Gesture Capturing Unit to capture the gestures of said Person (X) as Captured Gestures, at least one Control Unit to interpret said Captured Gestures and at least one Graphical User Interface to display the Conversion Result. The Control Unit comprises at least one each of a Memory Unit capable of storing the Captured Gestures, Trained Data Set and Processing Module, and at least one Processing Unit capable of processing the Processing Module for providing real-time interpretation of sign language. (Fig 1)


Patent Information

Application #: 201641028504
Filing Date: 22 August 2016
Publication Number: 08/2018
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: sunita@skslaw.org
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-12-15
Renewal Date:

Applicants

AMRITA VISHWA VIDYAPEETHAM
Amritapuri, Clappana PO Kollam 690525, Kerala, India

Inventors

1. MADATHILKULANGARA, Geetha
AJ901 Amritanjali Apartments MA Math Kollam, Kerala 690525, India
2. KAIMAL, Dr. M Ramachandra
Pranavam, 52A Kattachal Road Trivandrum, Kerala 695 006, India

Specification

Claims: WE CLAIM:

1. A system for providing real-time interpretation of sign language of a Person (X) to a Person (Y) in User Selected Language (UL), wherein
• said System (1000) comprises at least one each of Gesture Capturing Unit (100) to capture the gestures of said Person (X) as Captured Gesture (CG), at least one Control Unit (110) to interpret said Captured Gestures (CG) and at least one Graphical User Interface (120) to display the Conversion Result (CR), wherein
• said Control Unit (110) comprises at least one each of Memory Unit (111) capable of storing the Captured Gesture (111(CG)), Trained Data Set (111(TD)) and Processing Module (111(PM)), and at least one Processing Unit (112) capable of processing the Processing Module (PM)
for providing real-time interpretation of sign language.

2. The system for providing real-time interpretation of sign language as claimed in claim 1 wherein said Conversion Result (CR) is an audio and visual conversion of said Captured Gestures (CG) to a User Selected Language (UL).

3. The method for providing real-time interpretation of sign language as claimed in claim 1 wherein said method comprises the steps of
a) storing said Processing Module (PM) in the Memory Unit (111(PM)),
b) storing said Trained Data Set (TD) in the Memory Unit (111(TD)),
c) capturing gestures of said Person (X) as Captured Gestures (CG) using said Gesture Capturing Unit (100),
d) storing said Captured Gestures (CG) in the Memory Unit (111(CG)),
e) processing of said stored captured gestures (CG) in said Processor (112) using said Processing Module (PM).

4. The method for providing real-time interpretation of sign language as claimed in claim 3 wherein processing said stored captured gestures (CG) using said Processing Module (PM) comprises the steps of
a) performing user language selection by Person (Y) from the available Trained Data Sets stored in the Memory Unit (111(TD)),
b) processing of said Captured Gestures (CG) to obtain set of strokes,
c) grouping said set of strokes to obtain conversion result (CR) for display on GUI (120).

5. The method for providing real-time interpretation of sign language as claimed in claim 4 wherein said grouping said set of strokes comprises the steps of:
a) detecting the start and end of each said set of strokes thereby eliminating Movement Epenthesis between each said set of strokes to obtain meaningful set of strokes,
b) grouping and converting said meaningful set of strokes in the User Selected Language (UL) as Conversion Result (CR), and
c) Displaying the Conversion Result (CR) on the Graphical User Interface (120).

6. The method for providing real-time interpretation of sign language wherein said trained dataset as claimed in claims 1 and 3 is created by the method comprising the steps of
(a) Capturing gestures and storing the same as Sample Gestures (SG),
(b) processing of said Sample Gestures (SG),
(c) obtaining set of strokes, and
(d) storing said set of strokes with corresponding words of Natural Language.

7. A system for providing interpretation of sign language of plurality of Persons (P1 to Pn) in Available Language (AL), wherein
• said System (1000) comprises at least one each of Gesture Capturing Unit (100) to capture the gestures of said Persons (P1 to Pn) as Captured Gesture (CG), at least one Control Unit (110) to interpret said Captured Gestures (CG) and at least one Graphical User Interface (120) to display the Conversion Result (CR), wherein
• said Control Unit (110) comprises at least one each of Memory Unit (111) capable of storing the Captured Gesture (111(CG)), Trained Data Set (111(TD)), Processing Module (111(PM)) and Conversion Result (CR), and at least one Processing Unit (112) capable of processing the Processing Module (PM)
for providing real-time interpretation of sign language.

8. The system for providing interpretation of sign language of plurality of Persons (P1 to Pn) as claimed in claim 7 wherein said Conversion Result (CR) is stored in said Memory Unit (111) to be viewed for future reference or as evidence.
Description: FIELD OF THE INVENTION:
The present invention relates to a system to provide real-time interpretation of sign language. More specifically, the present invention relates to a system and method to facilitate real-time interpretation of sign language in a user selected language that is understandable to a user who does not understand sign language gestures. The present invention can also be used to record the conversation of two or more persons communicating in sign language for future reference or as evidence.

BACKGROUND OF THE INVENTION:
Sign language is a movement language which expresses semantic information through a series of hand and arm motions, facial expressions and head/body postures. Sign languages are well-structured languages with a phonology, morphology, syntax and grammar distinct from spoken languages. It is the basic communication medium among deaf people. Every country has its own sign language. A translator is usually needed when an ordinary person wants to communicate with a deaf person. The urge to support the integration of deaf people into the hearing society has made automatic sign language recognition an area of interest for researchers.

The major challenge in the recognition of continuous sign language sentences is Movement Epenthesis. When a person gesticulates a full sentence of sign language in a continuous fashion, there will be meaningless movement segments, called Movement Epenthesis frames or inter-sign transitions, between two consecutive words. These Movement Epenthesis frames bridge the end frame of one word and the start frame of the next word. Hence sign recognition systems need to identify or ignore the Movement Epenthesis frames in order to segment the words properly prior to the recognition of each word.

The existing sign recognition systems mainly deal with Movement Epenthesis by explicitly modeling it with a dedicated Hidden Markov Model, as published in the paper titled ‘A Framework for Recognizing the Simultaneous Aspects of American Sign Language’ in Computer Vision and Image Understanding, 2001. The IEEE papers titled ‘Transition movement models for large vocabulary continuous sign language recognition’ and ‘Real-time American sign language recognition using desk and wearable computer based video’ also use Hidden Markov Models. In the paper titled ‘ASL Recognition Based on a Coupling between HMMs and 3D Motion Analysis’, context-dependent signs are used to model Movement Epenthesis. Having a dedicated Hidden Markov Model for each word considerably increases the complexity and also suffers from misclassifications due to a fixed threshold.

The IEEE paper titled ‘An HMM-Based threshold model approach for gesture recognition’ uses filler models or a garbage model. However, it is difficult to obtain all possible gesture patterns corresponding to non-sign patterns for training the filler models. Even the same two adjacent words, when swapped in the order of occurrence, give rise to different Movement Epenthesis patterns. Hence inserting a new word amounts to inserting a new Movement Epenthesis pattern for every possible word coming adjacent to it in the filler model. This results in N² Movement Epenthesis models for N signs.

The IEEE paper titled ‘Sign Language Spotting with a Threshold Model Based on Conditional Random Fields’ uses conditional random fields to remove Movement Epenthesis from a sentence. This approach does not result in sign recognition, but only in segmentation of the sentence. The IEEE paper titled ‘Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming’ discusses handling Movement Epenthesis using nested dynamic programming and an enhanced level matching process, wherein Movement Epenthesis is allowed to exist when no good match can be found; the search process is also optimized using dynamic programming. Though this reduces the overhead of training explicit non-sign patterns, it still requires a good number of matching comparisons for each word segment. Moreover, searching using a start frame and end frame alone may not be sufficient in all cases to confirm the presence of a word at that location. At the same time, searching for a match of all frames from start to end may also fail, since variable frame rates of videos, gesturing speed, or other spatio-temporal variability of gesturing may cause loss of synchronization between the frames to be compared for each particular word.

Therefore, there is a need for a sign language recognition system that effectively deals with Movement Epenthesis and thereby provides real-time interpretation of sign language.

OBJECT AND SUMMARY OF THE INVENTION:
In order to obviate the drawbacks in the existing state of the art, the main object of the present invention is to provide a system for real-time interpretation of sign language.

Another object of the present invention is to provide a system to deal with the Movement Epenthesis problem effectively.

Yet another object of the present invention is to provide a method for real-time interpretation of sign language.

Yet another object of the present invention is to provide a system and method to record the conversation of two or more persons communicating in sign language for future reference or as evidence.

Accordingly, the present invention provides a system for real-time interpretation of sign language. More specifically, the present invention relates to a system and method to facilitate real-time interpretation of sign language in a user selected language that is understandable to a user who does not understand sign language gestures. The present invention provides a system that captures the gestures of the person communicating in sign language; the gestures are converted into a set of strokes based on key maximum curvature points. The present system is based on probabilistic prediction of strokes, grouping together the meaningful strokes and isolating the filler strokes or Movement Epenthesis.

The present invention also provides a system and method to record the conversation of two or more persons communicating in sign language for future reference or as evidence.

A system for providing real-time interpretation of the sign language of a Person (X) to a Person (Y) in a User Selected Language. The system comprises at least one each of a Gesture Capturing Unit to capture the gestures of Person (X) as Captured Gestures, at least one Control Unit to interpret the Captured Gestures and at least one Graphical User Interface to display the Conversion Result. The Control Unit comprises at least one each of a Memory Unit capable of storing the Captured Gestures, Trained Data Set and Processing Module, and at least one Processing Unit capable of processing the Processing Module. The Conversion Result is an audio and visual conversion of the Captured Gestures to a User Selected Language that is selected by Person (Y).

BRIEF DESCRIPTION OF THE DRAWINGS
Fig 1 is a block diagram representation of the present invention, the System (1000) and its components comprising the Capturing Unit (100), Control Unit (110) and Graphical User Interface (120), used to convert the gestures of Person (X) into the language selected by Person (Y).
Fig 2 depicts process flowchart of converting the sign language gestures into user selected language.
Fig 2(a) depicts process flowchart of processing captured gestures.
Fig 3 depicts an example of stroke based representation of a gesture.
Fig 4 illustrates an example of key maximum curvature points marked for different sign gestures.
Fig 5 illustrates process flowchart of kth order Markov model based grouping of strokes.
Fig 6 illustrates stroke sequence of a sentence.
Fig 7 illustrates process flowchart of association rule mining based grouping of strokes.
Fig 8 illustrates process flowchart of N-gram based stroke sequence prediction based grouping of strokes.
Fig 9 illustrates process flowchart of Finite State based grouping of strokes.
Fig 10 is a block diagram representation of the system according to another embodiment.
DETAILED DESCRIPTION OF THE INVENTION WITH ILLUSTRATIVE EXAMPLES:
The present invention relates to a system to provide real-time interpretation of sign language. More specifically the present invention relates to a system and method to facilitate real time interpretation of sign language in a user selected language that is understandable to user who does not understand sign language gestures.

In a preferred embodiment, as illustrated in Fig. 1 and Fig. 2, the System (1000) for providing real-time interpretation of sign language gesticulated by Person (X) to Person (Y) in the User Selected Language (UL) comprises at least one each of a Gesture Capturing Unit (100) to capture the gestures of Person (X) as Captured Gestures (CG), at least one Control Unit (110) to interpret said Captured Gestures (CG) and at least one Graphical User Interface (120) to display the Conversion Result (CR). The Control Unit (110) comprises at least one each of a Memory Unit (111) capable of storing the Captured Gesture (111(CG)), Trained Data Set (111(TD)) and Processing Module (111(PM)), and at least one Processing Unit (112) capable of processing the Processing Module (111(PM)) stored in the Memory Unit (111) (Figure 1).

The Trained Data Set (111(TD)) is created and stored (200) in the Memory Unit (111). The Trained Data Set (111(TD)) is created by storing various Sample Gestures (SG). The Sample Gestures (SG) are processed to obtain sets of strokes, which are grouped together, and the corresponding word of the natural language is then stored for each gesture. Multiple Trained Data Sets (111(TD)) are stored, for example English, Hindi, French, etc. The User Selected Language (UL) is the language into which the gestures of Person (X) are converted. The selection of the User Selected Language (UL) by Person (Y) is based on the stored Trained Data Sets (111(TD)). For example, if Trained Data Sets (111(TD)) are available in English and French, Person (Y) will only be able to select between English and French. The Processing Module (111(PM)) is also stored (210) in the Memory Unit (111). The Capturing Unit (100) captures the gestures (220) of Person (X) as Captured Gestures (CG) and then stores (230) them in the Memory Unit (111). The Processing Module (111(PM)) is processed by the Processing Unit (112) to interpret (240) the Captured Gestures (CG) of Person (X) (Figure 2).
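As an illustration of the kind of structure the Trained Data Set (111(TD)) could take, the sketch below (Python; all names and entries are hypothetical, not taken from the specification) keys a stroke-sequence-to-word dictionary by language, so that the languages offered to Person (Y) are exactly those with a stored Trained Data Set.

```python
from typing import Dict, Optional, Tuple

StrokeSequence = Tuple[str, ...]   # e.g. ("S1", "S4", "S7")

# One dictionary per stored Trained Data Set: grouped stroke sequence -> word
# of the natural language. Entries here are placeholders for illustration only.
trained_data_sets: Dict[str, Dict[StrokeSequence, str]] = {
    "English": {("S1", "S4", "S7"): "hello", ("S2", "S3"): "thanks"},
    "French":  {("S1", "S4", "S7"): "bonjour", ("S2", "S3"): "merci"},
}

def available_languages() -> list:
    """Languages Person (Y) may select, i.e. those with a stored Trained Data Set."""
    return sorted(trained_data_sets)

def lookup(language: str, strokes: StrokeSequence) -> Optional[str]:
    """Word for a grouped stroke sequence in the User Selected Language, if trained."""
    return trained_data_sets.get(language, {}).get(strokes)

print(available_languages())            # ['English', 'French']
print(lookup("French", ("S2", "S3")))   # 'merci'
```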

Interpreting the Captured Gestures comprises the following steps (Figure 2(a)); a sketch of this pipeline is given after the list:
• User language selection: The Person (Y) chooses from the available languages in the Trained Data Set (111(TD)) in which he wants the gestures of the Person (X) to be converted.
• Obtaining set of strokes: The Captured Gestures (CG) of the Person (X) are processed to obtain set of strokes. Explained in detail below.
• Grouping set of strokes: The set of strokes are grouped to form sentences. The sentences are then matched with the Trained Data Set to convert the sentences into the user selected language to obtain Conversion Result (CR). Explained in detail below.
• Displaying: The Conversion Result (CR) is displayed to the Person (Y) on the Graphical User Interface (120).
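The following is a compact, purely illustrative Python sketch of this interpretation pipeline. The function names, the stroke labels and the greedy longest-match grouping are assumptions made for the sketch; the grouping actually described in this specification is the probabilistic stroke prediction explained in the sections below.

```python
from typing import Dict, List, Sequence, Tuple

StrokeSeq = Tuple[str, ...]

def obtain_strokes(captured_gestures: Sequence[str]) -> List[str]:
    # Placeholder: in the described system the trajectory of the Captured Gestures (CG)
    # is segmented at Key Maximum Curvature Points; here gestures are already stroke labels.
    return list(captured_gestures)

def group_strokes(strokes: List[str], vocabulary: Dict[StrokeSeq, str]) -> List[StrokeSeq]:
    # Placeholder for the probabilistic grouping that isolates Movement Epenthesis:
    # greedily take the longest prefix that matches a trained word, otherwise skip
    # one stroke and treat it as a filler (ME) stroke.
    groups: List[StrokeSeq] = []
    i = 0
    while i < len(strokes):
        for j in range(len(strokes), i, -1):
            candidate = tuple(strokes[i:j])
            if candidate in vocabulary:
                groups.append(candidate)
                i = j
                break
        else:
            i += 1          # unmatched stroke treated as Movement Epenthesis
    return groups

def interpret(captured: Sequence[str],
              trained: Dict[str, Dict[StrokeSeq, str]],
              language: str) -> str:
    if language not in trained:                         # user language selection
        raise ValueError("No Trained Data Set for the selected language")
    strokes = obtain_strokes(captured)                  # obtaining the set of strokes
    groups = group_strokes(strokes, trained[language])  # grouping the set of strokes
    words = [trained[language][g] for g in groups]
    return " ".join(words)                              # Conversion Result (CR) for the GUI

trained = {"English": {("S1", "S2"): "hello", ("S4",): "friend"}}
print(interpret(["S1", "S2", "S9", "S4"], trained, "English"))   # hello friend
```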

Obtaining set of strokes:
Obtaining the set of strokes is the first step towards obtaining the Conversion Result (CR). The Captured Gestures (CG) are gestures/signs of Person (X) and are represented as strokes. A stroke can be defined as a part of a sign segmented out based on the Key Maximum Curvature Points (KMCP) of the global trajectory. KMCPs are points on the trajectory where major curvature changes occur. This gesture trajectory representation incorporates both global and local features of the gesture. The global trajectory of each sign is segmented into a set of strokes by cutting at the KMCPs. To incorporate the local information of the gesture, the hand shape of the key frame at the stroke endpoints is also made part of the stroke. Fig 3 shows the stroke representation of a sign language word. The word has 4 KMCPs, shown as dots in the trajectory. The key frames at each KMCP, which are also considered for feature extraction, are shown in the figure. Each stroke feature vector has size [1X45] (global feature [1X32] and local feature vector [1X13]). The KMCPs, which are the major curvature changes in the trajectory, are marked with white dots in Fig 4.
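The specification does not spell out numerically how "major curvature changes" are detected, so the sketch below is only one plausible reading: it flags trajectory points whose turning angle exceeds a threshold as KMCPs and cuts the global trajectory there. The threshold value, the 2-D NumPy representation and the absence of the local hand-shape feature are assumptions made for the sketch.

```python
import numpy as np

def key_maximum_curvature_points(trajectory: np.ndarray, angle_threshold_deg: float = 35.0) -> list:
    """Indices of Key Maximum Curvature Points (KMCPs) of a 2-D hand trajectory.
    A point is kept when the turning angle between the incoming and outgoing
    direction vectors exceeds the threshold (a proxy for 'major curvature change')."""
    kmcps = []
    for i in range(1, len(trajectory) - 1):
        v_in = trajectory[i] - trajectory[i - 1]
        v_out = trajectory[i + 1] - trajectory[i]
        denom = np.linalg.norm(v_in) * np.linalg.norm(v_out)
        if denom == 0:
            continue
        cos_angle = np.clip(np.dot(v_in, v_out) / denom, -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) > angle_threshold_deg:
            kmcps.append(i)
    return kmcps

def segment_into_strokes(trajectory: np.ndarray) -> list:
    """Cut the global trajectory at the KMCPs, yielding the set of strokes."""
    cuts = [0] + key_maximum_curvature_points(trajectory) + [len(trajectory) - 1]
    return [trajectory[cuts[k]:cuts[k + 1] + 1] for k in range(len(cuts) - 1)]

# toy trajectory: rightwards, then sharply upwards -> one KMCP, hence two strokes
traj = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]], dtype=float)
print(key_maximum_curvature_points(traj))      # [2]
print(len(segment_into_strokes(traj)))         # 2
```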

The trajectories are divided into three segments by cutting at the KMCPs. The stroke based representation has helped in better recognition of gestures by accommodating the slight variabilities in the trajectory when different users gesticulate with spatio-temporal variations. Signs of the same words showing spatio-temporal variability show similarity at the stroke level. Shapes in Fig 4 depict an example. Since strokes are shared between words, the number of strokes will be less than the number of words; hence it reduces the training samples. The combination of local and global motion has resolved the ambiguity of recognizing signs having the same global motion but different hand configurations. Another advantage of the model is that we do not need any dedicated models for representing each sign.

Grouping set of strokes:
The set of strokes are grouped using any of the four techniques mentioned below to eliminate Movement Epenthesis in continuous sign language. When Person (X) gestures a full sentence of sign language in a continuous fashion, recognition becomes challenging. The challenge involved is the presence of filler strokes, called inter-sign transitions or Movement Epenthesis, which bridge two consecutive words. One Movement Epenthesis may sometimes include multiple strokes. Each stroke corresponds to a trajectory or a sequence of frames. The sub-gesture problem, which is another challenge in this context, happens when a dynamic gesture of one particular word matches partly with the gesture of another word. The set of strokes are grouped together to form meaningful sentences while eliminating the Movement Epenthesis.

The present invention provides a solution to Movement Epenthesis segmentation based on probabilistic prediction of strokes. The representation of signs as stroke sequences has paved the way for this method. The Captured Gestures (CG) are converted into a stroke sequence such as (Sx, Sy, ..., Sz). Unlike the usual approach of searching for filler gestures in the sentence and eliminating them, the present method groups the meaningful strokes together, which naturally isolates the filler strokes or ME. There is also no need for a training phase for Movement Epenthesis strokes, which makes the entire framework less complex. The present invention also resolves the sub-gesture problem mentioned above. Grouping of meaningful strokes is done using four methods of stroke sequence prediction, which are explained below.

1. All kth order Markov Model based Movement Epenthesis (ME) segmentation
In the all kth order Markov model, all orders of the Markov model are generated to predict the occurrence of an End Stroke of a word. Algorithm 1 starts with the 0th order Markov model and each iteration increments the order of the Markov model until an abrupt change in the likelihood probability is seen, which indicates the end of a word, as shown in Fig 5.

The probability of occurrence of state S_i given the previous states is

P(S_i | S_1, S_2, ..., S_{i-1}) ≈ P(S_i | S_{i-k}, ..., S_{i-1})

The probability of a sequence of states is

P(S_1, S_2, ..., S_n) = ∏_{i=1}^{n} P(S_i | S_{i-k}, ..., S_{i-1})

wherein S_1, S_2, S_3, ... are the stroke sequence states. In the testing phase, the probability of a sequence can be found in the same way using these estimated Markov model parameters.

When a continuous sign language stream corresponding to a continuous stream of sentences is gesticulated, a stream of strokes (S_n)_{n ∈ Z} is observed. On the arrival of each symbol S_0 = s (stroke), the probability of occurrence of the symbol is estimated given the infinite past history. The notation S_{-∞}^{-1} indicates the infinite past history. The past can be approximated by a finite context according to the Markov chain property. Hence,

P(S_0 = s | S_{-∞}^{-1}) ≈ P(S_0 = s | S_{-k}^{-1})

This algorithm provides a way of finding the end of a word stroke based on the pattern of change in the estimated likelihood probability between the current and the previous iteration. The likelihood probabilities of the current and the previous iteration are compared, together with the past histories considered in the previous, current and next iterations. The past history of the next iteration is calculated from these quantities, using a threshold on the change in the likelihood probability.

It is possible that single or multiple ME strokes follow this abrupt change in the probability of the sequence.

Hence, in order to determine the start of the next word, the iteration is restarted from the 0th order Markov model and the algorithm again waits for an abrupt change in likelihood. In addition, the kth order Markov process is also capable of solving the sub-gesture problem. Some gestures are part of another gesture, so it is possible that the super-gesture may be misclassified as a sub-gesture. In this approach, the algorithm segments the sentence into words only if there occurs an abrupt change in the likelihood probability of a sequence. If the sequence contains a super-gesture, then segmentation occurs only if there is an abrupt decrease in probability. Thus this method can clearly distinguish between a sub-gesture and a super-gesture. In the testing phase, the probability of a sequence can be computed using these estimated transition probabilities. Fig 5 shows the flow diagram.
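The exact likelihood update and threshold rule are not reproduced in this text, so the following Python sketch only mirrors the general idea under stated assumptions: transition probabilities are estimated for every order from the trained word stroke sequences, the order grows with each arriving stroke, and a fall of the context-conditional probability below a fixed threshold is taken as the "abrupt change" that closes a word (the triggering stroke is treated as an ME filler). The threshold value and the restart behaviour are assumptions made for the sketch.

```python
from collections import defaultdict
from typing import Dict, List, Sequence, Tuple

Context = Tuple[str, ...]

def train_all_orders(word_sequences: List[Sequence[str]],
                     max_order: int) -> Dict[int, Dict[Context, Dict[str, float]]]:
    """Estimate P(stroke | previous k strokes) for every order k = 0 .. max_order."""
    counts: Dict[int, Dict[Context, Dict[str, int]]] = {
        k: defaultdict(lambda: defaultdict(int)) for k in range(max_order + 1)
    }
    for seq in word_sequences:
        for k in range(max_order + 1):
            for i, stroke in enumerate(seq):
                context = tuple(seq[max(0, i - k):i])
                counts[k][context][stroke] += 1
    models: Dict[int, Dict[Context, Dict[str, float]]] = {}
    for k, contexts in counts.items():
        models[k] = {c: {s: n / sum(nxt.values()) for s, n in nxt.items()}
                     for c, nxt in contexts.items()}
    return models

def segment_stream(stream: List[str],
                   models: Dict[int, Dict[Context, Dict[str, float]]],
                   threshold: float = 0.05) -> List[List[str]]:
    """Grow the Markov order with each arriving stroke; an abrupt fall of the
    conditional probability below the threshold marks the end of a word, after
    which the search restarts from the 0th order model."""
    words: List[List[str]] = []
    current: List[str] = []
    for stroke in stream:
        k = len(current)                               # current Markov order
        p = models.get(k, {}).get(tuple(current), {}).get(stroke, 0.0)
        if current and p < threshold:                  # abrupt change in likelihood
            words.append(current)                      # close the word; the triggering
            current = []                               # stroke is treated as an ME filler
            continue
        current.append(stroke)
    if current:
        words.append(current)
    return words

models = train_all_orders([["S1", "S2", "S3"], ["S4", "S5"]], max_order=3)
print(segment_stream(["S1", "S2", "S3", "S9", "S4", "S5"], models))
# [['S1', 'S2', 'S3'], ['S4', 'S5']]
```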

2. Association Rule Mining (ARM) based ME segmentation
In an alternative method, the present invention provides the ARM based ME segmentation, which uses the idea of Association Analysis. The methodology of Association Analysis, which is useful for discovering interesting relationships hidden in large sets of data, is applied here for stroke prediction. Figure 7 shows the flow diagram. The method aims to extract the interesting correlations, frequent patterns, associations or causal structures among sets of data items in a data repository.

The ARM finds association rules that satisfy two measures, known as minimum support and confidence, from a given data repository. In this method the idea is to segment out the ME strokes.

For instance, let S = {S1, S2, S3, ..., SN} be the set of all possible strokes in the SL Word Dictionary and T = {t1, t2, t3, ..., tN} be the set of all transactions, wherein N is the number of transactions, each of which corresponds to an ISL word stroke sequence.

The transactions (Si-1 Si-2 -> Si), in which the stroke sequence Si-1 Si-2 precedes Si, are created for all corresponding SL word stroke sequences from i = 1 to N. For example, one such transaction is S1S2 -> S3, in which the stroke sequence S1S2 precedes the stroke S3.

The strength of an association rule, say X -> Y, where X and Y are subsets of S (the set of all transaction stroke sequences) and X ∩ Y = ∅, is measured in terms of its “support” and “confidence”.

The support determines how often a rule is applicable to a given data set, while the confidence determines how frequently strokes in Y appear in transactions that contain X. Another important property of an item set is its support count, which refers to the number of transactions that contain a particular stroke set. Mathematically, the support count s(X) for a stroke set X can be stated as follows:

s(X) = |{ t_i : X ⊆ t_i, t_i ∈ T }|

where |.| indicates the number of elements in the set. The formal definitions of these metrics are

support(X -> Y) = s(X ∪ Y) / N,  confidence(X -> Y) = s(X ∪ Y) / s(X)

This support and confidence measure is used to calculate the sequence support, and whenever there occurs an abrupt change in the support and confidence measure, the method segments the sentence, indicating the end of a word. The output of this segmentation module is a set of stroke sequences corresponding to the words in the sentence. The text corresponding to each word is then identified by considering the order and the stroke sequences. The resulting output is the text sentence containing the segmented words.
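The sketch below restates the support and confidence computations in Python over stroke "transactions"; the measures are the classical Association Rule Mining definitions given above, while the abrupt-change segmentation rule itself (and its thresholds) is not reimplemented here. The transaction contents are illustrative placeholders.

```python
from typing import FrozenSet, List, Set

def support_count(itemset: FrozenSet[str], transactions: List[Set[str]]) -> int:
    """s(X): number of transactions (word stroke sequences) containing every stroke in X."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset: FrozenSet[str], transactions: List[Set[str]]) -> float:
    """support(X) = s(X) / N, with N the number of transactions."""
    return support_count(itemset, transactions) / len(transactions)

def confidence(x: FrozenSet[str], y: FrozenSet[str], transactions: List[Set[str]]) -> float:
    """confidence(X -> Y) = s(X ∪ Y) / s(X)."""
    sx = support_count(x, transactions)
    return support_count(x | y, transactions) / sx if sx else 0.0

# toy transactions built from SL word stroke sequences (placeholder data)
transactions = [{"S1", "S2", "S3"}, {"S1", "S2", "S4"}, {"S2", "S3"}]
print(support(frozenset({"S1", "S2"}), transactions))                         # 0.666...
print(confidence(frozenset({"S1", "S2"}), frozenset({"S3"}), transactions))   # 0.5
```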

3. N-gram based stroke sequence prediction for ME segmentation
In this approach an N-gram sequence prediction model is employed for segmenting out the meaningful stroke sequences corresponding to the words in the ISL Dataset. N-gram sequence prediction tables are created by considering all substrings of length N. For each SL word stroke sequence, entries are made in all possible N-gram tables. The N-gram sequence prediction tables are maintained as hash tables with the sequence number as index for easy access of the predicted sequence. Shown below is an example of sample stroke sequences and the corresponding N-gram tables. When the SL stroke sequence of a sentence arrives, it is checked for a hit in the N-gram tables. A miss in the N-gram table indicates the presence of ME segments. Fig 8 indicates the flow diagram.

Example: NGram Based Method
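As a stand-in for sample stroke sequences and their N-gram tables, the Python sketch below builds simple N-gram tables from toy stroke sequences and flags strokes of an incoming sentence that never hit the tables as likely ME segments. The table layout (a set per N rather than a hash table indexed by sequence number) and the flagging rule are simplifications assumed for the sketch.

```python
from collections import defaultdict
from typing import Dict, List, Set, Tuple

def build_ngram_tables(word_stroke_sequences: List[List[str]],
                       max_n: int = 3) -> Dict[int, Set[Tuple[str, ...]]]:
    """One table per N holding every length-N stroke substring of the trained SL words."""
    tables: Dict[int, Set[Tuple[str, ...]]] = defaultdict(set)
    for seq in word_stroke_sequences:
        for n in range(1, max_n + 1):
            for i in range(len(seq) - n + 1):
                tables[n].add(tuple(seq[i:i + n]))
    return tables

def flag_me_strokes(sentence: List[str],
                    tables: Dict[int, Set[Tuple[str, ...]]],
                    n: int = 2) -> List[bool]:
    """True for strokes covered by no length-n hit in the N-gram tables,
    i.e. a miss that indicates a Movement Epenthesis segment."""
    covered = [False] * len(sentence)
    for i in range(len(sentence) - n + 1):
        if tuple(sentence[i:i + n]) in tables.get(n, set()):
            for j in range(i, i + n):
                covered[j] = True
    return [not c for c in covered]

tables = build_ngram_tables([["S1", "S2", "S3"], ["S4", "S5"]])
print(flag_me_strokes(["S1", "S2", "S9", "S4", "S5"], tables))
# [False, False, True, False, False]  -> S9 is flagged as ME
```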

4. Finite state machine based solution for ME.
The fourth solution proposed for solving the ME segmentation problem uses finite state machines, which can correctly differentiate between a word and a non-word. In this solution, a finite state automaton is constructed for each of the SL words in the vocabulary.

In the next step, when a continuous SL sentence arrives, the automata whose first state matches the current state are identified out of all the automata. The identified automata are then scanned further until an end state is reached.
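As a rough illustration of this fourth solution, the sketch below encodes each SL word as a linear automaton over its stroke labels and scans a continuous stroke stream with them in Python. Representing each automaton simply as its ordered stroke tuple, preferring the longer match to cover super-gestures, and skipping strokes that complete no automaton are assumptions made for the sketch.

```python
from typing import Dict, List, Sequence, Tuple

def build_word_automata(vocabulary: Dict[str, Sequence[str]]) -> Dict[str, Tuple[str, ...]]:
    """One linear automaton per SL word: its states are simply the ordered stroke labels."""
    return {word: tuple(strokes) for word, strokes in vocabulary.items()}

def recognise_sentence(stream: List[str], automata: Dict[str, Tuple[str, ...]]) -> List[str]:
    """At each position keep the automata whose first state matches the current stroke,
    scan them until an end state is reached, and prefer the longest completed automaton
    (super-gesture over sub-gesture). Strokes completing no automaton are skipped as ME."""
    words: List[str] = []
    i = 0
    while i < len(stream):
        matched = None
        for word, states in automata.items():
            if states and states[0] == stream[i] and tuple(stream[i:i + len(states)]) == states:
                if matched is None or len(states) > len(automata[matched]):
                    matched = word
        if matched:
            words.append(matched)
            i += len(automata[matched])
        else:
            i += 1                       # non-word / Movement Epenthesis stroke
    return words

automata = build_word_automata({"hello": ["S1", "S2"], "friend": ["S4"]})
print(recognise_sentence(["S1", "S2", "S9", "S4"], automata))    # ['hello', 'friend']
```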

In another embodiment, the present system is used to store the conversation between two or more persons communicating in sign language for future reference or evidence. Fig 9 shows the flow diagram.

The System (1000) provides interpretation of the sign language of a plurality of Persons (P1 to Pn) in the Available Language (AL). The System (1000) comprises at least one each of Gesture Capturing Unit (100) to capture the gestures of said Persons (P1 to Pn) as Captured Gesture (CG), at least one Control Unit (110) to interpret said Captured Gestures (CG) and at least one Graphical User Interface (120) to display the Conversion Result (CR) (Figure 10).

The Control Unit (110) comprises at least one each of Memory Unit (111) capable of storing the Captured Gesture (111(CG)), Trained Data Set (111(TD)), Processing Module (111(PM)) and Conversion Result (CR) for future reference or evidence, and at least one Processing Unit (112) capable of processing the Processing Module (PM) to provide interpretation of sign language.

The Conversion Result (CR) can be viewed later on the Graphical User Interface in all of the Available Languages (AL).

The present invention is now described with illustrations and non-limiting examples:

EXAMPLE 1:
A Person (Y) uses the System (1000) to interpret the sign language of Person (X). The System (1000) converts the gestures that Person (X) makes while communicating through sign language into a language spoken by Person (Y). Person (Y) selects the language into which he wants the gestures to be converted from the available options on the Graphical User Interface (120). The Trained Data Set (111(TD)) and the Processing Module (111(PM)) are stored in the Memory Unit (111). The Capturing Unit (100) captures the gestures of Person (X) and stores them in the Memory Unit (111). The Processing Module (111(PM)) is processed by the Processing Unit (112) to convert the stored Captured Gestures (CG) into the User Selected Language (UL). The Captured Gestures (CG) are converted into a set of strokes. The set of strokes are grouped together to eliminate Movement Epenthesis, and the gestures are converted into the User Selected Language (UL) in real time.

EXAMPLE 2:
Another implementation of the present system is interpreting the sign language and storing the Conversion Result (CR) for future reference or evidence. The System (1000) can be placed in an environment where two or more people are communicating in sign language. The Gesture Capturing Unit (100) captures the gestures of the Persons (P1-Pn), which are stored as Captured Gestures (CG) in the Memory Unit (111). The Processing Unit (112) processes the Processing Module (111(PM)) to interpret the Captured Gestures (CG). The interpretation of the sign language is stored in multiple languages for future reference or evidence.

Documents

Application Documents

# Name Date
1 Form 5 [22-08-2016(online)].pdf 2016-08-22
2 Form 3 [22-08-2016(online)].pdf_48.pdf 2016-08-22
3 Form 3 [22-08-2016(online)].pdf 2016-08-22
4 Form 20 [22-08-2016(online)].pdf 2016-08-22
5 Drawing [22-08-2016(online)].pdf 2016-08-22
6 Description(Complete) [22-08-2016(online)].pdf 2016-08-22
7 Assignment [22-09-2016(online)].pdf 2016-09-22
8 abstract 201641028504.jpg 2016-09-28
9 Correspondence by Agent_Form5_10-10-2016.pdf 2016-10-10
10 Other Patent Document [26-10-2016(online)].pdf 2016-10-26
11 Form 26 [26-10-2016(online)].pdf 2016-10-26
12 Correspondence by Agent_Form1 Power Of Attorney_07-11-2016.pdf 2016-11-07
13 Correspondence by office_Rule 6 (1A)_28-07-2017.pdf 2017-07-28
14 201641028504-PETITION UNDER RULE 137 [21-08-2017(online)].pdf 2017-08-21
15 Correspondence By Agent_Petition Under Rule 137_28-08-2017.pdf 2017-08-28
16 201641028504-FORM 18 [26-09-2019(online)].pdf 2019-09-26
17 201641028504-ABSTRACT [16-07-2021(online)].pdf 2021-07-16
18 201641028504-AMMENDED DOCUMENTS [16-07-2021(online)].pdf 2021-07-16
19 201641028504-CLAIMS [16-07-2021(online)].pdf 2021-07-16
20 201641028504-COMPLETE SPECIFICATION [16-07-2021(online)].pdf 2021-07-16
21 201641028504-DRAWING [16-07-2021(online)].pdf 2021-07-16
22 201641028504-FER_SER_REPLY [16-07-2021(online)].pdf 2021-07-16
23 201641028504-FORM 13 [16-07-2021(online)].pdf 2021-07-16
24 201641028504-MARKED COPIES OF AMENDEMENTS [16-07-2021(online)].pdf 2021-07-16
25 201641028504-FORM-26 [22-07-2021(online)].pdf 2021-07-22
26 201641028504-Correspondence_Power of Attorney_02-08-2021.pdf 2021-08-02
27 201641028504-FER.pdf 2021-10-17
28 201641028504-US(14)-HearingNotice-(HearingDate-15-09-2023).pdf 2023-08-09
29 201641028504-Correspondence to notify the Controller [12-09-2023(online)].pdf 2023-09-12
30 201641028504-Response to office action [14-09-2023(online)].pdf 2023-09-14
31 201641028504-Response to office action [29-09-2023(online)].pdf 2023-09-29
32 201641028504-AMMENDED DOCUMENTS [29-09-2023(online)].pdf 2023-09-29
33 201641028504-EDUCATIONAL INSTITUTION(S) [29-09-2023(online)].pdf 2023-09-29
34 201641028504-EVIDENCE FOR REGISTRATION UNDER SSI [29-09-2023(online)].pdf 2023-09-29
35 201641028504-FORM 13 [29-09-2023(online)].pdf 2023-09-29
36 201641028504-MARKED COPIES OF AMENDEMENTS [29-09-2023(online)].pdf 2023-09-29
37 201641028504-FORM-8 [11-10-2023(online)].pdf 2023-10-11
38 201641028504-PatentCertificate15-12-2023.pdf 2023-12-15
39 201641028504-IntimationOfGrant15-12-2023.pdf 2023-12-15

Search Strategy

1 2021-01-1814-49-56E_18-01-2021.pdf

ERegister / Renewals

3rd: 12 Mar 2024 (22/08/2018 to 22/08/2019)
4th: 12 Mar 2024 (22/08/2019 to 22/08/2020)
5th: 12 Mar 2024 (22/08/2020 to 22/08/2021)
6th: 12 Mar 2024 (22/08/2021 to 22/08/2022)
7th: 12 Mar 2024 (22/08/2022 to 22/08/2023)
8th: 12 Mar 2024 (22/08/2023 to 22/08/2024)
9th: 12 Mar 2024 (22/08/2024 to 22/08/2025)
10th: 14 Aug 2025 (22/08/2025 to 22/08/2026)