
A Multimodal System And Method Facilitating Gesture Creation Through Scalar And Vector Data

Abstract: A device and a method facilitating the generation of one or more intuitive gesture sets for the interpretation of a specific purpose are disclosed. Data is captured in a scalar and a vector form, which is then fused and stored. The intuitive gesture sets generated after the fusion are further used by one or more components/devices/modules for one or more specific purposes. Also incorporated is a system for playing a game. The system receives one or more actions in a scalar and a vector form from one or more users in order to map each action with at least one pre-stored gesture, identify the user in control amongst a plurality of users, and interpret that user's action for playing the game. In accordance with the interpretation, an act is generated by one or more components of the system for playing the game. FIGURE 1


Patent Information

Application #:
Filing Date: 26 March 2012
Publication Number: 47/2013
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2021-02-08
Renewal Date:

Applicants

TATA CONSULTANCY SERVICES LIMITED
NIRMAL BUILDING, 9TH FLOOR, NARIMAN POINT, MUMBAI 400021, MAHARASHTRA, INDIA.

Inventors

1. SINGH, ANIRUDDHA
TATA CONSULTANCY SERVICES, BENGAL INTELLIGENT PARK, BUILDING - D PLOT NO. - A2 M2 & N2 BLOCK-EP, SALT LAKE ELECTRONICS COMPLEX, SECTOR - V, KOLKATA- 700091, WEST BENGAL, INDIA
2. CHAKRAVARTY, KINGSHUK
TATA CONSULTANCY SERVICES, BENGAL INTELLIGENT PARK, BUILDING - D PLOT NO.- A2 M2 & N2 BLOCK-EP, SALT LAKE ELECTRONICS COMPLEX, SECTOR - V, KOLKATA- 700091, WEST BENGAL, INDIA
3. GUPTA, ROHIT KUMAR
TATA CONSULTANCY SERVICES, BENGAL INTELLIGENT PARK, BUILDING - D PLOT NO. - A2 M2 & N2 BLOCK-EP, SALT LAKE ELECTRONICS COMPLEX, SECTOR - V, KOLKATA- 700091, WEST BENGAL, INDIA
4. PAL, ARPAN
TATA CONSULTANCY SERVICES, BENGAL INTELLIGENT PARK, BUILDING - D PLOT NO. - A2 M2 & N2 BLOCK-EP, SALT LAKE ELECTRONICS COMPLEX, SECTOR - V, KOLKATA- 700091, WEST BENGAL, INDIA
5. BASU, ANUPAM
TATA CONSULTANCY SERVICES, PLOT A2 M2 & N2 BLOCK-GP, SALT LAKE ELECTRONICS COMPLEX, SECTOR - V, KOLKATA- 700091, WEST BENGAL, INDIA

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
A MULTIMODAL SYSTEM AND METHOD FACILITATING GESTURE CREATION THROUGH SCALAR AND VECTOR DATA
Applicant
TATA Consultancy Services Limited, a company incorporated in India under The Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.

BACKGROUND OF THE INVENTION
The basic objective of Human Computer Interaction (HCI) is to improve the interaction between users and computers by making computers more usable and receptive to users' needs. Furthermore, HCI seeks to design systems that decrease the hurdles between the human action instructing a specific task and the computer's understanding of that action. HCI using visual information as an input has wide applications, ranging from computer games to the control of robots. The main advantage of using visual information as an input is that it makes communication with the computer possible from a distance, without the need for any physical contact. Visual information comprising movement of skeleton points is chiefly beneficial when the environment surrounding the user is noisy and speech commands would prove less cognizable. On the other hand, speech commands are beneficial when the user is visually impaired or is incapable of offering hand gestures as an input to the computer.
At present, many systems and methods are available for enabling the interaction of a user with a computer or machine. Most of them use either visual gestures for controlling or interacting with the machine, or use the direction of sound to detect the user. Although these methods have made HCI easier, numerous challenges remain with current Human Computer Interaction methodologies. An individual mode of interaction using only visual or only speech input is less accurate. The existing vocabulary or dictionary of visual, sound, and speech gestures is inadequate. In addition, as the number of gestures increases, the classifier's capability to recognize them is reduced. Also, in skeleton-based tracking of human postures for gesture detection, it is difficult to track skeleton points when they come close to each other. Moreover, when there are multiple users, the computer may erroneously register a controlling user; thus, the recognition accuracy for the controlling user is reduced in a multi-user system. Furthermore, no work has been done on combining or fusing the directionality of sound or speech simultaneously with visual or touch gestures to create a multimodal gesture command.
Thus, there is a need for an intuitive gesture set combining the directionality of sound or speech simultaneously with visual or touch gestures, to achieve accuracy in the interaction between humans and computers and to provide a solution for recognizing the user in control amongst one or more users.
SUMMARY OF THE INVENTION
The present invention provides a device that facilitates the generation of one or more intuitive gesture sets to be interpreted further for a specific purpose. The device comprises a gesture creator including one or more sensors adapted to sense data in a scalar and a vector form from at least one user. The gesture creator further comprises a fusing module configured to process the data to fuse the vector and the scalar forms of data for generating one or more intuitive gesture sets, and a tangible media adapted to store the generated intuitive gesture sets in order to create a gesture library to be used for further interpretation by a reciprocating module. The reciprocating module uses the gesture library to map an observed gesture set against the gesture sets stored in the tangible media for further interpretation for the specific purpose.
The present invention also provides a computer implemented method that facilitates the generation of one or more intuitive gesture sets to be interpreted further for a specific purpose. The method comprises the steps of sensing data in a scalar and a vector form from at least one user, processing the sensed data in order to fuse the scalar and vector forms of the data to generate one or more intuitive gesture sets, and storing the generated intuitive gesture sets in order to create a gesture library to be used for further interpretation. The further interpretation is performed by using the stored intuitive gesture sets for mapping with a similar gesture set, to interpret it for a specific purpose.
The present invention further provides a system for playing a game. The system comprises a user interface configured to receive one or more actions in a scalar and a vector form from one or more users playing the game, and a processor configured to identify a correlation among the scalar and vector data with respect to spatial and temporal correspondence, in order to identify the user in control amongst the one or more users. The processor further comprises a gesture library configured to map the correlated scalar and vector forms of data with at least one pre-stored intuitive gesture set to identify a distinct interpretation for the action of the user in control, and a reciprocating module configured to generate an act in response to the distinct interpretation for the user in control, based upon the mapping, for playing the game.
OBJECTS OF THE INVENTION
It is the prime object of the invention to provide a system and method for generating intuitive gesture set.
It is another object of the invention to provide a system and method for generating the intuitive gesture set to be interpreted further for a specific purpose.
It is another object of the invention to provide a system and method for sensing data in scalar and vector form from one or more user.
It is another object of the invention to provide a system for playing a game.
DETAILED DESCRIPTION
Some embodiments of this invention, illustrating its features, will now be discussed:

The words "comprising", "having", "containing", and "including", and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
It must also be noted that, as used herein and in the appended claims, the singular forms "a", "an", and "the" include plural references unless the context clearly dictates otherwise. Although any systems, methods, apparatuses, and devices similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred systems and parts are now described. In the following description, reference is made to numerous embodiments for the purpose of explanation and understanding; the intent is not to limit the scope of the invention.
One or more components of the invention are described as modules for the understanding of the specification. For example, a module may include a self-contained component in a hardware circuit comprising logic gates, semiconductor devices, integrated circuits, or any other discrete components. A module may also be part of any software programme executed by a hardware entity, for example a processor. The implementation of a module as a software programme may include a set of logical instructions to be executed by the processor or any other hardware entity. Further, a module may be incorporated with the set of instructions or a programme by means of an interface.
The disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms.
The present invention relates to a device and a method for facilitating the generation of one or more intuitive gesture sets, which are further interpreted for a specific purpose. Data is captured in a scalar and a vector form, which is then fused and stored. The intuitive gesture sets generated after the fusion are further used by one or more components/devices/modules for one or more specific purposes.
In accordance with an embodiment, referring to figure 1, the device (100) comprises a gesture creator (102) for creating one or more intuitive gestures. The gesture creator (102) further comprises a fusing module (104) and a tangible media (106) for storing the intuitive gestures thus created. These intuitive gestures are further interpreted for one or more specific purposes.
Referring to figure 6, the gesture creator (102) includes one or more sensors (101) adapted to sense data in a plurality of forms. The sensors (101) may include, but are not limited to, one or more microphones or one or more cameras. The gesture creator (102) captures data from one or more users, who should be in range of the microphone and the camera. The sensors (101), including the camera and the microphone, capture data in both a scalar form and a vector form.
The scalar form of data may include data in visual form or data in acoustic form, and the vector form of data includes the direction of the acoustic data and the X,Y,Z cloud points of the visual data (as shown in step 302 of figure 3). The data in visual form includes, but is not limited to, gestures created due to relative motion between skeleton points, such as pointing left, clapping, and moving a hand for detecting the user (as shown in step 304 of figure 3). The data in acoustic form includes, but is not limited to, gestures created by normal speech and by sounds or audio that a human being or any source can generate easily, such as whistling, clapping, and a ringtone.
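For illustration, one multimodal observation of this kind can be represented as a simple record; the field names and values below are hypothetical and not taken from the specification:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GestureSample:
    """One multimodal observation: scalar (visual/acoustic) plus vector (direction, cloud points)."""
    visual: List[Tuple[float, float, float]]  # X,Y,Z cloud points of skeleton joints
    acoustic: List[float]                     # audio samples (scalar form)
    sound_azimuth: float                      # direction of the acoustic data, radians
    sound_elevation: float                    # radians

# A hypothetical sample: two skeleton points and a short audio snippet.
sample = GestureSample(
    visual=[(0.1, 1.6, 2.0), (0.3, 1.2, 2.1)],
    acoustic=[0.01, -0.02, 0.03],
    sound_azimuth=0.35,
    sound_elevation=0.10,
)
```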
As illustrated in figure 6, the visual and sound gestures are captured by the plurality of microphones and cameras and are further transmitted to a processor for behavior analysis of one or more users.

The data in the acoustic form captured by the microphone undergoes filtering through noise and echo-cancellation (as shown in step 318 of figure 3). Figure 4 illustrates the technique of getting the sound direction by calculating the azimuth angle and elevation angle and thereby identifying the "active user" from the background. The active user is the user in control of the device. Referring to figure 4, the azimuth is the angle formed between reference directions (North) and a line from the microphone array to a point of interest projected on the same plane as the reference direction. The elevation is the angle between the horizontal plane and the reference beam joining the microphone array and the point of interest.
If the microphone array is arranged in a horizontal fashion then one dimensional function of sound power versus azimuth is obtained (as shown in fig 4a). If two orthogonal pairs of microphones are used then sound power along with elevation is obtained (as shown in fig 4b).
A one-dimensional (1D) sound image is registered with the camera image (both sharing a common center), where:

m = pixel column number
fov = camera field of view in radians
M = image width in pixels

The sound angle θm is obtained by:

θm = fov × (m/M − 1/2)

This sound angle is then used to calculate the corresponding interaural time delay dm as:

dm = (Fsamp × Dmikes × sin θm) / Vsound

where Fsamp is the sampling frequency (44.1 kHz), Dmikes is the spacing between the microphones, and Vsound is the speed of sound (344 m/s).
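The sound-angle mapping and the interaural time delay can be sketched as follows. The exact form θm = fov × (m/M − 1/2) is an assumption consistent with the variables defined in the text (a centered linear mapping from pixel column to angle), and the microphone spacing used below is a hypothetical value:

```python
import math

def sound_angle(m: int, M: int, fov: float) -> float:
    """Angle (radians) of pixel column m in an M-pixel-wide image with field of view fov.
    Assumes the 1D sound image and camera image share a common center."""
    return fov * (m / M - 0.5)

def interaural_delay(theta: float, f_samp: float = 44100.0,
                     d_mikes: float = 0.2, v_sound: float = 344.0) -> float:
    """Interaural time delay in samples for sound angle theta.
    f_samp = 44.1 kHz and v_sound = 344 m/s are from the text; d_mikes is hypothetical."""
    return f_samp * d_mikes * math.sin(theta) / v_sound

# A column at the image center maps to angle 0 and therefore zero delay.
center_angle = sound_angle(320, 640, math.radians(60))
```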
The fusing module (104) is configured to process the data to fuse the vector and the scalar form of data for generating one or more intuitive gesture sets. The scalar and vector form of data is fused in accordance with Bayesian rules.
A Bayesian multimodal fusion classifier is incorporated in the fusing module (104) to classify between the "active user" and the "background" using the skeleton and sound features described above. The decision rule for a two-class minimum Bayes risk classifier is expressed in terms of the probability density likelihood ratio as:

Λ(y) = p(y | w1) / p(y | w2)

where p(y | w1) and p(y | w2) are the probability densities of the measurement vector y for classes w1 and w2, respectively.

Sound direction (x) and skeleton direction (s) are assumed to be statistically independent features, allowing their joint probability density to be expressed as the product of two independent densities. For the two classes, "active user" (A) and "background" (B), these density functions can be written as:

p(x, s | A) = p(x | A) p(s | A)
p(x, s | B) = p(x | B) p(s | B)

Hence the likelihood ratio can be expressed as:

Λ = [p(x | A) p(s | A)] / [p(x | B) p(s | B)]

where the partial probability densities can be approximated by a Gaussian or a uniform distribution, as required in different situations.
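A minimal sketch of this two-class fusion rule, taking the Gaussian option for the class-conditional densities of the sound direction x and skeleton direction s; all means and standard deviations below are hypothetical:

```python
import math

def gaussian_pdf(v: float, mean: float, std: float) -> float:
    """Gaussian probability density."""
    return math.exp(-0.5 * ((v - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def likelihood_ratio(x: float, s: float, params: dict) -> float:
    """p(x|A)p(s|A) / p(x|B)p(s|B): x and s are treated as statistically
    independent, as assumed in the specification."""
    num = gaussian_pdf(x, *params["A_x"]) * gaussian_pdf(s, *params["A_s"])
    den = gaussian_pdf(x, *params["B_x"]) * gaussian_pdf(s, *params["B_s"])
    return num / den

# Hypothetical densities: the active user sits near 0.3 rad, background near -0.5 rad.
params = {"A_x": (0.3, 0.1), "A_s": (0.3, 0.1),
          "B_x": (-0.5, 0.4), "B_s": (-0.5, 0.4)}

# A measurement near the active user's direction yields a ratio > 1, so class A wins.
label = "active" if likelihood_ratio(0.28, 0.31, params) > 1.0 else "background"
```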
The gesture creator (102) further comprises a tangible media (106) adapted to store the generated intuitive gesture sets in order to create a gesture library (206) (as shown in figure 2) to be used for further interpretation by a reciprocating module (108). The tangible media (106) includes, but is not limited to, a storage device or a hard disk.
The reciprocating module (108) uses the gesture library (206) to map an observed gesture set against the gesture sets stored in the tangible media (106) for further interpretation. This interpretation is carried out for specific purposes, which include, but are not limited to, interpreting gesture sets for playing a game, for controlling a vehicle, or for operating any electric device.
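The mapping step performed by the reciprocating module can be sketched as a lookup against the stored library; the keys and interpretations below are hypothetical illustrations, not gestures from the specification:

```python
# Hypothetical gesture library: fused (visual, acoustic, direction) tuples -> interpretation.
GESTURE_LIBRARY = {
    ("clap_motion", "clap_sound", "front"): "start_game",
    ("point_left", "whistle", "front"): "steer_left",
}

def interpret_gesture(fused_gesture):
    """Map an observed fused gesture against the stored gesture sets,
    as the reciprocating module does; unmatched gestures yield None."""
    return GESTURE_LIBRARY.get(fused_gesture)
```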
The present invention also relates to a system for playing a game. The system receives one or more actions in a scalar and a vector form from one or more users in order to map each action with at least one pre-stored gesture, identify the user in control amongst a plurality of users, and interpret that user's action for playing the game. In accordance with the interpretation, an act is generated by one or more components of the system for playing the game.
In accordance with an embodiment, referring to figure 2, the system (200) comprises a user interface (202) configured to receive one or more actions in a scalar and a vector form from one or more users playing the game. The scalar form of data includes data in visual or acoustic form, or a combination thereof, and the vector form of data includes the direction of the acoustic data and the X,Y,Z cloud points of the visual data, as described above.
The system further comprises a processor (204) configured to identify a correlation among the scalar and vector data with respect to spatial and temporal correspondence, in order to identify the user in control amongst the one or more users. The data in acoustic form is detected using the microphone array (as shown in step 316 of figure 3). The processor (204) detects the key command word (the gesture in acoustic form) (as shown in step 320 of figure 3) and identifies whether the detected acoustic gesture is a registered command gesture (as shown in step 322). Further, the acoustic data captured by the microphone undergoes filtering through noise and echo cancellation (as shown in step 318). The direction of the acoustic data is obtained by the azimuth and elevation method described above.
In addition, the processor (204) detects the user's skeleton (data in visual form) using the camera (as shown in step 306 of figure 3). The direction of the user's skeleton head joint is determined using the azimuth and elevation calculation described above. The direction of the acoustic data and the direction of the skeleton head joints are matched/fused (as shown in step 308 of figure 3) using the adapted Bayes algorithm described above. If the direction of the acoustic data and the direction of a skeleton head joint correlate with respect to spatial and temporal correspondence (as shown in step 310), then that head joint is recognized as the active user (as shown in step 312 of figure 3).
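This spatial-and-temporal matching of the sound direction against the tracked head-joint directions can be sketched as a simple threshold test; the tolerance values here are hypothetical:

```python
def find_active_user(sound_dir, sound_time, heads, max_angle=0.1, max_dt=0.2):
    """Return the index of the head joint whose direction and timestamp best
    correlate with the detected sound, or None if no head is close enough.
    heads: list of (direction_radians, timestamp_seconds), one per tracked user."""
    best, best_err = None, None
    for i, (head_dir, head_time) in enumerate(heads):
        if abs(head_dir - sound_dir) <= max_angle and abs(head_time - sound_time) <= max_dt:
            err = abs(head_dir - sound_dir)
            if best_err is None or err < best_err:
                best, best_err = i, err
    return best

# Two tracked users; the sound arrives from the second user's direction.
users = [(-0.4, 1.00), (0.32, 1.05)]
```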
The processor (204) further comprises a gesture library (206), which stores the predetermined intuitive gesture sets (the creation of such intuitive gesture sets is described above). The gesture library (206) is configured to map the correlated scalar and vector forms of data with at least one pre-stored intuitive gesture set to identify a distinct interpretation for the action of the user in control.
The processor (204) further comprises a reciprocating module (108), which is configured to generate an act in response to the distinct interpretation for the user in control, based upon the mapping with the intuitive gestures stored in the gesture library (206), for playing the game. The act in response further includes one or more actions for playing the game.
In accordance with a preferred embodiment, the system further comprises an operating device (206) adapted to display the act in response to the distinct interpretation for the action of the user in control. The operating device (206) includes, but is not limited to, a screen or a monitor.
By way of specific example, a hypothetical image of the user in control may be displayed on a display and will perform the same action as performed by the user. For example, if the user in control (as identified from the direction of his voice) says "clap" and also performs a clapping action with his hands, then the hypothetical image will also clap in a similar manner.
BEST MODE/EXAMPLE FOR WORKING OF THE INVENTION
The device and method for facilitating the generation of one or more intuitive gesture sets to be interpreted further for a specific purpose may be illustrated by the working examples stated in the following paragraphs; the process is not restricted to the said examples only:
Example 1: Let us consider that a user is using a multi-touch enabled system, where multi-touch is used for 2D zooming. Multi-touch is an input in a visual form. The sound "Zoooooo..." can be used as an input in an acoustic form. By combining the input in visual form (multi-touch) with the input in acoustic form (the sound "Zoooooo..."), a 3D zoom (depth zoom) can be produced for any multi-touch enabled system. Thus, by combining a gesture in visual form with a gesture in acoustic form, an intuitive gesture set can be created.
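Example 1 can be sketched as a small fusion rule: a pinch gesture alone produces a 2D zoom, while the same pinch accompanied by the "Zoooooo..." sound produces a depth (3D) zoom. The gesture labels below are hypothetical names chosen for illustration:

```python
from typing import Optional

def interpret_zoom(touch_gesture: str, sound: Optional[str]) -> str:
    """Fuse a multi-touch gesture (visual form) with an optional sound gesture
    (acoustic form) into a single interpreted command."""
    if touch_gesture == "pinch":
        return "3d_zoom" if sound == "zoooooo" else "2d_zoom"
    return "unknown"
```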
Example 2: As illustrated in figure 5, gesture 1 and gesture 2 explain how the same audio gesture can be combined with the same visual gesture to mean different gestures. Gesture 1 exhibits a sound made by the right hand, while gesture 2 exhibits a sound made by the left hand. Though the visual gesture in both cases is the same, by detecting the direction of the sound and combining the gesture in acoustic form with the gesture in visual form, two different gestures can be created. These two different gestures can be used to interpret two different actions made by the user.
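Example 2 amounts to keying the interpretation on the sound's direction as well as the visual gesture; the gesture names and actions below are hypothetical placeholders for the gestures of figure 5:

```python
# Map (visual gesture, side the sound came from) -> interpreted action.
GESTURE_TABLE = {
    ("hand_raise", "right"): "action_1",
    ("hand_raise", "left"):  "action_2",
}

def interpret(visual: str, sound_azimuth: float) -> str:
    """The same visual gesture maps to different actions depending on
    whether the sound arrived from the left or the right."""
    side = "right" if sound_azimuth >= 0 else "left"
    return GESTURE_TABLE.get((visual, side), "unrecognized")
```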
Example 3: Application of the sound and visual gesture
Referring to figure 7, in a multiplayer indoor game, the combination of 3D visual and 3D sound is used to determine an intuitive gesture, which is further processed to derive the meaning of the gesture. The combination of 3D visual and 3D sound is considered a 2-tuple, and multiple such tuples over time are used to analyze the complete meaning of the sequence of tuples. Let us illustrate this with the indoor game of squash.
In the squash game, two players are involved, and a tennis ball is hit alternately by each player using a squash racket. The arrangement is shown in figure 7, where the sensing device is placed behind the glass wall in such a way that the players are not distracted. The sensing device continuously captures the skeleton points of both players and also the sound direction.
The instances of sounds are as follows:
a. Ball hitting the glass wall
b. Ball hitting the side wall
c. Ball hitting the racket
d. Ball hitting the floor
e. Players' shoes hitting the floor
f. Other sounds.
The above sounds can be modeled using HMMs and used to detect the type of sound during the game.
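The HMM-based sound typing can be sketched with a discrete forward algorithm: one small HMM per sound class, with the class whose model gives the highest likelihood for the observed feature sequence winning. The specification does not fix the model form, so the two-state discrete models and all parameters below are hypothetical simplifications:

```python
def forward_likelihood(obs, start, trans, emit):
    """Likelihood of a discrete observation sequence under an HMM (forward algorithm)."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return sum(alpha)

# Two hypothetical 2-state models over a binary feature symbol:
# "racket" (ball hitting the racket) favors symbol 0, "wall" (ball hitting the wall) favors 1.
MODELS = {
    "racket": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.7, 0.3]]),
    "wall":   ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.2, 0.8], [0.3, 0.7]]),
}

def classify(obs):
    """Pick the sound type whose HMM best explains the observation sequence."""
    return max(MODELS, key=lambda k: forward_likelihood(obs, *MODELS[k]))
```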
Thus the feature vector is given below:
F = {skeleton points (sk_i) for all i and for both players, sound type, sound direction}
Based on the above features we can derive the following information:
a. exact time instances of different sounds
b. type of sound (contact point of the ball with certain object)
c. location of the skeleton points for both the players

Given the above information, we would be able to derive the following:
a. the active player (hitting the ball) and the passive player;
b. the predicted direction of the ball, assuming the ball travels in a straight line; any deviation can be derived from the actual contact information (sound location), and hence the swing of the ball can be derived;
c. the change in the skeleton points over time between two sound instances, which allows analyzing how a player reacts to the situation when active and how the player takes position when passive.
Thus, the fitness level and the skill of the players can be analyzed in real-time during the progress of the match.
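The straight-line prediction in item (b) above can be sketched from two successive sound-contact locations; the contact coordinates used here are hypothetical:

```python
def predict_direction(p_prev, p_curr):
    """Unit direction vector of the ball, assuming straight-line travel
    between two successive contact points located by sound direction."""
    d = [c - p for p, c in zip(p_prev, p_curr)]
    norm = sum(v * v for v in d) ** 0.5
    return [v / norm for v in d]

# Racket contact at (0, 1, 4), then front-wall contact at (0, 1, 0):
# the ball travels toward -z.
direction = predict_direction((0.0, 1.0, 4.0), (0.0, 1.0, 0.0))
```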
ADVANTAGES OF THE INVENTION
1. The combination of gesture in acoustic form with the gesture in visual form creates a new gesture set thus enhancing the already existing library of the gesture set.
2. The detection of the direction of the sound source helps in accurately identifying the user in control of the device.
3. The difficulty in tracking the skeleton points of human postures when they come close to each other is overcome by combining the directionality of the gesture in the acoustic form with the gesture in the visual form.
4. The gesture set including the combination of visual and sound gesture aids the classifier in classifying amongst the gestures when the number of gestures increases.

WE CLAIM:
1. A device that facilitates generation of one or more intuitive gesture sets to be interpreted further for a specific purpose, the device comprising: a gesture creator including one or more sensors adapted to sense data in a scalar and a vector form from at least one user, the gesture creator further comprising:
a fusing module configured to process the data to fuse the vector and the scalar form of data for generating one or more intuitive gesture sets; and
a tangible media adapted to store the generated intuitive gesture sets
in order to create a gesture library to be used for further interpretation
by a reciprocating module;
such that the reciprocating module further uses the gesture library for
mapping a similar gesture set with the gesture set stored in the tangible media
for further interpretation for the specific purpose.
2. The device as claimed in claim 1, wherein the specific purpose includes, but is not limited to, interpretation of gesture sets for playing a game, for controlling a vehicle, or for operating any electronic device.
3. The device as claimed in claim 1, wherein the sensors may include, but are not limited to, one or more microphones, one or more cameras, or a combination thereof.
4. The device as claimed in claim 1, wherein the scalar form of data includes data in visual or acoustic form or a combination thereof and vector form of data includes the direction of the acoustic form of data and X,Y,Z cloud points of the visual data.
5. The device as claimed in claim 4, wherein the data in visual form includes but is not limited to gesture created due to relative motion between skeleton points.

6. The device as claimed in claim 1, wherein the fusing module fuses the scalar and vector form of data by using an algorithm which may include but is not limited to Bayesian Rules.
7. The device as claimed in claim 1, wherein the tangible media includes but is not limited to a storage device or a hard disk.
8. A computer implemented method that facilitates generation of one or more intuitive gesture sets to be interpreted further for a specific purpose, the method comprising steps of:
sensing data in a scalar and a vector form from at least one user;
processing the sensed data in order to fuse the scalar and vector form of the
data to generate one or more intuitive gesture sets; and
storing the generated intuitive gesture sets in order to create a gesture library
to be used for further interpretation;
such that the further interpretation is performed by using the stored intuitive
gesture sets for mapping with a similar gesture set to interpret them for a
specific purpose.
9. The computer implemented method as claimed in claim 8, wherein the specific purpose includes, but is not limited to, interpretation of gesture sets for playing a game, for controlling a vehicle, or for operating any electronic device.
10. The computer implemented method as claimed in claim 8, wherein the scalar and vector form of data is fused by using an algorithm which may include but is not limited to Bayesian Rules.
11. The computer implemented method as claimed in claim 8, wherein the sensing further comprises of sensing data in visual or acoustic form or a combination thereof and sensing the direction of the acoustic form of data.
12. A system for playing a game, the system comprising:
a user interface configured to receive one or more actions in a scalar and a vector form from one or more users playing the game;

a processor configured to identify a correlation among each of the scalar and vector data with respect to spatial and temporal correspondence to identify a user in control amongst the one or more users, the processor further comprising:
a gesture library configured to map the correlated scalar and vector form of data with at least one pre stored intuitive gesture set to identify a distinct interpretation for the action of the user in control; and
a reciprocating module configured to generate an act in response to the distinct interpretation for the user in control based upon the mapping for playing the game.
13. The system as claimed in claim 12, wherein the system further comprises of an operating device adapted to display the act in response to the distinct interpretation for the action of user in control.
14. The system as claimed in claim 13, wherein the operating device includes but is not limited to a screen or a monitor.
15. The system as claimed in claim 12, wherein the scalar form of data includes data in visual or acoustic form or a combination thereof and the vector form of data includes the direction of the acoustic form of data and the X,Y,Z cloud points of the visual data.
16. The system as claimed in claim 12, wherein the data in visual form includes but is not limited to gesture created due to relative motion between skeleton points.
17. The system as claimed in claim 12, wherein the act in response further includes one or more actions for playing the game.

Documents

Application Documents

# Name Date
1 805-MUM-2012-RELEVANT DOCUMENTS [28-09-2023(online)].pdf 2023-09-28
2 Form 3 [22-12-2016(online)].pdf 2016-12-22
3 805-MUM-2012-OTHERS [20-09-2017(online)].pdf 2017-09-20
4 805-MUM-2012-RELEVANT DOCUMENTS [30-09-2022(online)].pdf 2022-09-30
5 805-MUM-2012-US(14)-HearingNotice-(HearingDate-20-01-2021).pdf 2021-10-03
6 805-MUM-2012-FER_SER_REPLY [20-09-2017(online)].pdf 2017-09-20
7 805-MUM-2012-IntimationOfGrant08-02-2021.pdf 2021-02-08
8 805-MUM-2012-DRAWING [20-09-2017(online)].pdf 2017-09-20
9 805-MUM-2012-PatentCertificate08-02-2021.pdf 2021-02-08
10 805-MUM-2012-COMPLETE SPECIFICATION [20-09-2017(online)].pdf 2017-09-20
11 805-MUM-2012-Written submissions and relevant documents [03-02-2021(online)].pdf 2021-02-03
12 805-MUM-2012-CLAIMS [20-09-2017(online)].pdf 2017-09-20
13 ABSTRACT1.jpg 2018-08-11
14 805-MUM-2012-Correspondence to notify the Controller [18-01-2021(online)].pdf 2021-01-18
15 805-MUM-2012-FORM-26 [18-01-2021(online)].pdf 2021-01-18
16 805-MUM-2012-FORM 3.pdf 2018-08-11
17 805-MUM-2012-FORM 26(9-4-2012).pdf 2018-08-11
18 805-MUM-2012-Proof of Right [16-10-2020(online)].pdf 2020-10-16
19 805-MUM-2012-FORM 2.pdf 2018-08-11
20 805-MUM-2012-Written submissions and relevant documents [17-09-2020(online)].pdf 2020-09-17
21 805-MUM-2012-FORM 2(TITLE PAGE).pdf 2018-08-11
22 805-MUM-2012-PETITION UNDER RULE 137 [06-09-2020(online)].pdf 2020-09-06
23 805-MUM-2012-FORM 18.pdf 2018-08-11
24 805-MUM-2012-RELEVANT DOCUMENTS [06-09-2020(online)].pdf 2020-09-06
25 805-MUM-2012-FORM 1.pdf 2018-08-11
26 805-MUM-2012-Response to office action [03-09-2020(online)].pdf 2020-09-03
27 805-MUM-2012-Correspondence to notify the Controller [01-09-2020(online)].pdf 2020-09-01
28 805-MUM-2012-FORM 1(18-7-2012).pdf 2018-08-11
29 805-MUM-2012-FER.pdf 2018-08-11
30 805-MUM-2012-FORM-26 [01-09-2020(online)].pdf 2020-09-01
31 805-MUM-2012-DRAWING.pdf 2018-08-11
32 805-MUM-2012-Response to office action [01-09-2020(online)].pdf 2020-09-01
33 805-MUM-2012-US(14)-HearingNotice-(HearingDate-03-09-2020).pdf 2020-08-08
34 805-MUM-2012-DESCRIPTION(COMPLETE).pdf 2018-08-11
35 805-MUM-2012-CORRESPONDENCE.pdf 2018-08-11
36 805-MUM-2012-FORM 13 [30-01-2020(online)].pdf 2020-01-30
37 805-MUM-2012-CORRESPONDENCE(9-4-2012).pdf 2018-08-11
38 805-MUM-2012-PETITION UNDER RULE 137 [30-01-2020(online)].pdf 2020-01-30
39 805-MUM-2012-CORRESPONDENCE(18-7-2012).pdf 2018-08-11
40 805-MUM-2012-RELEVANT DOCUMENTS [30-01-2020(online)].pdf 2020-01-30
41 805-MUM-2012-CLAIMS.pdf 2018-08-11
42 805-MUM-2012-Written submissions and relevant documents [30-01-2020(online)].pdf 2020-01-30
43 805-MUM-2012-ABSTRACT.pdf 2018-08-11
44 805-MUM-2012-Correspondence to notify the Controller (Mandatory) [10-01-2020(online)].pdf 2020-01-10
45 805-MUM-2012-FORM-26 [10-01-2020(online)].pdf 2020-01-10
46 805-MUM-2012-HearingNoticeLetter-(DateOfHearing-15-01-2020).pdf 2019-12-23

Search Strategy

1 SearchQueries_15-03-2017.pdf

ERegister / Renewals

3rd: 25 Mar 2021 (26/03/2014 to 26/03/2015)
4th: 25 Mar 2021 (26/03/2015 to 26/03/2016)
5th: 25 Mar 2021 (26/03/2016 to 26/03/2017)
6th: 25 Mar 2021 (26/03/2017 to 26/03/2018)
7th: 25 Mar 2021 (26/03/2018 to 26/03/2019)
8th: 25 Mar 2021 (26/03/2019 to 26/03/2020)
9th: 25 Mar 2021 (26/03/2020 to 26/03/2021)
10th: 25 Mar 2021 (26/03/2021 to 26/03/2022)
11th: 23 Feb 2022 (26/03/2022 to 26/03/2023)
12th: 24 Mar 2023 (26/03/2023 to 26/03/2024)
13th: 25 Mar 2024 (26/03/2024 to 26/03/2025)
14th: 06 Mar 2025 (26/03/2025 to 26/03/2026)