
A System And Method For Classification Of Moving Object During Video Surveillance

Abstract: A system for classifying moving objects during video-based surveillance, comprising the steps of: capturing a silhouette image of a moving object, resizing the captured image, computing the average height-to-width ratio and center of gravity of the object in the resized image, dividing the resized image, comparing the average height to the average width of the object, and further comparing the variance of the center of gravity with a predetermined threshold value to classify the object in the captured silhouette into predetermined classes.


Patent Information

Application #: 2162/MUM/2010
Filing Date: 29 July 2010
Publication Number: 02/2013
Publication Type: INA
Invention Field: MECHANICAL ENGINEERING
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2018-07-31
Renewal Date:

Applicants

TATA CONSULTANCY SERVICES LIMITED
NIRMAL BUILDING, 9TH FLOOR, NARIMAN POINT, MUMBAI 400021, MAHARASHTRA, INDIA.
ANNA UNIVERSITY
ANNA UNIVERSITY MADRAS INSTITUTE OF TECHNOLOGY CAMPUS OF ANNA UNIVERSITY, CHROMPET CHENNAI 600044 INDIA

Inventors

1. SETHAN BEHRAM
TATA CONSULTANCY SERVICES MAKER TOWERS 'E' BLOCK, 11TH FLOOR, CUFFE PARADE, COLABA MUMBAI-400 005, MAHARASHTRA INDIA
2. JOHN MALA
ANNA UNIVERSITY MADRAS INSTITUTE OF TECHNOLOGY CAMPUS OF ANNA UNIVERSITY, CHROMPET CHENNAI 600044 INDIA
3. PALANIAPPAN PRASATH
ANNA UNIVERSITY MADRAS INSTITUTE OF TECHNOLOGY CAMPUS OF ANNA UNIVERSITY, CHROMPET CHENNAI 600044 INDIA
4. GANAPATHI SUMITHRA
ANNA UNIVERSITY MADRAS INSTITUTE OF TECHNOLOGY CAMPUS OF ANNA UNIVERSITY, CHROMPET CHENNAI 600044 INDIA

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention: A SYSTEM AND METHOD FOR CLASSIFICATION OF MOVING OBJECT DURING VIDEO
SURVEILLANCE
Applicants: TATA Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
and
Anna University
An Indian university having address at
Madras Institute of Technology campus of Anna University,
Chrompet, Chennai 600044, India
The following specification particularly describes the invention and the manner in which it is to be performed.

FIELD OF THE INVENTION
The present invention relates to a system and method for video surveillance. More particularly, the invention relates to a system and method for classification of a moving object during video surveillance.
BACKGROUND OF THE INVENTION
It is quite strenuous to have manual surveillance round the clock in sensitive areas. Even with video cameras fitted in most of the places where security is a concern, the volume of data generated by video is so enormous that it might demand prohibitively large data storage. There is an ever-growing need for effective video surveillance. The word surveillance may be applied to observation from a distance by means of electronic equipment such as CCTV cameras. Surveillance is useful to private and government security agencies to maintain social control, recognize and monitor threats, and prevent or investigate trespassing and criminal activities. Video surveillance is used practically everywhere, including sensitive areas such as airports, nuclear power plants, laboratories and banks. It is also used at traffic signals, streets, doors etc. Organizations responsible for conducting such surveillance typically deploy a plurality of sensors (e.g., Closed Circuit Television (CCTV) and infrared cameras, radars, etc.) to ensure security and wide-area awareness.
In the current state of the art, the classification of the moving object during video surveillance into predefined classes like human, animal (cattle) and vehicle is done by different processes. Some of the inventions which try to address moving object classification in surveillance are:
US patent 7639840, granted to Hanna et al., teaches a method and apparatus for video surveillance. A sequence of scene imagery representing a field of view is received. One or more moving objects are identified within the sequence of scene imagery and then classified in accordance with one or more extracted spatio-temporal features by using motion detection masks. This classification may then be applied to determine whether the moving object and/or its behavior fit one or more known events or behaviors that are causes for alarm.
In this prior art, spatio-temporal signatures and feature vectors describing a moving object are used to classify the moving object. All frames containing the moving object are stored and analyzed so as to come to a decision regarding the classification of the moving object. This system requires large memory storage and heavy computation for the calculation of spatio-temporal signatures and feature vectors of the moving object as well as the background.
"Robust Real-Time Periodic Motion Detection, Analysis and Applications" by Ross Cutler and Larry S. Davis (IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 22, Issue 8, August 2000, pages 781-796) teaches a method for moving object classification. The method classifies humans, dogs and others (vehicles). In the method described therein, the self-similarity of the object as it evolves in time is computed. Self-similarity is periodic, and time-frequency analysis is used to detect and characterize periodic motion. Also, the inherent 2D lattice structure of the similarity matrices (absolute correlation) is used in characterizing the periodicity of the movements. The periodicity pattern is used to distinguish between the three classes of interest.
"Algorithms for Cooperative Multisensor Surveillance" by Robert T. Collins et al. (Proceedings of the IEEE, Volume 89, Issue 10, October 2001, pages 1456-1477) teaches that the Video Surveillance and Monitoring (VSAM) team at Carnegie Mellon University (CMU) has developed an end-to-end, multi-camera surveillance system that allows a single human operator to monitor activities in a cluttered environment using a distributed network of active video sensors. It automatically collects and disseminates real-time information to improve the situational awareness of security providers and decision makers.
In this prior art, object detection is done by layered adaptive background subtraction and the classification is done by a neural network classifier, which is again computationally expensive.
The techniques used in the prior art for classifying the moving object into predetermined classes, such as temporal differencing, background subtraction, optical flow, motion detection masks, periodic motion analysis and image correlation matching, are computationally expensive and require more memory space for storing and analyzing the sequence of frames in the video containing the object of interest.
None of the prior art references discussed above proposes a system and method for classification of a moving object in video surveillance into predefined classes that is computationally inexpensive and hence more economical, that uses simple logic for discriminating the object during video surveillance, and that requires less memory storage space, both by avoiding the storage of frames and while computing the discriminatory feature.

Hence it is evident that there is a need for a system and method for classification of a moving object in video surveillance into predefined classes which:
• is computationally inexpensive;
• uses simple logic for discriminating objects during video surveillance;
• utilizes less memory storage space by avoiding storage of frames containing the objects of interest; and
• utilizes less memory storage space while analyzing frames for discrimination of the object in the video.
OBJECTIVES OF THE INVENTION
The principal object of the present invention is to propose a system and method that can classify any moving object during video surveillance into predefined classes.
Another significant object of the invention is to use simpler logic and a computationally economical method for the classification of the moving object during video surveillance.
Another object of the invention is to store only the set of centers of gravity computed from the sequence of frames for the variance computation, thereby requiring less memory space.
Yet another object of the invention is to provide a system and method for classification of a moving object in video surveillance which is computationally inexpensive.
SUMMARY OF THE INVENTION
Before the present methods, systems, and hardware enablement are described, it is to be understood that this invention is not limited to the particular systems and methodologies described, as there can be multiple possible embodiments of the present invention which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present invention, which will be limited only by the appended claims.
The present invention provides a system for classifying moving objects during video based surveillance comprising:

a) at least one video capturing means configured to capture a silhouette image of a moving object falling within the operating range of the video capturing means;
b) a means for storing the program instructions that are configured to cause the processor:
i) to resize the captured silhouette image, wherein the resizing scale factor of the silhouette image is calculated by using the dimensions of the upper half of the captured silhouette image;
ii) to compute an average height to width ratio and center of gravity of the object in the resized silhouette image, wherein the center of gravity is calculated by using only the upper half of the object;
iii) to divide the lower half of the captured image into two parts by a vertical line through the center of gravity, analyze one of the lower halves and calculate the variance of the center of gravity;
iv) to compare the average height to the average width of the object and further compare the variance of the center of gravity with the predetermined threshold value;
v) to classify the object in the captured silhouette into predetermined classes, wherein the classification is done on the basis of the calculated values of average height, average width and variance of the center of gravity.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing summary, as well as the following detailed description of preferred embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there are shown in the drawings example constructions of the invention; however, the invention is not limited to the specific methods and system disclosed. In the drawings:
Figure 1 shows an arrangement of video cameras and their operating range.
Figure 2 illustrates the lower left half of a captured silhouette image.
Figure 3 shows a flowchart of the system and method for classification of a moving object during video surveillance into human, cattle and vehicle.
DETAILED DESCRIPTION OF THE INVENTION
Some embodiments of this invention, illustrating all its features, will now be discussed in detail.

The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
It must also be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred systems and methods are now described.
The disclosed embodiment is merely exemplary of the invention, which may be embodied in various forms.
Silhouette: a side view of an object or scene consisting of the outline and a featureless interior, with the silhouetted object typically appearing dark against a lighter background.
Center of gravity: The average co-ordinate of the region of interest or the binary image under consideration in the context of image processing.
Surveillance: is the monitoring of the behavior, activities, or other changing information, usually of people and often in a surreptitious manner. It most usually refers to observation of individuals or groups by government organizations, but disease surveillance, for example, is monitoring the progress of a disease in a community.
Video capturing means: is a means for capturing video. It can be a video camera, closed-circuit television (CCTV) camera or IP camera.
Processing system: the system in accordance with the present invention, wherein each camera has an associated processing system for analysis of the captured image and for classification of the moving object coming into the operating range of the video surveillance.
The present invention provides a system for classifying moving objects during video based surveillance comprising:

a) at least one video capturing means configured to capture a silhouette image of a moving object falling within the operating range of the video capturing means;
b) a means for storing the program instructions that are configured to cause the processor:
i) to resize the captured silhouette image, wherein the resizing scale factor of the silhouette image is calculated by using the dimensions of the upper half of the captured silhouette image;
ii) to compute an average height to width ratio and center of gravity of the object in the resized silhouette image, wherein the center of gravity is calculated by using only the upper half of the object;
iii) to divide the lower half of the captured image into two parts by a vertical line through the center of gravity, analyze one of the lower halves and calculate the variance of the center of gravity;
iv) to compare the average height to the average width of the object and further compare the variance of the center of gravity with the predetermined threshold value;
v) to classify the object in the captured silhouette into predetermined classes, wherein the classification is done on the basis of the calculated values of average height, average width and variance of the center of gravity.
In accordance with an exemplary embodiment, at least four video capturing means, hereinafter called cameras, are placed on the circumference of the area to be covered, as shown in Figure 1, so as to ensure that at least one of the four cameras captures the silhouette of a moving object.
A processing system (not shown) is attached to each camera, so as to enable each camera to analyze the captured silhouette image and to classify the moving object falling in the operating range of the video surveillance as shown in Figure 1, wherein each camera is capable of processing the captured frames independently and aids in the final decision regarding classification of the moving object into predetermined classes like human, cattle and vehicle. Further, the system can optionally sound an alarm as and when an object of interest, that is a human, vehicle or cattle, comes into the field of surveillance.
The processing system can be a specific purpose computer in which a means for storing the program instructions is configured to cause the processor to perform tasks such as:
• resizing the captured image;
• computing the average height to width ratio and center of gravity of the object in the resized image;
• comparing the average height to the average width of the object and further comparing the variance of the center of gravity with the predetermined threshold value;
• classifying the object in the captured silhouette into predetermined classes.
In general, a large memory storage space is required in a video surveillance system. If there were no constraint on memory storage space, complete video recordings of the particular area under surveillance could be stored. However, in contexts that require analysis of the video sequence, for example the investigation of crimes, it is difficult to look through (analyze) the entire recording to retrieve the relevant frames in which the object of interest is captured. The invention presented herein could be used in such scenarios in order to extract those frames that contain objects of interest from the stored video stream.
The system and method proposed herein is meant for classifying the moving object that is being tracked in a video stream.
In accordance with Figure 3, a moving object coming into the operating range of the video surveillance system is tracked. The tracking of the moving object could be carried out using any one of the methods known in the state of the art. Each camera in the surveillance system will try to capture a silhouette image of the moving object. As one processing system is connected to each of the four cameras, the four different cameras would extract the same set of parameters of the moving object in the operating range.
For a camera which does not manage to capture a silhouette image of the moving object, the processing system associated with that particular camera will automatically withdraw from the decision process. If a human or a cow comes into the field of view, in such a scenario the silhouette based processing systems would participate in the decision process. If a vehicle comes into the field of view, both silhouette and non-silhouette based processing systems would participate in the decision process.
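This per-camera "vote or withdraw" behaviour can be expressed compactly. The sketch below is illustrative only: the function name, the idea that each camera's processing system reports either a class label or None when it withdraws, and the simple majority vote among the remaining cameras are assumptions layered on top of the specification, which does not prescribe a particular fusion rule.

```python
# Illustrative sketch (not the specification's wording): each camera's processing
# system reports "HUMAN", "CATTLE", "VEHICLE", or None when it withdraws from the
# decision process; the remaining votes are combined by simple majority.
from collections import Counter
from typing import List, Optional

def fuse_camera_decisions(per_camera: List[Optional[str]]) -> Optional[str]:
    votes = [label for label in per_camera if label is not None]  # withdrawn cameras drop out
    if not votes:
        return None                                # no camera captured a usable silhouette
    return Counter(votes).most_common(1)[0][0]     # majority among participating cameras

# Example: two cameras classify the object, two withdraw.
print(fuse_camera_decisions(["HUMAN", None, "HUMAN", None]))   # -> HUMAN
```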
The silhouette image of the moving object captured by at least one camera is divided into two parts, namely an upper half and a lower half. The captured silhouette image is resized, wherein the resizing scale factor of the captured silhouette image is calculated by using the dimensions of the upper half of the captured silhouette image. This is to ensure that the movements reflected in the lower half do not influence the resizing scale. The entire image is resized based on this scale factor. After resizing the captured silhouette image, an average height to width ratio of the object in the resized silhouette image is calculated. Further, the center of gravity of the object in the resized silhouette image is calculated, wherein the center of gravity is calculated by using only the upper half of the object. A vertical line through the center of gravity divides the lower half of the captured image into two parts, namely a lower left half and a lower right half.
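As a concrete illustration of this step, the sketch below operates on a binary silhouette mask with NumPy. The bounding-box cropping, the nearest-neighbour resize, the fixed reference width of 64 pixels and the helper names are implementation assumptions; the specification only requires that the scale factor come from the upper-half dimensions and that the centre of gravity be computed from the upper half of the object.

```python
import numpy as np

def _bbox(mask):
    """Bounding box (top, bottom, left, right) of the nonzero pixels in a binary mask."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], rows[-1], cols[0], cols[-1]

def _resize_nearest(mask, scale):
    """Nearest-neighbour resize by a single scale factor (the specification does not
    prescribe a particular resampling method; this choice is an assumption)."""
    idx_r = (np.arange(int(mask.shape[0] * scale)) / scale).astype(int)
    idx_c = (np.arange(int(mask.shape[1] * scale)) / scale).astype(int)
    return mask[idx_r][:, idx_c]

def measure_silhouette(mask, reference_width=64):
    """Return (height/width ratio, x-position of the lower-left-half centre of gravity).
    Assumes the object occupies part of the upper half of its bounding box."""
    t, b, l, r = _bbox(mask)
    obj = mask[t:b + 1, l:r + 1]

    upper = obj[: obj.shape[0] // 2]
    _, _, ul, ur = _bbox(upper)
    scale = reference_width / float(ur - ul + 1)   # scale factor from the upper half only
    obj = _resize_nearest(obj, scale)

    upper = obj[: obj.shape[0] // 2]
    cg_x = np.nonzero(upper)[1].mean()             # centre of gravity from the upper half

    lower_left = obj[obj.shape[0] // 2:, : int(round(cg_x))]   # vertical line through the CG
    xs = np.nonzero(lower_left)[1]
    llh_cg_x = xs.mean() if xs.size else 0.0       # CG of the object part in the lower left half

    return obj.shape[0] / float(obj.shape[1]), llh_cg_x
```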
The processing system further analyses one of the lower halves and calculates the variance of the center of gravity, wherein the variance of the center of gravity means the change in position of the part of the object in the said lower half of the image with respect to the center of gravity of the said object. The discriminatory information used herein is based on the nature of the oscillation of the center of gravity of the object in the lower left half (LLH). It is sufficient to consider frames at a rate of typically 1 frame/sec. The center of gravity of the object in the LLH is computed for a particular number of consecutive frames, and the mean and variance of the CG are computed. Due to the swing of the legs exhibited by the nature of the human walk, the center of gravity variance will be very large. For vehicles, however, the center of gravity variance is insignificant, as the rotation of the wheels does not affect the position of the center of gravity. The nature of the leg movement of cattle gives rise to a center of gravity variance which is much larger than that of a vehicle.
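Only the per-frame CG values need to be retained for this variance computation, which is the memory saving the specification emphasises. A small sketch follows, reusing measure_silhouette() from the snippet above; the class name and the window of 16 frames (matching the examples below, at roughly 1 frame/sec) are illustrative assumptions.

```python
import numpy as np
from collections import deque

class CGVarianceTracker:
    """Keeps only the recent lower-left-half CG values, not the frames themselves."""
    def __init__(self, window=16):
        self.cg_values = deque(maxlen=window)       # frames are discarded after measurement

    def update(self, mask):
        ratio, llh_cg_x = measure_silhouette(mask)  # from the previous sketch
        self.cg_values.append(llh_cg_x)
        return ratio

    def variance(self):
        return float(np.var(self.cg_values)) if len(self.cg_values) > 1 else 0.0
```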
The classification of the object in the video surveillance is done by logic in which the ratio of the average height to width of the object is compared with the value one and the variance in the center of gravity is compared with a predetermined threshold value. The logic used while taking the decision about the classification of the object captured in the silhouette image is as follows:
DECISION LOGIC
1. If H/W > 1 and CG Variance > Threshold: Decision - HUMAN (SILHOUETTE)
2. If H/W < 1 and CG Variance > Threshold: Decision - CATTLE (SILHOUETTE)
3. If H/W < 1 and CG Variance < Threshold: Decision - VEHICLE (SILHOUETTE or NON-SILHOUETTE)
4. If H/W > 1 and CG Variance < Threshold: Three possibilities - HUMAN (NON-SILHOUETTE), CATTLE (NON-SILHOUETTE), VEHICLE (NON-SILHOUETTE). This particular camera based system withdraws from the decision process.

Where,
H/W = average height to width ratio
CG Variance = variance in the center of gravity
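The table above translates directly into a small function. This is a sketch only: the threshold value is not fixed by the specification and is therefore passed in as a parameter, and returning None models case 4, in which the camera-based system withdraws from the decision.

```python
def decide(hw_ratio, cg_variance, threshold):
    """Direct transcription of the decision logic: returns a class label, or None
    when this camera-based system withdraws from the decision process (case 4)."""
    if hw_ratio > 1 and cg_variance > threshold:
        return "HUMAN"            # case 1: silhouette of a walking human
    if hw_ratio < 1 and cg_variance > threshold:
        return "CATTLE"           # case 2: silhouette of cattle
    if hw_ratio < 1 and cg_variance < threshold:
        return "VEHICLE"          # case 3: vehicle, silhouette or non-silhouette
    return None                   # case 4: non-silhouette view, withdraw from the decision
```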
In accordance with Figure 3, after the computation of the average height to width ratio as shown in step 104, the ratio is compared with the numerical value 1 as shown in step 106. The variance in the center of gravity is compared with the predetermined threshold value as given in steps 108 and 110. If the average height to width ratio is greater than 1, in other words if the average height of the object is greater than the average width, and the variance in the center of gravity is greater than the predetermined threshold, the object is classified under the human category as per step 112. Likewise, the object is classified as cattle, as shown in step 116, if the average height is less than the average width and the variance of the center of gravity is greater than the predetermined threshold value; and the object is classified as a vehicle, as shown in step 118, if the average height is less than the average width and the variance of the center of gravity is less than the predetermined threshold value. If the average height is greater than the average width and the variance in the center of gravity is less than the predetermined threshold value, then the system for classifying moving objects during video based surveillance withdraws from the decision process for classifying the moving object, as shown in step 114.
WORKING OF THE INVENTION:
In order to test the invention, MPEG2 videos were used. Every thirtieth frame of the MPEG2 video is fed for computation; the nth frame referred to herein is actually the (30n)th frame of the original MPEG2 video. The invention has been tested with regard to the calculation of the variance in the center of gravity (CG variance) for various moving objects, as given below:
EXAMPLE 1: Human (Walking)
As shown in figure 4, the system for classifying moving objects during video based surveillance is tested. A man is allowed to walk through the operating range of the video surveillance. The original frame with background is captured via the video capturing means. The object is segmented and extracted from the captured frame. The extracted object is divided into two parts, namely an upper half and a lower half. Considering the upper half of the image, the center of gravity (CG) is calculated, and a vertical line passing through the CG divides the lower half into two parts, namely a lower left half and a lower right half. Considering the lower left half of the extracted object, the variance of the part of the object residing in the lower left half, measured about the vertical line passing through the CG, is calculated. In the same manner, 16 consecutive frames are analyzed for the calculation of the variance in the center of gravity. In this particular example the CG variance computed over the 16 consecutive frames is 16.4000.
EXAMPLE 2: Moving two wheeler (Vehicle)
While analyzing a moving two-wheeler (vehicle) in the range of video surveillance as shown in figure 5, following the same procedure as explained in the first example, 16 consecutive frames are analyzed. In this particular case the CG variance computed over the 16 consecutive frames is 1.6100.
EXAMPLE 3: Moving Car (Vehicle)
While analyzing a moving car in the range of video surveillance as shown in figure 6, following the same procedure as explained in the first example, 16 consecutive frames are analyzed. In this particular case the CG variance computed over the 16 consecutive frames is 0.2400.
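Feeding the reported CG variances through the decision function sketched after the decision logic illustrates the separation between the classes. The threshold of 5.0 and the height-to-width ratios used here are assumed values chosen only for illustration (the examples report CG variances but not H/W ratios); the CG variances are the ones reported in Examples 1 to 3.

```python
# Illustrative run only: THRESHOLD and the H/W ratios are assumptions; the CG
# variances (16.4000, 1.6100, 0.2400) are those reported in Examples 1-3.
THRESHOLD = 5.0
observations = [
    ("walking human", 1.8, 16.4000),   # Example 1
    ("two wheeler",   0.7, 1.6100),    # Example 2
    ("car",           0.4, 0.2400),    # Example 3
]
for name, hw, var in observations:
    print(name, "->", decide(hw, var, THRESHOLD))
# walking human -> HUMAN, two wheeler -> VEHICLE, car -> VEHICLE
```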
ADVANTAGES OF THE INVENTION:
1) The present invention provides a system for classifying moving objects during video based surveillance in which only the set of centers of gravity computed from a sequence of frames has to be stored for the variance computation. There is no necessity to store the object images from a sequence of frames, hence saving memory space of the video surveillance system.
2) The present invention uses less complicated logic for the classification of the moving object during video surveillance.
3) The present invention is computationally inexpensive.

WE CLAIM
1) A system for classifying moving objects during video based surveillance comprising:
a) at least one video capturing means configured to capture a silhouette image of a moving object falling within the operating range of video capturing means;
b) a means for storing the program instructions that are configured to cause the processor:
i) to resize the captured silhouette image, wherein resizing scale factor of the silhouette image is calculated by using the dimensions of upper half of the captured silhouette image;
ii) to compute an average height to width ratio and center of gravity of the object in the resized silhouette image, wherein center of gravity is calculated by using only the upper half of the object;
iii) to divide the lower half of the captured image into two parts by a vertical line through the center of gravity and analyze one of the lower half and calculate the variance of center of gravity;
iv) to compare the average height to average width of the object and further comparing variance of center of gravity with the predetermined threshold value;
v) to classify the object in the captured silhouette into predetermined classes, wherein the classification is done on the basis of the calculated values of average height, average width and variance of center of gravity.
2) A system for classifying moving objects during video based surveillance as claimed in claim 1, wherein video capturing means can be a video camera, closed-circuit television (CCTV) camera or IP camera.
3) A system for classifying moving objects during video based surveillance as claimed in claim 1, wherein each video capturing means individually has the system for classifying moving objects while video based surveillance.
4) A system for classifying moving objects during video based surveillance as claimed in claim 1, wherein each video capturing means is capable of giving decision regarding classification of the moving object captured in the silhouette image.
5) A system for classifying moving objects during video based surveillance as claimed in claim 1, wherein the object is classified as a human if the average height is greater than average width and variance of center of gravity is greater than predetermined threshold value, the object is classified as a cattle if the average height is less than average width and variance of center of gravity is greater than predetermined threshold value and the object is classified as a vehicle if the average height is less than average width and variance of center of gravity is less than predetermined threshold value.
6) A system for classifying moving objects during video based surveillance as claimed in claim 1, wherein one or more video capturing means which has not captured the silhouette image of the moving object will withdraw from the process of classification of the moving object.
7) A system for classifying moving objects during video based surveillance as claimed in claim 1, wherein if the average height is greater than average width and variance of center of gravity is less than predetermined threshold value then the system for classifying moving objects while video based surveillance withdraws from the decision process for classifying the moving object.
8) A system for classifying moving objects during video based surveillance as claimed in claim 1 further may sound an alarm automatically on detection of object of interest in the operating range of the video capturing means.
9) A method for classifying moving objects during video based surveillance comprising steps of:

a) capturing a silhouette image of a moving object in the operating range of video surveillance using at least one video capturing means;
b) resizing the captured silhouette image, wherein a scaling factor of the silhouette image is calculated by utilizing the dimensions of an upper half of the captured silhouette image;
c) computing an average height to width ratio and a center of gravity of the object in the resized silhouette image, wherein center of gravity is calculated by using upper half of the object, wherein a vertical line through the center of gravity is used to divide the lower half of the captured image into two parts that is a lower left half and a lower right half;
d) analyzing one of the lower halves and calculating the variance of center of gravity;
e) comparing the average height to average width of the object and further comparing variance of center of gravity with the predetermined threshold value; and

f) classifying the object in the captured silhouette into predetermined classes, characterized in that the classification is done on the basis of the calculated values of average height, average width and variance of center of gravity.
10) A method for classifying moving objects during video based surveillance as claimed in claim 9, wherein video capturing means can be a video camera, closed-circuit television (CCTV) camera, or IP camera.
11) A method for classifying moving objects during video based surveillance as claimed in claim 9, wherein each video capturing means individually has the system for classifying moving objects while video based surveillance.
12) A method for classifying moving objects during video based surveillance as claimed in claim 9, wherein each video capturing means is capable of giving decision regarding classification of the moving object captured in the silhouette image.
13) A method for classifying moving objects during video based surveillance as claimed in claim 9, wherein the object is classified as a human if the average height is greater than average width and variance of center of gravity is greater than predetermined threshold value, the object is classified as a cattle if the average height is less than average width and variance of center of gravity is greater than predetermined threshold value, and the object is classified as a vehicle if the average height is less than average width and variance of center of gravity is less than predetermined threshold value.
14) A method for classifying moving objects during video based surveillance as claimed in claim 9, wherein the video capturing means which have not captured the silhouette image of the moving object will withdraw from the process of classification of the moving object.
15) A method for classifying moving objects during video based surveillance as claimed in claim 9, wherein if the average height is greater than average width and variance of center of gravity is less than predetermined threshold value then the system for classifying moving objects while video based surveillance withdraws from the decision process for classifying the moving object.
16) A method for classifying moving objects during video based surveillance as claimed in claim 9 further may sound an alarm automatically on detection of object of interest in the operating range of the video capturing means.

17) A system and method for classifying moving objects while video based surveillance substantially as herein described with reference to and as illustrated by the accompanying drawings.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 2162-MUM-2010-FORM 26(13-12-2010).pdf 2010-12-13
2 2162-MUM-2010-RELEVANT DOCUMENTS [27-09-2023(online)].pdf 2023-09-27
3 2162-MUM-2010-FORM 1(13-12-2010).pdf 2010-12-13
4 2162-MUM-2010-RELEVANT DOCUMENTS [30-09-2022(online)].pdf 2022-09-30
5 2162-MUM-2010-RELEVANT DOCUMENTS [23-09-2021(online)].pdf 2021-09-23
6 2162-MUM-2010-CORRESPONDENCE(13-12-2010).pdf 2010-12-13
7 2162-MUM-2010-RELEVANT DOCUMENTS [30-03-2020(online)].pdf 2020-03-30
8 2162-MUM-2010-FORM 3(16-10-2012).pdf 2012-10-16
9 2162-MUM-2010-RELEVANT DOCUMENTS [26-03-2019(online)].pdf 2019-03-26
10 2162-MUM-2010-CORRESPONDENCE(16-10-2012).pdf 2012-10-16
11 Form 3 [28-11-2016(online)].pdf 2016-11-28
12 2162-mum-2010-abstract.pdf 2018-08-10
13 2162-MUM-2010-FORM 4(ii) [25-09-2017(online)].pdf 2017-09-25
14 2162-mum-2010-claims.pdf 2018-08-10
15 2162-MUM-2010-OTHERS [24-10-2017(online)].pdf 2017-10-24
16 2162-MUM-2010-CORRESPONDENCE(21-3-2012).pdf 2018-08-10
17 2162-MUM-2010-CORRESPONDENCE(28-9-2011).pdf 2018-08-10
18 2162-MUM-2010-FER_SER_REPLY [24-10-2017(online)].pdf 2017-10-24
19 2162-MUM-2010-COMPLETE SPECIFICATION [24-10-2017(online)].pdf 2017-10-24
20 2162-mum-2010-correspondence.pdf 2018-08-10
21 2162-MUM-2010-CLAIMS [24-10-2017(online)].pdf 2017-10-24
22 2162-mum-2010-description(complete).pdf 2018-08-10
23 2162-MUM-2010-Correspondence to notify the Controller (Mandatory) [01-01-2018(online)].pdf 2018-01-01
24 2162-mum-2010-drawing.pdf 2018-08-10
25 2162-MUM-2010-FER.pdf 2018-08-10
26 2162-MUM-2010-Written submissions and relevant documents (MANDATORY) [17-01-2018(online)].pdf 2018-01-17
27 2162-mum-2010-form 1.pdf 2018-08-10
28 2162-MUM-2010-PatentCertificate31-07-2018.pdf 2018-07-31
29 2162-MUM-2010-FORM 18.pdf 2018-08-10
30 2162-MUM-2010-IntimationOfGrant31-07-2018.pdf 2018-07-31
31 2162-mum-2010-form 2(title page).pdf 2018-08-10
32 abstract1.jpg 2018-08-10
33 2162-MUM-2010-HearingNoticeLetter.pdf 2018-08-10
34 2162-mum-2010-form 2.pdf 2018-08-10
35 2162-MUM-2010-FORM 3(21-3-2012).pdf 2018-08-10
36 2162-mum-2010-form 3.pdf 2018-08-10
37 2162-MUM-2010-FORM 3(28-9-2011).pdf 2018-08-10

Search Strategy

1 searchstrategy_15-03-2017.pdf

ERegister / Renewals

3rd: 24 Sep 2018 (From 29/07/2012 To 29/07/2013)
4th: 24 Sep 2018 (From 29/07/2013 To 29/07/2014)
5th: 24 Sep 2018 (From 29/07/2014 To 29/07/2015)
6th: 24 Sep 2018 (From 29/07/2015 To 29/07/2016)
7th: 24 Sep 2018 (From 29/07/2016 To 29/07/2017)
8th: 24 Sep 2018 (From 29/07/2017 To 29/07/2018)
9th: 24 Sep 2018 (From 29/07/2018 To 29/07/2019)
10th: 01 Jul 2019 (From 29/07/2019 To 29/07/2020)
11th: 29 Jul 2020 (From 29/07/2020 To 29/07/2021)
12th: 28 Jul 2021 (From 29/07/2021 To 29/07/2022)
13th: 01 Jul 2022 (From 29/07/2022 To 29/07/2023)
14th: 21 Jul 2023 (From 29/07/2023 To 29/07/2024)
15th: 29 Jul 2024 (From 29/07/2024 To 29/07/2025)
16th: 18 Jul 2025 (From 29/07/2025 To 29/07/2026)