
Method And Apparatus For Detecting Talking Segments In A Video Sequence Using Visual Cues

Abstract: A method and system for detecting temporal segments of talking faces in a video sequence using visual cues is disclosed. The system detects talking segments by classifying talking and non-talking segments in a sequence of image frames using visual cues. The present disclosure detects temporal segments of talking faces in video sequences by first localizing the face, eyes and hence the mouth region. Then, the localized mouth regions across the video frames are encoded in terms of an integrated gradient histogram (IGH) of visual features and quantified using the evaluated entropy of the IGH. The time series data of entropy values from each frame is further clustered using an online temporal segmentation (K-Means clustering) algorithm to distinguish talking mouth patterns from other mouth movements. Such segmented time series data is then used to enhance the emotion recognition system. FIG. 2


Patent Information

Application #
1519/CHE/2012
Filing Date
17 April 2012
Publication Number
36/2016
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2023-03-20
Renewal Date

Applicants

SAMSUNG R&D INSTITUTE INDIA - BANGALORE PRIVATE LIMITED
#2870, ORION BUILDING, BAGMANE CONSTELLATION BUSINESS PARK, OUTER RING ROAD, DODDANEKUNDI CIRCLE, MARATHAHALLI POST, BANGALORE - 560037

Inventors

1. VELUSAMY, Sudha
Employed at Samsung India Software Operations Pvt. Ltd., having its office at Bagmane Lakeview, Block "B", No. 66/1, Bagmane Tech Park, C V Raman Nagar, Byrasandra, Bangalore – 560093, Karnataka, India
2. GOPALAKRISHNAN, Viswanath
Employed at Samsung India Software Operations Pvt. Ltd., having its office at Bagmane Lakeview, Block "B", No. 66/1, Bagmane Tech Park, C V Raman Nagar, Byrasandra, Bangalore – 560093, Karnataka, India
3. NAVATHE, Bilva Bhalachandra
Employed at Samsung India Software Operations Pvt. Ltd., having its office at Bagmane Lakeview, Block "B", No. 66/1, Bagmane Tech Park, C V Raman Nagar, Byrasandra, Bangalore – 560093, Karnataka, India
4. KANNAN, Hariprasad
Employed at Samsung India Software Operations Pvt. Ltd., having its office at Bagmane Lakeview, Block "B", No. 66/1, Bagmane Tech Park, C V Raman Nagar, Byrasandra, Bangalore – 560093, Karnataka, India

Specification

FIELD OF THE INVENTION

The present invention relates to image processing, computer vision and machine learning, and more particularly relates to emotion recognition in a video sequence.

BACKGROUND OF THE INVENTION

With recent developments in technology, significant attention has been given to enhancing human computer interaction (HCI). In particular, engineers and scientists try to capitalize on basic human attributes, such as voice, gaze, gesture and emotional state, in order to improve HCI. The ability of a device to detect and respond to human emotions is known as "Affective Computing."

Automatic facial expression recognition is a key component in the research field of human computer interaction. It also plays a major role in human behavior modeling, which has huge potential in applications like video conferencing, gaming, surveillance and the like. Most of the research in automatic facial expression recognition aims to identify six basic emotions (sadness, fear, anger, happiness, disgust, surprise) on posed facial expression datasets prepared under controlled laboratory conditions. Researchers have adopted static as well as dynamic methods to infer different emotions in the facial expression datasets. Static methods analyze frames in a video sequence independently, while dynamic methods consider a group of consecutive frames to infer a particular emotion.

The mouth region of the human face contains highly discriminative information regarding human emotion and plays a key role in facial expression recognition. However, in a normal use case scenario like video conferencing, there will be significant temporal segments of the concerned person talking, and any facial expression recognition system that relies on the mouth region for inferring emotions can potentially be misled by the random and complex formations around the lip region. The temporal segment information regarding talking segments in a video sequence is quite important in this context, as it can be used to enhance existing emotion recognition systems.

Not many major works in emotion recognition have addressed the condition of "talking faces", under which the Action Units (AUs) inferred for the mouth region can potentially go wrong, resulting in incorrect emotion classification. Currently known methods aim to determine active speakers in a multi-person environment and do not intend to temporally segment the lip activities of a single person into talking and non-talking (which includes neutral as well as various emotion segments) phases. As a result, the current systems suffer from the drawback of failing to capture the exact emotions.

Due to the abovementioned reasons, it is evident that there are currently no methods that temporally segment lip activities into talking and non-talking phases so as to enable exact classification of emotions.

OBJECT OF THE INVENTION

The principal object of the embodiments herein is to provide a system and method for detecting talking segments in visual cues.

Another object of the invention is to provide an unsupervised temporal segmentation method for detecting talking faces.

SUMMARY OF THE INVENTION

Accordingly, the invention provides a method for detecting and classifying talking segments of a face in a visual cue, the method comprising normalizing and localizing the face region for each frame of the visual cue and obtaining a histogram of structure descriptive features of the face for the frame in the visual cue. Further, the method derives an integrated gradient histogram (IGH) from the descriptive features for the frame in the visual cue, computes the entropy of the IGH for the frame in the visual cue, performs segmentation of the IGH to detect talking segments for the face in the visual cue, and analyzes the segments for the frame in the visual cue for inferring emotions.

Accordingly, the invention provides a computer program product for detecting and classifying talking segments of a face in a visual cue, the product comprising an integrated circuit. The integrated circuit comprises at least one processor and at least one memory having computer program code within the circuit, the at least one memory and the computer program code configured to, with the at least one processor, cause the product to normalize and localize the face region for each frame of the visual cue. The computer program product then obtains a histogram of structure descriptive features for the frame in the visual cue, derives an integrated gradient histogram (IGH) from the descriptive features, computes the entropy of the IGH for the frame, performs segmentation of the IGH to detect talking segments for the face in the visual cue, and analyzes the segments for inferring emotions.

These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS

This invention is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein will be better understood from the following description with reference to the drawings, in which:

FIG. 1 illustrates a flow diagram of an exemplary method of recognizing emotions of a character in a video sequence, according to embodiments as disclosed herein;

FIG. 2 illustrates a detailed flow diagram of an exemplary method of detecting talking segments in video sequences using visual cues, according to embodiments as disclosed herein; and

FIG. 3 illustrates a computing environment implementing the application, according to embodiments disclosed herein.

DETAILED DESCRIPTION OF THE INVENTION

The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.

The embodiments herein achieve a system and method to detect talking and non-talking segments in a sequence of image frames using visual cues. The method uses visual cues because audio cues may also come from persons in range other than the target speaker and can mislead the detection. Moreover, the method targets classifying talking and non-talking segments in which the non-talking segments may have different expressions with audio, such as laughter, exclamation and the like. Hence, visual cues have to be used to distinguish between them. The method identifies temporal segments of talking faces in video sequences by estimating the uncertainties involved in the representation of mouth or lip movements. In one embodiment, mouth movements are encoded into an Integrated Gradient Histogram (IGH) of Local Binary Pattern (LBP) values after the initial mouth localization step. The uncertainties in the mouth movements are quantified by evaluating the entropy of the IGH. The time series data of entropy values from each frame is further clustered using an online K-Means algorithm to distinguish talking mouth patterns from other mouth movements.
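By way of illustration only, the overall pipeline described above could be organized as in the following sketch. Every helper name used here (detect_face_and_pupils, normalize_and_crop_mouth, mouth_lbp_histogram, integrated_gradient_histogram, igh_entropy, OnlineTwoMeans) is a hypothetical placeholder for a step detailed, and sketched, later in this description, not a function defined by the present disclosure.

```python
# Illustrative outline only: each helper is a hypothetical placeholder for a
# step described (and sketched) in the sections that follow.

def detect_talking_segments(frames, n_max=2):
    """Return a talking (1) / non-talking (0) label for every frame."""
    histograms = []
    for frame in frames:
        face, pupils, nose = detect_face_and_pupils(frame)          # face, pupil and nose localization
        mouth = normalize_and_crop_mouth(face, pupils[0], pupils[1], nose)  # pupil-based normalization, mouth crop
        histograms.append(mouth_lbp_histogram(mouth))                # uniform-LBP appearance descriptor

    clusterer = OnlineTwoMeans()                                     # online K-Means, K = 2
    labels = []
    for i in range(len(histograms)):
        igh = integrated_gradient_histogram(histograms, i, n_max)    # Integrated Gradient Histogram
        labels.append(clusterer.update(igh_entropy(igh)))            # entropy drives the segmentation
    return labels
```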

The visual cues mentioned throughout the invention may be a photograph or a video containing a sequence of frames.

Referring now to the drawings, and more particularly to FIGS. 1 through 3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.

FIG. 1 illustrates a flow diagram of an exemplary method of recognizing emotions of a character in a video sequence, according to embodiments as disclosed herein. As depicted in the figure, the method 100 obtains (101) video frames from the video and detects (102) the face by anchoring the pupil locations. Then the method checks (103) whether the user is talking. If the method finds that the user is not talking, then the method gets (104) the features of the whole face. Further, the method predicts (105) the action units (AUs), which represent the muscular activity that produces facial appearance changes as defined by the Facial Action Coding System (FACS). Based on the action units, the method infers (106) the emotions of the user. If the method identifies that the user is talking, then the method gets (107) the features of only the upper face. Then the method predicts (108) the action units and then infers (109) the emotions of the user.

In one embodiment, a talking face refers to a face that talks with or without any emotion, and a non-talking face refers to a face that does not talk but shows some emotion. The various actions in method 100 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 1 may be omitted.
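A minimal sketch of this decision flow is given below; the feature-extraction, AU-prediction and emotion-inference helpers are hypothetical names standing in for components that the disclosure does not specify.

```python
def infer_emotion(frame, is_talking):
    """Follow the branch in FIG. 1: drop mouth cues when the face is talking."""
    region = "upper" if is_talking else "whole"         # talking faces: upper-face features only
    features = extract_features(frame, region=region)   # hypothetical feature extractor
    action_units = predict_action_units(features)       # hypothetical FACS AU classifier
    return infer_from_action_units(action_units)        # map AUs to one of the basic emotions
```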

FIG. 2 illustrates a detailed flow diagram of an exemplary method of detecting talking segments in video sequences using visual cues, according to embodiments as disclosed herein. As depicted in the figure, the method 200 may employ an algorithm for performing the steps. The algorithm obtains (201) a sequence of video frames and further detects (202) the primary face and localizes the pupils and nose. In one embodiment, a standard face detector and a version of an Active Appearance Model (AAM) based method may be employed to identify the face, pupil and nose locations in every frame of the video. The Active Appearance Model (AAM) is a generalization of the widely used Active Shape Model approach, but uses all the information in the image region covered by the target object, rather than just that near modeled edges. The method then normalizes (203) the face using the pupils. The pupil locations are used to normalize every face image to M×N size. Further, the method localizes (204) the nose, which is used to crop out the mouth region in each frame for further processing.

In one embodiment, the distance between the pupils is maintained as 48 pixels to normalize the faces and crop the mouth region to the size of 56×46 pixels.
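As one possible concretization of steps 203 and 204, the sketch below scales the face to a 48-pixel inter-pupil distance and crops a 56×46 window below the localized nose. OpenCV is assumed purely for resizing, and the exact anchoring offsets of the crop are assumptions rather than values taken from the disclosure.

```python
import numpy as np
import cv2  # OpenCV is assumed here for illustration only


def normalize_and_crop_mouth(gray_face, left_pupil, right_pupil, nose,
                             pupil_dist=48, crop_h=56, crop_w=46):
    """Scale the face so the pupils are `pupil_dist` pixels apart, then crop an
    approximately crop_h x crop_w mouth region anchored below the nose tip."""
    lx, ly = left_pupil
    rx, ry = right_pupil
    scale = pupil_dist / max(np.hypot(rx - lx, ry - ly), 1e-6)
    scaled = cv2.resize(gray_face, None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_LINEAR)
    nx, ny = int(nose[0] * scale), int(nose[1] * scale)
    top = ny                              # mouth region starts at the nose tip (assumed anchor)
    left = max(nx - crop_w // 2, 0)       # centred horizontally on the nose
    return scaled[top:top + crop_h, left:left + crop_w]
```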

The cropped sequence of mouth images may have illumination and alignment variations across the frames, and hence the method selects a feature descriptor that can handle such conditions. In one embodiment, the method derives (205) a histogram of Local Binary Pattern (LBP) values to encode the appearance of the mouth region. The LBP is a powerful feature used for texture classification that has later proven to be very effective for face recognition and related applications. In one embodiment, the LBP pattern is computed for every pixel in the cropped-out image of the mouth region. Also, uniform LBP patterns (patterns with at most two bitwise transitions) are used, and all remaining patterns are classified as similar. The histogram of LBP values evaluated for the cropped image is used to describe the appearance of the mouth region in the concerned frame.
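For illustration, the uniform-LBP encoding of the cropped mouth could be computed with scikit-image as sketched below; the neighbourhood parameters (P = 8, R = 1) are assumptions, since the disclosure does not fix them.

```python
import numpy as np
from skimage.feature import local_binary_pattern


def mouth_lbp_histogram(mouth, P=8, R=1):
    """Histogram of uniform LBP codes for a cropped mouth image.
    The 'nri_uniform' mode keeps one bin per uniform pattern and lumps all
    non-uniform patterns into a single bin."""
    codes = local_binary_pattern(mouth, P, R, method="nri_uniform")
    n_bins = P * (P - 1) + 3                       # 59 bins for P = 8
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist.astype(np.float64)
```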

The system and method distinguish the complex change of appearance in the case of a talking mouth from the smoother appearance change of mouth movements exhibited in the onset and offset of emotions like smile, surprise, disgust and the like. Further, for neutral faces with no talking involved, there will not be much change in the appearance of the mouth. In one embodiment, to distinguish the complex change, the gradient histograms are computed from a specific frame, say frame i, with the intention of capturing the appearance changes over a time period 2N. The gradient LBP histograms are computed as follows:

$$\Delta H_i^{+n} = \left| H_{i+n} - H_i \right|, \qquad \Delta H_i^{-n} = \left| H_i - H_{i-n} \right|, \qquad n = 1, \ldots, N,$$

where $H_i$ denotes the LBP histogram of the ith frame, $\Delta H_i^{+n}$ is the gradient histogram computed using the difference between the histograms of the ith frame and the (i + n)th frame, and $\Delta H_i^{-n}$ is the gradient histogram computed using the difference between the histograms of the ith frame and the (i - n)th frame.
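A direct transcription of these definitions might look as follows; taking the absolute bin-wise difference and clamping at the sequence boundaries are assumptions on top of the description above.

```python
import numpy as np


def gradient_histograms(histograms, i, n):
    """Forward and backward gradient histograms for frame i (bin-wise |difference|)."""
    h = np.asarray(histograms, dtype=np.float64)
    h_i = h[i]
    forward = np.abs(h[min(i + n, len(h) - 1)] - h_i)    # frame i vs frame i + n
    backward = np.abs(h_i - h[max(i - n, 0)])            # frame i vs frame i - n
    return forward, backward
```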

The gradient histograms encode the appearance changes in the mouth patterns along the temporal dimension. The method takes the complete information regarding the appearance change over a time segment of 2N + 1 frames and encodes (206) it into a single Integrated Gradient Histogram (IGH) as follows:

$$IGH_i = \sum_{n=1}^{N} \left( \Delta H_i^{+n} + \Delta H_i^{-n} \right)$$
The series of talking frames will have more evenly distributed IGH values compared to the frames displaying a particular emotion. In other words, the uncertainty involved in the IGH representation is higher for talking segments than for emotion segments. Hence, the method performs (208) online temporal segmentation of the IGH entropy, using the entropy of the IGH to quantify the amount of uncertainty in the video segment under consideration. The entropy of the IGH of the ith frame is calculated as follows:

$$Ep_i = -\sum_{k} p_k \log(p_k)$$

where $Ep_i$ is the entropy value of the IGH of the ith frame and $p_k$ is the histogram value for the kth bin.

Further, the integrated gradient histogram is normalized before evaluating its entropy. This arises from the need to compare the entropy values across different temporal segments. The energy values of the IGH over different temporal segments may vary as a result of the gradient process. The energy values are normalized by adding the common energy between the original LBP histograms as a separate bin in the IGH. For static segments, this common energy forms a large spike in the IGH and may result in very low entropy. For emotion segments, the common energy may be comparable to that of a slow talking process. However, the gradient energy part of the IGH has a larger spread in talking segments and hence may have higher entropy compared to emotion segments. The temporal series data of entropy values evaluated from the IGH of every frame is used for unsupervised online segmentation of talking and non-talking faces.
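Putting the last few paragraphs together, an IGH with the common-energy bin and its entropy could be sketched as below. Accumulating the gradient histograms by simple summation and measuring the "common energy" by histogram intersection are assumptions layered on the description, not details fixed by the disclosure.

```python
import numpy as np


def integrated_gradient_histogram(histograms, i, n_max):
    """IGH of frame i over the window [i - n_max, i + n_max], with the common
    energy of the original LBP histograms appended as a separate bin and the
    whole histogram normalized to sum to one."""
    h = np.asarray(histograms, dtype=np.float64)
    igh = np.zeros_like(h[i])
    common = 0.0
    for n in range(1, n_max + 1):
        fwd, bwd = gradient_histograms(h, i, n)
        igh += fwd + bwd
        # unchanged (common) energy, measured here by histogram intersection
        common += np.minimum(h[i], h[min(i + n, len(h) - 1)]).sum()
    igh = np.append(igh, common)          # common energy as a separate bin
    return igh / max(igh.sum(), 1e-12)    # normalize so the bins form a distribution


def igh_entropy(p):
    """Ep_i = -sum_k p_k log(p_k) over the non-zero bins of a normalized IGH."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```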

In one embodiment, the entropy values obtained for every frame in the video sequence form time series data. The time series data is then segmented in an unsupervised online fashion so as to provide the required input to the emotion recognition system regarding the presence of talking faces in the video sequence. In one embodiment, the system uses the online K-Means algorithm with K = 2 to segment the time series data. No further assumptions are made regarding the range or initial values of the data.
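One possible sequential K-Means (K = 2) over the entropy series is sketched below; the seeding of the two centres and the running-mean update rule are assumptions, since the disclosure only fixes K = 2 and makes no assumption about the data range.

```python
import numpy as np


class OnlineTwoMeans:
    """Online K-Means with K = 2 over a one-dimensional entropy series."""

    def __init__(self):
        self.centers = None
        self.counts = np.zeros(2)

    def update(self, value):
        """Assign the new entropy value to the nearer centre and update that centre.
        Returns 1 when the value falls in the higher-entropy (talking) cluster."""
        if self.centers is None:
            self.centers = np.array([value, value + 1e-6])  # seed both centres near the first sample
        k = int(np.argmin(np.abs(self.centers - value)))
        self.counts[k] += 1
        self.centers[k] += (value - self.centers[k]) / self.counts[k]  # running-mean update
        return int(self.centers[k] >= self.centers.max())
```

Feeding the per-frame entropy values through `update`, for example `labels = [clusterer.update(e) for e in entropy_series]`, then yields the talking / non-talking labels used in steps 209 to 212 below.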

The problem of inferring emotions in the presence of occlusions over the mouth is also addressed to improve the accuracy of emotion detection. The method checks (209) whether the user is talking or not. In one embodiment, the method considers the mouth region to be occluded whenever talking is detected. If the method finds that the user is not talking, then it analyzes (210) the upper and lower facial AUs. If the method finds that the user is talking, a straightforward strategy could be to avoid the visual cues from the mouth region in that particular temporal segment. In one embodiment, the method analyzes (211) the Action Units (AUs) from the upper half of the face only. Then the method infers (212) emotions based on the talking or non-talking visual cues. It can be noted that such a method will be inferior to the method using all AUs under normal conditions, but will be superior to the method using all AUs under talking conditions, as the latter has a lot of misleading information.

In another embodiment, emotion recognition is improved by using the mouth region but changing the recognition strategy once talking is detected. Even though image features from a talking face cannot be easily interpreted, the mouth region still holds some cues to the current emotion. For example, a happy talking face and a sad talking face can be discerned. It is to be noted that the approach to infer emotions from talking faces using the mouth region would be different from that of a usual emotion recognition system. One skilled in the art will realize that movement of the lip corners can help distinguish certain emotions even while talking. The various actions in method 200 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 2 may be omitted.

In one embodiment, the method may be used in a video conferencing, meeting or interview scenario, in which the camera is focused on the persons; the method detects the talking and non-talking faces of the persons involved in the session and determines their emotions. Further, the method may also be employed in emotion recognition systems for better categorization of the emotions.

FIG. 3 illustrates a computing environment implementing the application, according to embodiments disclosed herein. As depicted, the computing environment comprises at least one processing unit that is equipped with a control unit and an Arithmetic Logic Unit (ALU), a memory, a storage unit, a plurality of networking devices, and a plurality of input/output (I/O) devices. The processing unit is responsible for processing the instructions of the algorithm. The processing unit receives commands from the control unit in order to perform its processing. Further, any logical and arithmetic operations involved in the execution of the instructions are computed with the help of the ALU.

The overall computing environment can be composed of multiple homogeneous and/or heterogeneous cores, multiple CPUs of different kinds, special media and other accelerators. Further, the plurality of processing units may be located on a single chip or over multiple chips.

The algorithm, comprising the instructions and code required for the implementation, is stored in either the memory unit or the storage or both. At the time of execution, the instructions may be fetched from the corresponding memory and/or storage and executed by the processing unit.

In case of any hardware implementation, various networking devices or external I/O devices may be connected to the computing environment to support the implementation through the networking unit and the I/O device unit.

The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the elements. The elements shown in FIG. 3 include blocks which can be at least one of a hardware device or a combination of a hardware device and a software module.

The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.

We claim:

1. A method for detecting and classifying talking segments of a face in a visual cue, said method comprising:
normalizing and localizing said face region for each frame of said visual cue;
obtaining a histogram of structure descriptive features of said face for said frame in said visual cue;
deriving an integrated gradient histogram (IGH) from said descriptive features for said frame in said visual cue;
computing entropy of said integrated gradient histogram (IGH) for said frame in said visual cue;
performing segmentation of said IGH to detect talking segments for said face in said visual cues; and
analyzing said segments for said frame in said visual cues for inferring emotions.

2. The method as in claim 1, wherein said normalizing comprises employing pupil location to normalize said face image for said frame of said visual cue.

3. The method as in claim 1, wherein said localizing comprises employing nose location to crop the mouth region in an accurate manner for said frame of said visual cue.

4. The method as in claim 1, wherein deriving said IGH comprises obtaining uncertainty involved in said IGH representation for talking segments as compared to the non-talking segments.

5. The method as in claim 1, wherein entropy of said IGH is computed for determining the amount of uncertainty involved in talking segments in said visual cue.

6. The method as in claim 1, wherein said analyzing comprises employing upper facial action units for inferring emotions for said talking faces.

7. The method as in claim 1, wherein said analyzing comprises employing entire facial action units for inferring emotions for non-talking faces.

8. The method as in claim 1, wherein said visual cue is at least one of: image frames, video.

9. A system for detecting and classifying talking segments of a face in a visual cue, said system performing at least one of the steps claimed in claims 1 to 8.

10. A computer program product for detecting and classifying talking segments of a face in a visual cue, said product comprising:
an integrated circuit further comprising at least one processor;
at least one memory having a computer program code within said circuit;
said at least one memory and said computer program code configured to, with said at least one processor, cause the product to:
normalize and localize said face region for each frame of said visual cue;
obtain a histogram of structure descriptive features for said frame in said visual cue;
derive integrated gradient histogram (IGH) from said descriptive features for said frame in said visual cue;
compute entropy of said integrated gradient histogram (IGH) for said frame in said visual cue;
perform segmentation of said IGH to detect talking segments for said face in said visual cues; and
analyze said segments for said frame in said visual cues for inferring emotions.

11. The computer program product as in claim 10, wherein said normalizing comprises employing pupil location to normalize said face image for said frame of said visual cue.

12. The computer program product as in claim 10, wherein said localizing comprises employing nose location to crop the mouth region in an accurate manner for said frame of said visual cue.

13. The computer program product as in claim 10, wherein deriving said IGH comprises obtaining uncertainty involved in the IGH representation for talking segments as compared to the non-talking segments.

14. The computer program product as in claim 10, wherein entropy of said IGH is computed for determining the amount of uncertainty involved in talking segments in said visual cue.

15. The computer program product as in claim 10, wherein said analysis comprises employing upper facial action units for inferring emotions for said talking faces.

16. The computer program product as in claim 10, wherein said analysis comprises employing entire facial action units for inferring emotions for non-talking faces.

Dated this the 16th day of April 2012

Signature

SANTOSH VIKRAM SINGH
Patent Agent
Agent for the applicant

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 Form-5.doc 2012-04-21
2 Form-1.doc 2012-04-21
3 Drawings.doc 2012-04-21
4 1519-CHE-2012 CORRESPONDENCE OTHERS 23-04-2012.pdf 2012-04-23
5 1519-CHE-2012 POWER OF ATTORNEY 23-04-2012.pdf 2012-04-23
6 1519-CHE-2012 FORM-18 23-04-2012.pdf 2012-04-23
7 1519-CHE-2012 POWER OF ATTORNEY 17-10-2012.pdf 2012-10-17
8 1519-CHE-2012 FORM-13 17-10-2012.pdf 2012-10-17
9 1519-CHE-2012 FORM-1 17-10-2012.pdf 2012-10-17
10 1519-CHE-2012 CORRESPONDENCE OTHERS 17-10-2012.pdf 2012-10-17
11 1519-CHE-2012 FORM-13 12-12-2013.pdf 2013-12-12
12 1519-CHE-2012 FORM-13 17-12-2013.pdf 2013-12-17
13 1519-CHE-2012-FER.pdf 2018-06-25
14 1519-CHE-2012-OTHERS [11-12-2018(online)].pdf 2018-12-11
15 1519-CHE-2012-FORM 3 [11-12-2018(online)].pdf 2018-12-11
16 1519-CHE-2012-FER_SER_REPLY [11-12-2018(online)].pdf 2018-12-11
17 1519-CHE-2012-DRAWING [11-12-2018(online)].pdf 2018-12-11
18 1519-CHE-2012-ABSTRACT [11-12-2018(online)].pdf 2018-12-11
19 1519-CHE-2012-CLAIMS [11-12-2018(online)].pdf 2018-12-11
20 1519-CHE-2012-COMPLETE SPECIFICATION [11-12-2018(online)].pdf 2018-12-11
21 1519-CHE-2012-AMMENDED DOCUMENTS [12-12-2018(online)].pdf 2018-12-12
22 1519-CHE-2012-FORM 13 [12-12-2018(online)].pdf 2018-12-12
23 1519-CHE-2012-MARKED COPIES OF AMENDEMENTS [12-12-2018(online)].pdf 2018-12-12
24 1519-CHE-2012-PETITION UNDER RULE 137 [12-12-2018(online)].pdf 2018-12-12
25 1519-CHE-2012-RELEVANT DOCUMENTS [12-12-2018(online)].pdf 2018-12-12
26 1519-CHE-2012-AMENDED DOCUMENTS [09-09-2019(online)].pdf 2019-09-09
27 1519-CHE-2012-FORM 13 [09-09-2019(online)].pdf 2019-09-09
28 1519-CHE-2012-FORM-26 [09-09-2019(online)].pdf 2019-09-09
29 1519-CHE-2012-HearingNoticeLetter-(DateOfHearing-06-03-2020).pdf 2020-02-07
30 1519-CHE-2012-Correspondence to notify the Controller [02-03-2020(online)].pdf 2020-03-02
31 1519-CHE-2012-US(14)-ExtendedHearingNotice-(HearingDate-06-03-2020).pdf 2020-03-06
32 1519-CHE-2012-Written submissions and relevant documents [18-03-2020(online)].pdf 2020-03-18
33 1519-CHE-2012-US(14)-ExtendedHearingNotice-(HearingDate-15-09-2022).pdf 2022-09-01
34 1519-CHE-2012-Correspondence to notify the Controller [09-09-2022(online)].pdf 2022-09-09
35 1519-CHE-2012-PatentCertificate20-03-2023.pdf 2023-03-20
36 1519-CHE-2012-IntimationOfGrant20-03-2023.pdf 2023-03-20

Search Strategy

1 2869_23-05-2018.pdf

ERegister / Renewals

3rd: 15 Jun 2023

From 17/04/2014 - To 17/04/2015

4th: 15 Jun 2023

From 17/04/2015 - To 17/04/2016

5th: 15 Jun 2023

From 17/04/2016 - To 17/04/2017

6th: 15 Jun 2023

From 17/04/2017 - To 17/04/2018

7th: 15 Jun 2023

From 17/04/2018 - To 17/04/2019

8th: 15 Jun 2023

From 17/04/2019 - To 17/04/2020

9th: 15 Jun 2023

From 17/04/2020 - To 17/04/2021

10th: 15 Jun 2023

From 17/04/2021 - To 17/04/2022

11th: 15 Jun 2023

From 17/04/2022 - To 17/04/2023

12th: 15 Jun 2023

From 17/04/2023 - To 17/04/2024

13th: 01 Apr 2024

From 17/04/2024 - To 17/04/2025

14th: 01 Apr 2025

From 17/04/2025 - To 17/04/2026