
An Artificial Intelligence Face Recognition System

Abstract: The present invention provides an artificial intelligence face recognition system and a method thereof. The artificial intelligence face recognition system comprises: an object detection unit for detecting and identifying an input image received from an input device; a habituation unit for habituating the face recognition system to the input image received from the object detection unit; and a recognition unit for recognizing the input image received from and habituated by the habituation unit. The present invention generates a proprietary binary value and a biometric template corresponding to the input image by habituating the artificial intelligence face recognition system, and then recognizes the input image by comparing the stored proprietary binary value with the selected biometric template corresponding to the input image. The present invention can identify a face with a minimum response time: the system instantly recognizes the facial image based on the accuracy level that is provided to the system.


Patent Information

Application #
Filing Date
21 November 2012
Publication Number
19/2016
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

FACEMAP INFOTECHNOLOGIES PRIVATE LIMITED
7-1-24/2/D, GREENDALE, AMEERPET, HYDERABAD - 500 016

Inventors

1. MR. VENKATA RAMANA AKULA
#202, SLS TOWERS, JP NAGAR, LIC COLONY, VIJAYAWADA - 520 008

Specification

FIELD OF THE INVENTION

The present invention generally relates to face recognition systems, and in particular to a method for differentiating faces accurately. In further aspects, the invention relates to an artificial intelligence method for the efficient recognition of faces.

BACKGROUND OF THE INVENTION

The concept of face recognition is becoming an important feature in various organizations for security purposes. Most organizations intend to use such systems to authenticate selected individuals on their premises, or to sound an alarm when a particular person is recognized.

Although the concept of recognition of the human face has been discussed at length in theory, implementations of it have been unsuccessful. The systems available in the market for face recognition have not succeeded because of their low levels of accuracy.

Other methods of facial detection based on different visual structures and cues exist. However, all of these methods largely work on the idea of a probabilistic match between the observed features and the stored profile of a specific person, and the probability of extracting all the features from a live video stream is currently low. Key cards can be forgotten, misplaced or misused. Fingerprints and hand prints can be manipulated by methods such as silicone gel. Retina recognition devices tend to have a harmful effect on the eyes with regular use. Voice recognition has proven unreliable because it is easily manipulated. The accuracy of gesture recognition is low when it comes to identifying individuals. In addition, the currently existing and claimed facial recognition methods suffer from low accuracy in real-time face recognition, long identification times, and limits on the number of faces that they can recognize.

The essential problem has been the low level of accuracy, which previous attempts at recognizing the human face have not successfully addressed.

Generally, system developers determine the particular parameters that are most important in determining a match, and give these parameters more 'weight' in the match/no-match decision than other parameters. The effectiveness of these decision rules distinguishes a successful face recognition system from unsuccessful ones, and considerable resources are expended to develop them.

Hence, we propose a system and method which can differentiate a face among multiple detected objects, habituate to it, and recognize it with a high level of accuracy and a minimum response time.

OBJECT OF THE INVENTION

One object of the invention is to overcome the disadvantages/drawbacks of the prior art.

It is an object of the present invention to provide an artificial intelligence face recognition system and method that facilitate authentication of a face within a fraction of a second.

It is a further object of the present invention to provide an artificial intelligence face recognition system and method which can identify the face of the same person even after it has aged over a period of a few years.

These objects and others are achieved by providing a face recognition system wherein the identification is achieved by an artificial intelligence method.

These and other advantages of the present invention will become readily apparent from the following detailed description taken in conjunction with the accompanying drawings.

The following presents a simplified summary of the invention in order to provide a basic understanding of aspects of the invention. This summary is not an extensive overview of the present invention. It is not intended to identify the key/critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts of the invention in a simplified form as a prelude to the more detailed description of the invention presented later.

According to one aspect of the present invention, there is provided an artificial intelligence face recognition system. The artificial intelligence face recognition system comprises: an object detection unit for detecting and identifying an input image received from an input device; a habituation unit for habituating the face recognition system to the input image received from the object detection unit; and a recognition unit for recognizing the input image habituated by and received from the habituation unit. A combination of these features generates a proprietary binary value and a biometric template corresponding to the input image by habituating the artificial intelligence face recognition system, and then recognizes the input image by comparing it against the database of stored proprietary binary values and selecting the accurate biometric template corresponding to the input image.
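
The specification describes these units only functionally and contains no source code. The following is a minimal Python sketch of how such a three-unit pipeline might be wired together; the class names, the cosine-similarity comparison, and the use of a raw byte string as the "proprietary binary value" are all illustrative assumptions, not the patented method.

```python
import numpy as np

class ObjectDetectionUnit:
    """Detects objects in an input image and returns the face region (assumed interface)."""
    def detect(self, image: np.ndarray) -> np.ndarray:
        # Placeholder: a real ODM would scan the image, split it into objects
        # by a predefined classification, and pick out the face.
        return image

class HabituationUnit:
    """Habituates the system to a face, producing a template and a binary value."""
    def habituate(self, face: np.ndarray) -> tuple[bytes, np.ndarray]:
        template = face.astype(np.float64).ravel()   # biometric template (assumed form)
        binary_value = template.tobytes()            # proprietary binary value (assumed form)
        return binary_value, template

class RecognitionUnit:
    """Compares a probe face against the stored repository of templates."""
    def __init__(self) -> None:
        self.repository: dict[bytes, np.ndarray] = {}

    def enroll(self, binary_value: bytes, template: np.ndarray) -> None:
        self.repository[binary_value] = template

    def recognize(self, probe: np.ndarray, threshold: float = 0.9) -> bytes | None:
        # Return the stored identity whose template is most similar to the probe.
        best_id, best_score = None, -1.0
        for identity, stored in self.repository.items():
            score = float(np.dot(stored, probe) /
                          (np.linalg.norm(stored) * np.linalg.norm(probe) + 1e-12))
            if score > best_score:
                best_id, best_score = identity, score
        return best_id if best_score >= threshold else None
```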

According to another aspect of the present invention, there is provided a method for face recognition. The face recognition method comprises the steps of: detecting and identifying an input image by using the object detection unit; habituating said face recognition system by using the habituation unit; recognizing said input image by using the recognition unit; and, accordingly, recognizing the input image by comparing a proprietary binary value corresponding to the input image with a biometric facial template for the input image.

According to another aspect, the present invention can easily differentiate between a photograph and a live human being. The system identifies the object by detecting an input from the user or device. The input is broken down into multiple objects depending on a predefined classification, and the desired object is then identified based on a preprogrammed set of values from the objects.

According to a further aspect, the present invention can recognize the face of the same person over a period of a few years. The system generates a unique binary value for each face by correlating the Comprehensive Facial Value and the biometric facial template.

According to another aspect, the present invention can identify the face within a minimum response time. The system instantly recognizes the facial image based on the accuracy level that is provided to the system.

Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.

BRIEF DESCRIPTION OF DRAWING

The features and drawings of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings, in which like characters represent like parts throughout the drawings, wherein:
Fig. 1 illustrates a block diagram depicting the Object Detection Unit, ODM (100) of the artificial intelligence face recognition system, in accordance with an aspect of the present invention.

Fig. 2 illustrates a block diagram depicting a Habituation Unit, HM (200) of the artificial intelligence face recognition system, in accordance with an aspect of the present invention.

Fig. 3 illustrates a block diagram depicting a Recognition Unit RM (300) of the artificial intelligence face recognition system, in accordance with an aspect of the present invention.

Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may not have been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various exemplary embodiments of the present disclosure.

Throughout the drawings, it should be noted that like reference numerals are used to depict the same or similar elements, features and structures.

DETAILED DESCRIPTION OF THE INVENTION

The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary.

Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.

The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention is provided for illustration purposes only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.

It is to be understood that the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.

By the term "substantially" it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.

Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.

It should be emphasized that the term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.

The present invention will be more clearly described with reference to the drawings showing embodiments thereof.

Accordingly, the present invention provides an artificial intelligence face recognition system and method thereof.

The present invention provides a novel method for face recognition in a more sophisticated, effective and efficient manner. The present invention can easily differentiate between a photograph and a live human being. The system identifies the object by detecting an input from a user or a device. The input device can be, but is not limited to, a video camera, a high-definition video camera, or another type of camera and the like. The input is broken down into multiple objects depending on a predefined classification, and the desired object is then identified based on a preprogrammed set of values from the objects. The present invention can recognize the face of the same person over a period of a few years. The system generates a unique binary value for each face by correlating the Comprehensive Facial Value and the biometric facial template. The present invention can identify the face with a minimum response time. The system instantly recognizes the facial image based on the accuracy level that is provided to the system.
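
The filing states that the system can distinguish a photograph from a live person but does not disclose the mechanism. A common stand-in is to require a minimum amount of frame-to-frame motion before treating the subject as live; the sketch below illustrates only that substitute heuristic, and the motion_threshold value is an assumption.

```python
import numpy as np

def appears_live(frames: list[np.ndarray], motion_threshold: float = 2.0) -> bool:
    """Heuristic photograph-vs-live check based on inter-frame motion (illustrative only)."""
    if len(frames) < 2:
        return False
    # Mean absolute pixel difference between consecutive frames.
    diffs = [np.abs(a.astype(np.float64) - b.astype(np.float64)).mean()
             for a, b in zip(frames[:-1], frames[1:])]
    return float(np.mean(diffs)) > motion_threshold
```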

Fig. 1 is a block diagram depicting an object detection unit, ODM (100), and illustrating the scheme for object detection by the artificial intelligence face recognition system. The object detection unit, ODM (A-100), is given an input through a user or an analog/digital capturing device (A-101). The object detection unit, ODM (A-100), scans the content of the input image and breaks it into multiple objects based on a predefined classification (A-102). Further, the desired object is identified from the broken-down objects based on a preprogrammed set of values from the objects (A-103). The objects of interest include, but are not limited to, faces, images, and vehicles.
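
The filing does not name a specific detector for the ODM. The sketch below uses OpenCV's stock Haar cascade purely as a stand-in for the "preprogrammed set of values"; the function name detect_face and the detector parameters are assumptions.

```python
import cv2

def detect_face(frame):
    """Scan the input frame and return the first detected face region, if any."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Break the frame into candidate objects and keep only face-like regions.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return frame[y:y + h, x:x + w]
```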

Fig. 2 is a block diagram of the Habituation Unit, HM (200), describing the process of habituation by the artificial intelligence face recognition system. The habituation unit, HM (200), works much like a human: just as a human being needs to first meet and get introduced to another person before being able to recognize him or her, the habituation unit, HM (200), enables the system to get habituated to a person first in order to recognize that person. This process can take place with a simple two-dimensional photograph, from digital footage (two-dimensional or three-dimensional), or even when the person is merely physically present before the system. The face is identified by the artificial intelligence face recognition system through the process illustrated by the Object Detection Unit (ODM), 100.

The identified face is used to generate a virtual facial image (B-201). To this virtual facial image, the system assigns axiomatic identities (B-202). The system then adjusts this virtual facial image based on predefined variable proportions, in multiple steps (B-203), and follows this up by assigning a categorical class-feature vector (B-204). A unique Comprehensive Facial Value ("CFV") is generated for this image upon allocation of a graphic range of values to it (B-205).

The CFV is correlated with a unique biometric facial template to generate a proprietary binary value (B-206), which is stored in the system's repository.
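
Terms such as "axiomatic identities", "graphic range of values" and the CFV are defined only abstractly in the filing. The sketch below substitutes plausible stand-ins (fixed-size resampling, normalization, quantization and hashing) solely to make the habituation data flow concrete; it is not the proprietary computation.

```python
import hashlib
import numpy as np

def habituate(face_gray: np.ndarray, size: int = 64):
    """Produce an illustrative (CFV, template, binary value) triple for an enrolled face."""
    # Virtual facial image: resample the face to a fixed geometry.
    ys = np.linspace(0, face_gray.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, face_gray.shape[1] - 1, size).astype(int)
    virtual = face_gray[np.ix_(ys, xs)].astype(np.float64)

    # Biometric facial template: normalized pixel values (assumed form).
    template = (virtual - virtual.mean()) / (virtual.std() + 1e-12)

    # Comprehensive Facial Value: quantize the template onto a fixed value range (assumed form).
    cfv = np.digitize(template.ravel(), bins=np.linspace(-3, 3, 256))

    # Proprietary binary value: correlate (here, concatenate and hash) the CFV and template.
    binary_value = hashlib.sha256(cfv.tobytes() + template.tobytes()).digest()
    return cfv, template, binary_value
```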

Fig. 3 is a block diagram depicting a recognition unit, RM (300), of the artificial intelligence face recognition system, in accordance with an aspect of the present invention. This process of recognition is faster and more efficient than a human being. Unlike a human being, the system is capable of habituating to, and subsequently recognizing, almost everyone on the planet.

This virtual facial image under recognition is sent to the repository and is processed as illustrated by the habituation unit, HM (200). The Comprehensive Facial Value (CFV), the biometric template and the proprietary binary values are generated and assigned to the facial vector (C-301). Further, the biometric template is decomposed to generate a singular value (C-302). If this singular value is positive, it is stored in the system memory separately. On the other hand, if the said value is negative, it is converted into a positive value and the value thus derived is stored separately in the system memory. The system then assigns near vector and covariance values to the detected biometric template (C-303), whereupon sample parameters are derived for the biometric template (C-304).
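
A rough numerical reading of these steps, under stated assumptions: the singular values are obtained with NumPy's SVD (which already returns non-negative values, so the filing's negative-to-positive conversion reduces to an absolute value here), and the "near vector" and "sample parameters" are represented by simple row means and summary statistics.

```python
import numpy as np

def prepare_template(template: np.ndarray):
    """Recognition-side preprocessing loosely following the steps of Fig. 3 (illustrative)."""
    matrix = np.asarray(template, dtype=np.float64)
    if matrix.ndim == 1:                      # accept a flattened template as well
        side = int(np.sqrt(matrix.size))
        matrix = matrix.reshape(side, side)
    # Singular values of the template; np.abs mirrors the negative-to-positive step.
    singular_values = np.abs(np.linalg.svd(matrix, compute_uv=False))
    covariance = np.cov(matrix)               # covariance across template rows
    near_vector = matrix.mean(axis=0)         # stand-in for the 'near vector'
    sample_params = {"mean": float(matrix.mean()), "std": float(matrix.std())}
    return singular_values, covariance, near_vector, sample_params
```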

The system then considers the corner nodal points of the biometric template and calculates the local receptive fields, shared weights and spatial sub-sampling (C-305). The biometric template is compared against the repository for a match. The system instantly recognizes the facial image based on the accuracy level that is provided to the system, and a new Comprehensive Facial Value (CFV) of the image gets appended to the existing Comprehensive Facial Value (CFV) if a match is found. Hence, the artificial intelligence face recognition system recognizes the faces which are fed into the repository with high efficiency and accuracy.
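
"Local receptive fields, shared weights and spatial sub-sampling" conventionally denote a convolution followed by pooling. The filing does not give the kernel, the pooling size, or how corner nodal points are chosen, so the sketch below is only the conventional construction with assumed values.

```python
import numpy as np

def conv_subsample(image: np.ndarray, kernel: np.ndarray, pool: int = 2) -> np.ndarray:
    """Local receptive fields with shared weights, followed by spatial sub-sampling."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    # Shared weights: the same kernel is applied at every local receptive field.
    conv = np.array([[np.sum(image[i:i + kh, j:j + kw] * kernel)
                      for j in range(w)] for i in range(h)])
    # Spatial sub-sampling: average pooling over non-overlapping windows.
    ph, pw = h // pool, w // pool
    return conv[:ph * pool, :pw * pool].reshape(ph, pool, pw, pool).mean(axis=(1, 3))
```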

Depending on how the System is customized for the requirement, the System accurately time-stamps all individuals based on face recognition, taking less than 1 second to identify each face.
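
A minimal sketch of the time-stamping step, assuming a recognizer object with a recognize() method (as in the earlier pipeline sketch) and a simple list-of-dicts log format; both are assumptions, as the filing only states that each identification is time-stamped and takes under one second.

```python
import time

def recognize_and_stamp(recognizer, frame, log: list) -> object:
    """Recognize a face and append (identity, timestamp, elapsed seconds) to the log."""
    start = time.perf_counter()
    identity = recognizer.recognize(frame)      # assumed interface, see earlier sketch
    elapsed = time.perf_counter() - start
    log.append({"identity": identity,
                "timestamp": time.time(),
                "elapsed_s": round(elapsed, 3)})
    return identity
```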

The System has threshold settings of recognition that have been successfully implemented: in the default setting, a relatively low threshold level of recognition is used. In sensitive environments which require zero tolerance of false entries from a security point of view, the threshold of the System should be maintained at very high levels to adhere to such requirements.
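
The filing does not disclose numeric threshold values; the presets below are purely illustrative of a low default threshold versus a high-security one.

```python
# Illustrative threshold presets; the filing gives no numeric values.
THRESHOLDS = {
    "default": 0.80,        # relatively low threshold, tolerant matching
    "high_security": 0.99,  # zero tolerance of false accepts
}

def is_match(similarity: float, mode: str = "default") -> bool:
    """Accept a candidate only if its similarity clears the configured threshold."""
    return similarity >= THRESHOLDS[mode]
```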

The System coordinates with the ERP system of the company and thereby collates data in the required format and submits the same to various stakeholders for different purposes.
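
The "required format" for the ERP hand-off is not specified; a CSV export of the recognition log is shown here as one illustrative possibility, reusing the log fields assumed in the time-stamping sketch above.

```python
import csv

def export_for_erp(log: list[dict], path: str = "attendance_export.csv") -> None:
    """Write the recognition log to CSV, one illustrative hand-off format for an ERP."""
    with open(path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["identity", "timestamp", "elapsed_s"])
        writer.writeheader()
        writer.writerows(log)
```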

The methodology and techniques described with respect to the aforesaid embodiments can be performed using a machine or other computing device within which a set of instructions, when executed, may cause the machine to perform any one or more of the methodologies discussed above. In some embodiments, the machine operates as a standalone device. In some embodiments, the machine may be connected (e.g., using a network) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client user machine in a server-client user network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may comprise a server computer, a client user computer, a personal computer (PC), a tablet PC, a laptop computer, a desktop computer, a control system, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The machine may include a processor (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory and a static memory, which communicate with each other via a bus. The machine may further include a video display unit (e.g., a liquid crystal display (LCD), a flat panel, a solid state display, or a cathode ray tube (CRT)). The machine may include an input device (e.g., a keyboard) or touch-sensitive screen, a cursor control device (e.g., a mouse), a disk drive unit, a signal generation device (e.g., a speaker or remote control) and a network interface device.

The disk drive unit may include a machine-readable medium on which is stored one or more sets of instructions (e.g., software) embodying any one or more of the methodologies or functions described herein, including those methods illustrated above. The instructions may also reside, completely or at least partially, within the main memory, the static memory, and/or within the processor during execution thereof by the machine. The main memory and the processor also may constitute machine-readable media.

Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays and other hardware devices can likewise be constructed to implement the methods described herein. Applications that may include the apparatus and systems of various
embodiments broadly include a variety of electronic and computer systems. Some embodiments implement functions in two or more specific interconnected hardware units or devices with related control and data signals communicated between and through the units, or as portions of an application-specific integrated circuit. Thus, the example system is applicable to software, firmware, and hardware implementations.

The present disclosure contemplates a machine readable medium containing instructions, or that which receives and executes instructions from a propagated signal so that a device connected to a network environment can send or receive voice, video or data, and to communicate over the network using the instructions. The instructions may further be transmitted or received over a network via the network interface device.

While the machine-readable medium can be a single medium, the term "machine-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-readable medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.

The term "machine-readable medium" shall accordingly be taken to include, but not be limited to: tangible media; solid-state memories such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories; and magneto-optical or optical media such as a disk or tape. A non-transitory medium or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a machine-readable medium or a distribution medium, as listed herein and including art-recognized equivalents and successor media, in which the software implementations herein are stored.

The illustrations of arrangements described herein are intended to provide a general understanding of the structure of various embodiments, and they are not intended to serve as a complete description of all the elements and features of apparatus and systems that might make use of the structures described herein. Many other arrangements will be apparent to those of skill in the art upon reviewing the above description. Other arrangements may be utilized and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. Figures are also merely representational and may not be drawn to scale. Certain proportions thereof may be exaggerated, while others may be minimized. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Thus, although specific arrangements have been illustrated and described herein, it should be appreciated that any arrangement calculated to achieve the same purpose may be substituted for the specific arrangement shown. This disclosure is intended to cover any and all adaptations or variations of various embodiments and arrangements of the invention. Combinations of the above arrangements, and other arrangements not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. Therefore, it is intended that the disclosure not be limited to the particular arrangement(s) disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments and arrangements falling within the scope of the appended claims.

ADVANTAGES:

• Very high accuracy rate across all industrial applications
• Can be applied to verify the authenticity of an individual in secure settings like financial institutions, information technology companies, etc.
• Tamper proof, since the administrator at the client can be given customized rights depending on the situation
• Compatible with any standard ERP system

APPLICATION AREAS:

The preferred use is security enhancement. In that regard, and in addition, the present invention has a host of industrial applications, including the following:

• Housing communities, apartment buildings
• Airports, railway stations, bus stations
• IT/ITES offices
• Factories and other settings where a large workforce is engaged
• Manufacturing facilities
• Cash points
• Stadiums
• Public transportation systems
• Banks, financial institutions, ATMs
• Government offices (passport/visa verification, driver's license procedure etc)
• Electoral process (identification of voters)
• Businesses of all kinds (access, login)
• Public gatherings
• Traffic monitoring systems
• Hotels & restaurants

WE CLAIM:

1. An artificial intelligence face recognition system, said system comprising:

an object detection unit for detecting and identifying an input image received from an input device;

a habituation unit for habituating said face recognition system for said input image received from said object detection unit;

a recognition unit for recognizing said input image habituated and received from said habituation unit;

wherein the said system is characterized by generating a proprietary binary value and a biometric template corresponding to said input image by the said habituation unit and then recognizing said input image by comparing stored said proprietary binary value and selected said biometric template corresponding to said input image using said recognition unit.

2. The system as claimed in claim 1, wherein said detection unit is adapted to scan said input image.

3. The system as claimed in claim 1, wherein said detection unit scanning said input image divides it into multiple object(s) based on pre-defined classification.

4. The system as claimed in claim 3, further comprising identifying said object (s) based on preprogrammed set of value(s).

5. The system as claimed in claim 1, wherein habituating said system by said habituation unit comprises:

a. generating a virtual facial image for said identified object(s) received from said object detection unit;
b. assigning an axiomatic identity for said virtual facial image;
c. adjusting said virtual facial image based on predefined variable proportions;
d. generating said proprietary binary value by correlating a comprehensive facial value with said biometric facial template corresponding to said virtual facial image;
e. storing said binary value in a system repository.

6. The system as claimed in claim 5, wherein adjusting of said virtual facial image comprises:

a. assigning a categorical class-feature vector to said virtual facial image;
b. allocating a graphic range of values to said virtual facial image for generating said comprehensive facial value corresponding to said virtual facial image.

7. The system as claimed in claim 1, wherein recognizing said input image by said recognition unit comprises:

a. assigning said comprehensive facial value, said biometric template and said proprietary binary value to said categorical class-feature vector;
b. generating a singular value by decomposing said biometric template;

i. if said singular value is positive, it is stored in a system memory,
ii. if said value is negative, it is converted into a positive value and then stored in said system memory.

8. The system as claimed in claim 7, further comprising

a. assigning a near vector and a co-variance value to said biometric template received from said system memory;
b. deriving a sample parameter to said biometric template received from step (a);
c. calculating a local receptive field, shared weight and spatial sub-sampling by considering a corner nodal point of said biometric template received from step (b);
d. comparing said biometric template received from step (c) with said binary value stored in said system repository;
e. recognizing said input image,
i. if said biometric template is matched with the stored binary value corresponding to said input image, thereby appending a new comprehensive facial value to the existing comprehensive facial value of said input image;

ii. if said biometric template is not matched, said face recognition system rejects said input image.

9. The system as claimed in claim 1, wherein said input device is selected from an image capturing device and the like.

10. The system as claimed in claim 1, wherein said input image is selected from a user's face, a two dimensional analog image, two dimensional or three dimensional digital image and the like.

11. The system as claimed in any of the preceding claims 1 to 10 wherein said recognition of face takes less than 1 second.

12. A method for face recognition, said method comprising steps of:

a. detecting and identifying an input image by using an object detection unit;
b. habituating said face recognition system by using a habituation unit;
c. recognizing said input image by using a recognition unit;
d. wherein recognizing said input image by comparing a proprietary binary value corresponding to said input image with a biometric facial template for said input image.

13. The method as claimed in claim 12, wherein detecting and identifying said input image by using said object detection unit comprises:

i. scanning and dividing said input image captured using an input means into multiple object(s) depending on pre-defined classification;
ii. identifying said object(s) based on a preprogrammed set of value(s).

14. The method as claimed in claim 12, wherein habituating by using said habituation unit comprises

a. generating a virtual facial image for said identified object(s) received from said object detection unit;
b. assigning an axiomatic identity for said virtual facial image;
c. adjusting said virtual facial image based on predefined variable proportions;
d. generating said proprietary binary value by correlating a comprehensive facial value with said biometric facial template corresponding to said virtual facial image;
e. storing said binary value in a system repository.

15. The method as claimed in claim 14, wherein adjusting of said virtual facial image comprises:

a. assigning a categorical class-feature vector to said virtual facial image,
b. allocating a graphic range of values to said virtual facial image for generating said comprehensive facial value corresponding to said virtual facial image.

16. The method as claimed in claim 12, wherein recognizing said input image by said recognition unit comprises:

a. assigning said comprehensive facial value, said biometric template and said proprietary binary value to a facial vector,
b. generating a singular value by decomposing said biometric template;

i. if said singular value is positive, it is stored in a system memory,
ii. if said value is negative, it is converted into a positive value and then stored in said system memory.

17. The method as claimed in claim 16, further comprising
a. assigning a near vector and a co-variance value to said biometric template received from said system memory;
b. deriving a sample parameter to said biometric template received from step (a);
c. calculating a local receptive field, shared weight and spatial sub-sampling by considering a corner nodal point of said biometric template received from step (b);
d. comparing said biometric template received from step (c) with said binary value stored in said system repository;
e. recognizing said input image,

i. if said biometric template is matched with the stored binary value corresponding to said input image, thereby appending a new comprehensive facial value to the existing comprehensive facial value of said input image;

ii. if said biometric template is not matched, said face recognition system rejects said input image.

18. The method as claimed in claim 12, wherein said input image is selected from a user's face, a two dimensional analog image and two dimensional and three dimensional digital images and the like.

19. The method as claimed in claim 12, wherein said input device is selected from an image capturing device and the like.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 4862-CHE-2012 POWER OF ATTORNEY 21-11-2012.pdf 2012-11-21
2 4862-CHE-2012-FORM-26 [11-07-2023(online)].pdf 2023-07-11
3 4862-CHE-2012 FORM-2 21-11-2012.pdf 2012-11-21
4 4862-CHE-2012-Correspondence to notify the Controller [08-07-2023(online)].pdf 2023-07-08
5 4862-CHE-2012-US(14)-HearingNotice-(HearingDate-12-07-2023).pdf 2023-06-17
6 4862-CHE-2012 FORM-1 21-11-2012.pdf 2012-11-21
7 4862-CHE-2012-CLAIMS [21-11-2020(online)].pdf 2020-11-21
8 4862-CHE-2012 DRAWINGS 21-11-2012.pdf 2012-11-21
9 4862-CHE-2012-FER_SER_REPLY [21-11-2020(online)].pdf 2020-11-21
10 4862-CHE-2012 DESCRIPTION (PROVISIONAL) 21-11-2012.pdf 2012-11-21
11 4862-CHE-2012-OTHERS [21-11-2020(online)].pdf 2020-11-21
12 4862-CHE-2012 CORRESPONDENCE OTHER 21-11-2012.pdf 2012-11-21
13 4862-CHE-2012-FER.pdf 2020-05-22
14 4862-CHE-2012 POWER OF ATTORNEY 18-11-2013.pdf 2013-11-18
15 Form-18(Online).pdf 2016-11-09
16 4862-CHE-2012 FORM-5 18-11-2013.pdf 2013-11-18
17 4862-CHE-2012 FORM-2 18-11-2013.pdf 2013-11-18
18 Form 18 [08-11-2016(online)].pdf 2016-11-08
19 4862-CHE-2012 ABSTRACT 18-11-2013.pdf 2013-11-18
20 4862-CHE-2012 FORM-13 18-11-2013.pdf 2013-11-18
21 4862-CHE-2012 CLAIMS 18-11-2013.pdf 2013-11-18
22 4862-CHE-2012 DRAWING 18-11-2013.pdf 2013-11-18
23 4862-CHE-2012 CORRESPONDENCE OTHERS 18-11-2013.pdf 2013-11-18
24 4862-CHE-2012 DESCRIPTION (COMPLETE) 18-11-2013.pdf 2013-11-18

Search Strategy

1 2020-05-1815-45-06E_18-05-2020.pdf
2 2021-01-0617-11-16AE_06-01-2021.pdf