Abstract: The present invention provides a face recognition system (102) and a face recognition method thereof. A face detection unit (118) detects a face in an image and generates a cropped image. A face alignment unit (120) aligns the cropped image to generate an aligned image. A face embedding unit (122) generates a plurality of embeddings based on the aligned image. A face classification unit (124) recognizes the detected face based on the embeddings. The face recognition system (102) recognizes persons with high efficiency, whether the persons are masked or unmasked.
TECHNICAL FIELD
[1] The present invention relates generally to image processing and computer vision, and particularly to face recognition.
BACKGROUND
[2] In modern times, biometric identification systems that use one or more biological features of users have gained popularity. Such biometric systems have a wide range of applications, such as lockers or safes, smartphones, doors, and the like. Widely used biometric systems include fingerprint scanners, face recognition systems, and iris scanners. Since biological features such as the face, fingerprints, or iris are unique to every person, biometric systems are more secure than systems that use passwords.
[3] Face recognition systems are widely used owing to their simplicity of use: users do not have to consciously perform any action to be authenticated. Face recognition involves complex image processing in real-world applications, where illumination, occlusion, and imaging conditions affect live images. The first step for a face recognition system is to acquire an image from a camera. The second step is face detection in the acquired image. In the third step, the face recognition system extracts face images from the output of the face detection on the input image. In the final step, a person's identity is provided as the result of the face recognition. In the most common application of face recognition systems, the camera captures an image of the user and matches the image against a database of images. If a match is found, the user is validated. If no match is found, the user is denied access or service. These rules of face recognition may differ for different applications.
[4] However, conventional face recognition systems fail to identify the face of the user if any part of the face is blocked, either by the presence of an obstruction between the camera and the user or by accessories worn by the user, such as face masks. The use of face masks has increased owing to the risk of infection by various airborne or contagious viruses. In fact, wearing face masks is even mandatory at places such as nursing homes, hospitals, and clinics. In such cases, conventional face recognition systems fail, and users are forced to resort to alternative authentication systems.
[5] In a few conventional face recognition systems, image processing is used to determine whether the user is wearing a face mask. However, even such systems do not help in identifying the user when the face of the user is partially covered by the face mask.
[6] Thus, there is a need for a face recognition system that identifies the user even when the user wears a face mask.
SUMMARY
[7] This summary is provided to introduce concepts related to a face recognition system and a face recognition method. This summary is neither intended to identify essential features of the present invention nor is it intended for use in determining or limiting the scope of the present invention.
[8] In an embodiment of the present invention, a face recognition system is provided. The face recognition system includes a face detection unit, a face alignment unit, a face embedding unit, a database, and a face classification unit. The face detection unit is configured to detect a face in an image. The face detection unit is configured to create a plurality of digital masks. The face detection unit is configured to apply the digital masks on the face. Each digital mask indicates a distinct masked variation of the face. The face detection unit is configured to generate a plurality of cropped images indicative of masked and unmasked variations of the face. The face alignment unit is configured to receive the cropped images. The face alignment unit is configured to align the cropped images to generate a plurality of aligned images. The face embedding unit is configured to receive the aligned images. The face embedding unit is configured to generate a plurality of embeddings based on the aligned images using deep learning techniques. The face embedding unit is configured to aggregate the embeddings to form a single embedding. The database is configured to receive and store the embeddings and the single embedding. Each embedding corresponds to a person. The face classification unit is configured to match the detected face with one or more embeddings or the single embedding stored in the database. The face classification unit is configured to update the stored embeddings of the person if a valid match is found. The face classification unit is configured to store the embeddings in the database if a valid match is not found, in which case the embeddings correspond to a new person.
[9] In an embodiment of the present invention, a face recognition method is provided. The method includes detecting a face in an image by a face detection unit. The method includes creating a plurality of digital masks by the face detection unit. The method includes applying the digital masks on the face by the face detection unit. Each digital mask indicates a distinct masked variation of the face. The method includes generating a plurality of cropped images indicative of masked and unmasked variations of the face by the face detection unit. The method includes aligning the cropped images for generating a plurality of aligned images by a face alignment unit. The method includes generating a plurality of embeddings based on the aligned images using deep learning techniques by a face embedding unit. The method includes aggregating the embeddings to form a single embedding by the face embedding unit. The method includes storing the plurality of embeddings and the single embedding in a database. Each embedding corresponds to a person. The method includes matching the detected face with one or more embeddings or the single embedding stored in the database, by a face classification unit. The method includes updating the stored embeddings of the person, by the face classification unit, if a valid match is found. The method includes storing the embeddings in the database, by the face classification unit, if a valid match is not found, in which case the embeddings correspond to a new person.
[10] In an embodiment, the face detection unit detects a plurality of landmark points in the face. The face detection unit generates one or more digital masks based on the landmark points.
[11] In an embodiment, the face detection unit generates one or more digital masks based on randomly identified regions of the face.
[12] In an embodiment, one or more embeddings in the database correspond to the same person.
[13] In an embodiment, the face classification unit matches the detected face to each embedding individually.
[14] In an embodiment, the face classification unit matches the detected face with a weighted average of one or more embeddings.
[15] In an embodiment, the embeddings are numerical vectors generated in a 512-dimensional space.
[16] In an embodiment, the deep learning techniques include Arcface and/or a Deep Convolutional Neural Network with angular loss.
[17] In an embodiment, the face classification unit detects a degree of similarity between the detected face and the matched embeddings. The face classification unit ranks the matched embeddings based on the degree of similarity.
[18] In an embodiment, the face recognition system includes an I/O unit that provides information of one or more persons corresponding to the ranked embeddings.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[19] The detailed description is described with reference to the accompanying figures. The same numbers are used throughout the drawings to reference like features and modules.
[20] Figure 1 illustrates a schematic block diagram of an authentication system in accordance with an embodiment of the present invention.
[21] Figure 2 illustrates a schematic flow diagram of a method of face recognition in accordance with an embodiment of the present invention.
[22] Figure 3 illustrates a flowchart of a method of face recognition in accordance with an embodiment of the present invention.
[23] It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present invention.
[24] Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in a computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[25] The various embodiments of the present invention provide a face recognition system and a face recognition method.
[26] In the following description, for purpose of explanation, specific details are set forth in order to provide an understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these details.
[27] One skilled in the art will recognize that various embodiments of the present invention, some of which are described below, may be incorporated into a number of systems.
[28] However, the systems and methods are not limited to the specific embodiments described herein. Further, the structures and devices shown in the figures are illustrative of exemplary embodiments of the present invention and are presented in simplified form so as to avoid obscuring the present invention.
[29] Furthermore, connections between components and/or modules within the figures are not intended to be limited to direct connections. Rather, these components and modules may be modified, re-formatted or otherwise changed by intermediary components and modules.
[30] References in the present invention to “one embodiment” or “an embodiment” mean that a particular feature, structure, characteristic, or function described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase “in an implementation” or “in an embodiment” in various places in the specification are not necessarily all referring to the same embodiment or implementation.
[31] Referring now to Figure 1, an authentication system (100) is shown in accordance with an embodiment of the present invention. The authentication system (100) includes a face recognition system (102), a camera (104), and an external source (105). The camera (104) and the external source (105) may be connected to the face recognition system (102) by wired or wireless communication links. The face recognition system (102) includes a processor (106), a memory (108), an Input/Output (I/O) unit (110), a database (112), an Artificial Intelligence (AI) unit (114), a data acquisition unit (116), a face detection unit (118), a face alignment unit (120), a face embedding unit (122), and a face classification unit (124). The aforementioned units are interconnected by an internal bus (not shown).
[32] The face recognition system (102) identifies a person even when the person wears a face mask, the person's face is obstructed, or the person's face is only partially visible. The face recognition system (102) may also be used to identify people in photos, videos, or real-time camera feeds from CCTV cameras or webcams.
[33] In an embodiment, a face recognition method implemented by the face recognition system (102) is provided. The face recognition method is AI-based; the units (118-124) collectively implement it using various AI techniques performed by the AI unit (114). For instance, the units (118-124) implement various deep learning techniques and libraries such as MTCNN, arcface, Deep CNN with angular loss, and Dlib by way of the AI unit (114). The AI unit (114) may comprise CPUs or GPUs specifically designed to implement such AI techniques.
[34] Referring now to Figure 2, a schematic flow diagram of a method of face recognition is illustrated in accordance with an embodiment of the present invention.
[35] In operation, the camera (104) and the external source (105) interface with the face recognition system (102) through the I/O unit (110). The camera (104) captures an image or a video of a person and provides the captured image or frames of the video to the face recognition system (102). Alternatively, the external source (105) provides an image or a frame of a video to the face recognition system (102). The data acquisition unit (116) receives the image. The data acquisition unit (116) may convert the image into a digital format, when applicable. Thereafter, the data acquisition unit (116) provides the image to the face detection unit (118). The image may show a person without a face mask, a person wearing a face mask, or a person whose face is obstructed and therefore not wholly visible.
[36] The face detection unit (118) determines whether there are any faces in the image. If the face detection unit (118) detects faces in the image, the face detection unit (118) determines the location or position of the detected faces. The face detection unit (118) provides cropped images containing the face in the received image, along with 'landmark' points on the face, such as the eyes and nose of the person. The face detection unit (118) uses the landmark points to create various kinds of digital masks on the detected face, covering parts of the face and thereby creating more variations of the cropped face.
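By way of illustration only, the following is a minimal sketch of this detection-and-masking step, assuming the Dlib frontal face detector and its standard 68-point landmark model; the choice of landmark indices for the digital mask region is an illustrative assumption, not a prescription of the invention.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Assumes Dlib's standard 68-point landmark model file is available locally.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_and_mask(image_bgr):
    """Detect a face, fetch landmark points, and paint a digital 'mask' over
    the lower face, yielding unmasked and masked variations of the crop."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    variations = []
    for rect in detector(gray):
        pts = np.array([(p.x, p.y) for p in predictor(gray, rect).parts()],
                       dtype=np.int32)
        crop = image_bgr[rect.top():rect.bottom(), rect.left():rect.right()].copy()
        variations.append(crop)  # unmasked variation
        # Illustrative mask region: jawline points 2-14 plus a nose-bridge point.
        region = np.vstack([pts[2:15], pts[28:29]]) - [rect.left(), rect.top()]
        masked = crop.copy()
        cv2.fillPoly(masked, [cv2.convexHull(region)], color=(0, 0, 0))
        variations.append(masked)  # masked variation
    return variations
```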
[37] The face alignment unit (120) aligns the detected faces to normalize the scales and orientations of the cropped images, thereby making the face recognition system (102) robust and efficient. For instance, if the face is turned sideways, upwards, or tilted, the face alignment unit (120) aligns the face to a normal front-facing axis using a deep learning model such as MTCNN by way of the AI unit (114). Thereafter, the face alignment unit (120) creates aligned images, as required for further operation by the models in the AI unit (114), and provides the same to the face embedding unit (122), which generates one or more robust features from them. In an example, the aligned images are images of a fixed standard size. The face alignment unit (120) may perform further cleanup techniques on the aligned images before providing the same to the face embedding unit (122).
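A minimal sketch of one common alignment approach follows, assuming eye landmark coordinates are available from the detection step; the 112x112 output size is an assumption matching typical arcface inputs and is not mandated by the invention.

```python
import cv2
import numpy as np

def align_face(crop_bgr, left_eye, right_eye, size=112):
    """Rotate and scale a face crop so the eyes lie on a horizontal line,
    then resize to a fixed standard size for the embedding models."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))  # in-plane tilt of the face
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(crop_bgr, rotation,
                             (crop_bgr.shape[1], crop_bgr.shape[0]))
    return cv2.resize(rotated, (size, size))
```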
[38] The face embedding unit (122) receives the aligned images and analyzes the same. The face embedding unit (122) provides numerical vectors that represent each detected face in the received images in a 512-dimensional space. The vector representation is computed by pre-trained deep learning methods/models using the AI unit (114). The face embedding unit (122) uses multiple models to create multiple embeddings, and the embeddings are further processed to create the best representation of the face. The embeddings may be combined, aggregated, or kept separate.
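The sketch below shows one way to compute and aggregate per-model embeddings; the `models` argument is a hypothetical list of callables wrapping pre-trained networks (e.g., arcface or facenet), each assumed to return a 512-dimensional vector.

```python
import numpy as np

def embed(aligned_face, models):
    """Compute one 512-d embedding per model for an aligned face, then
    aggregate them into a single embedding by a non-weighted average."""
    per_model = []
    for model in models:  # each model is assumed to return a 512-d vector
        vec = np.asarray(model(aligned_face), dtype=np.float32)
        per_model.append(vec / np.linalg.norm(vec))  # L2-normalize for cosine use
    single = np.mean(per_model, axis=0)
    return per_model, single / np.linalg.norm(single)
```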
[39] The face classification unit (124) classifies the detected faces with respect to the faces in the database (112). The vectors of similar faces are closer together in the 512-dimensional space. The face classification unit (124) uses a similarity measurement on the embeddings such as, but not limited to, cosine similarity, whose score may be normalized to a value between 0 and 1. The face classification unit (124) organizes, filters, and ranks images according to visual similarity. With multiple embeddings per user, there can be multiple comparisons, and the results may be combined, aggregated, or treated separately. The face classification unit (124) sets a threshold value indicative of a limit beyond which a similarity score is considered an acceptable result. This threshold is approximated through testing, and it may also be controlled by the administrator of the face recognition system.
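A minimal classification sketch follows, assuming L2-normalized embeddings; the threshold value shown is an assumption and would, as described above, be approximated through testing in practice.

```python
import numpy as np

THRESHOLD = 0.6  # illustrative only; approximated through testing in practice

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(query, gallery):
    """Rank stored embeddings by similarity to the query embedding.
    `gallery` maps a person label to a list of stored embeddings."""
    ranked = sorted(
        ((max(cosine_similarity(query, e) for e in embs), label)
         for label, embs in gallery.items()),
        reverse=True)
    if ranked and ranked[0][0] >= THRESHOLD:
        return ranked[0][1], ranked  # valid match plus ranked candidates
    return None, ranked  # no valid match: candidate new person of interest
```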
[40] If a valid match is found, the new embedding is used to update the existing embedding of the matched person in the database (112). A valid match can also be indicated by human intervention. If no match is found, the person is added to the database (112) with an auto-generated label as a new person of interest (PoI). The updated embedding allows the AI unit (114) to learn from the matching process and results in re-learning of the deep learning models therein. In an example, the embeddings are updated with the new embedding using any number of operations, such as, but not limited to, determining the aggregate or mean of the embeddings.
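One possible realization of the mean-based update is sketched below; maintaining a per-person observation count is an implementation assumption.

```python
import numpy as np

def update_embedding(stored, new, count):
    """Fold a new embedding into a person's stored embedding as a running
    mean, re-normalizing so that cosine comparisons remain consistent."""
    merged = (stored * count + new) / (count + 1)
    return merged / np.linalg.norm(merged), count + 1
```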
[41] For adding faces to the database (112) for the purpose of classification and identification, the face recognition system (102) fetches the images with named labels from the database (112) and analyzes the images to detect faces. In that, the face detection unit (118) detects the face using a first deep learning method/model in the AI unit (114), for instance, the (open-source) Dlib model, fetches the landmark points and the bounding box, and creates a cropped image based on the bounding box. The face detection unit (118) determines 'mask' landmark points and applies black color or smooth coloration to the pixels in the pre-determined 'mask' space on the cropped face to create a masked image or an obstructed image of the user, leaving only certain areas of the face exposed. The mask space may be defined as a predetermined region or randomly identified regions of the image on which pixel color manipulation techniques are applied. In an example, the mask space covers the randomly identified regions of the face. The face detection unit (118) may generate multiple mask spaces encompassing different regions of the face. In an example, the face detection unit (118) applies different kinds of masks. The face detection unit (118) stores the cropped image and the mask image in the database (112) and links the same with the person's name or a label. Following the face alignment and the image preprocessing by the face alignment unit (120), the face embedding unit (122) calculates separate embeddings for the full face and the masked face using a second deep learning method/model, for instance, the arcface model. If there is more than one image of a face, unobstructed or otherwise, the face embedding unit (122) generates embeddings for all of them. The face embedding unit (122) may merge the embeddings of each type of image or save the same separately. The merging can be done in many ways; in an example, a non-weighted averaging method is used. The merging can also be done over combinations of the face: for example, the full face and mask face embeddings can be merged for the same image, or all full face images for a person are merged, all mask face images for the same person are merged, and that embedding is saved separately. More than one mask can be applied, and the embedding combinations can be treated in a combined or standalone manner. The face embedding unit (122) stores the features in a pickle file or a similar file format associated with the person's name or label, along with the embeddings for the label. The face recognition system (102) performs the above process for all the images in the database (112).
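The enrollment flow described above can be sketched as follows, reusing the earlier hypothetical helpers; here `align` and `embed_one` are assumed single-argument wrappers around the alignment and embedding sketches, each returning one 512-dimensional vector per image, and the per-label pickle layout is an assumption.

```python
import pickle
import numpy as np

def enroll(label, images, detect_and_mask, align, embed_one):
    """Crop and mask each labeled image, align and embed every variation,
    merge full-face and mask-face embeddings separately with a non-weighted
    average, and persist the record to a pickle file keyed by the label."""
    full, masked = [], []
    for image in images:
        crops = detect_and_mask(image)  # alternating [crop, masked_crop, ...]
        for i, crop in enumerate(crops):
            vec = embed_one(align(crop))
            (full if i % 2 == 0 else masked).append(vec)
    record = {"full_face": np.mean(full, axis=0) if full else None,
              "mask_face": np.mean(masked, axis=0) if masked else None}
    with open(f"{label}.pkl", "wb") as f:
        pickle.dump(record, f)
```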
[42] In an embodiment, the face recognition system (102) generates embeddings for all images and an ID based on the received image, as depicted in Figures 2-3.
[43] Referring now to Figure 3, a flowchart of a method of face recognition is illustrated in accordance with an embodiment of the present invention.
[44] At step 302, the data acquisition unit (116) reads the image (or the frame) received from the camera (104) or the external source (105).
[45] At step 304, the data acquisition unit (116) converts the image from BGR color space to RGB color space.
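In a typical OpenCV-based pipeline this conversion is a single call, as in the hedged one-liner below (`bgr_frame` is a hypothetical variable holding the received frame).

```python
import cv2

# OpenCV reads frames as BGR; most deep learning models expect RGB input.
rgb_frame = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
```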
[46] At step 306, the data acquisition unit (116) passes the frame to the face detection unit (118) to detect the face. In that, the face detection unit (118) uses a third deep learning model, for instance, open source Multi-task Cascaded Convolutional Neural Networks (MTCNN) or the first deep learning model (Dlib) to detect whether there is a face in the received image.
[47] If at step 306 the face detection unit (118) detects that there is no face in the image, the face recognition system (102) skips all further steps and ascertains that no person is present in the image. Thereafter, a new image is received at step 302.
[48] If at step 306 the face detection unit (118) detects a face, step 308 is executed. At step 308, the face detection unit (118) attempts to create a mask on the user's face, using the first deep learning model (the Dlib library) in the AI unit (114) to fetch the landmark points. The landmark points are used to make a mask on the face within the frame based on a mask script and using the mask points. Since more than one kind of mask script is present, multiple images are generated for each image, which may be counted as modified versions of the image. In an example, region-specific parts of the image are tested to hide the parts of the face that are not of interest. The face detection unit (118) returns the detected face and mask face, along with the landmark points and the bounding box for the same.
[49] At step 310, the face detection unit (118) creates the cropped images based on the bounding box and sends the cropped images to the face alignment unit (120) to align the face image and generate images of the preferred size.
[50] At step 312, the face embedding unit (122) uses the AI unit (114) to run multiple deep learning models, such as, but not limited to, facenet, arcface, Deep CNN with angular loss, and SphereFace, to create multiple embeddings for all the images and the masked or modified versions of the images.
[51] At step 314, the face classification unit (124) compares the embeddings of the images from the previous step to the respective embeddings of saved Persons of Interest (PoI) from the database (112). The face classification unit (124) calculates the similarity for the face and the mask images. If the similarity is less than the threshold value for all the embeddings, the face classification unit (124) generates an ID for the person in the frame, saves the same in the database (112) with the embeddings, and adds it to the pickle file with the other labels. In that, the face classification unit (124) determines that the person is a new Person of Interest or an unknown person, and the new face is added to the database (112) for future reference. The embeddings of all modifications of the image of the person are also stored in the database (112).
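A minimal sketch of this register-unknown-person step follows; the auto-generated label format and the single-pickle database layout are assumptions for illustration.

```python
import pickle
import uuid

def register_new_poi(embeddings, db_path="poi_db.pkl"):
    """Auto-generate a label for an unmatched face and append its
    embeddings to the pickle store of Persons of Interest."""
    label = f"poi-{uuid.uuid4().hex[:8]}"  # auto-generated ID
    try:
        with open(db_path, "rb") as f:
            database = pickle.load(f)
    except FileNotFoundError:
        database = {}
    database[label] = embeddings
    with open(db_path, "wb") as f:
        pickle.dump(database, f)
    return label
```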
[52] At step 316, if the similarity scores between the calculated embeddings and the embeddings from the database (112) are less than the predetermined threshold value for all the embeddings, the face classification unit (124) generates a new ID.
[53] At step 318, the face classification unit (124) adds the new ID to the database (112) as a new temporary person of interest. In that, the face classification unit (124) may permanently add the same to the database (112) as a person of interest, or there may be a manual intervention. If either the full face similarity score or the mask face similarity score is greater than or equal to the threshold, the face classification unit (124) ascertains that the person matched from the database (112) with the highest similarity score is the person in the frame, hence a successful identification. The face recognition system (102) may also allow manual intervention, where an admin identifies or validates the person in the image against a person in the database (112) or adds them manually to the database (112). In either case, once the person is identified, the embedding is merged with the identified person's existing data and updated in the database (112).
[54] At step 320, the face classification unit (124) recognizes the face or faces in the image and prepares a list of all persons identified. Thereafter, the processor (106) draws the bounding box and the user label on the frame and returns the same by way of the I/O unit (110), along with the list of persons identified and information about them; for instance, the image with the bounding box and the user label is displayed on a screen. The bounding box may have a color based on the findings or business rules, for instance: a green box for verified people, a red box for known threats, a grey box for unknowns, or a yellow box for conflicting results between the models, which requires manual intervention. If the similarity is above the threshold for a matched person of interest (PoI) from the database (112), or a PoI is validated by an admin, the embedding is added to the database (112) and merged with the PoI's data. Optionally, when identifying the person, the face recognition system (102) may return a list of the closest matching persons instead of a single label. This may be used for manual validation. In an example, as the size of the PoI list increases, the accuracy increases. The extended list of PoIs may be sent to another classification model for further fine-tuning.
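The annotation step can be sketched with OpenCV drawing primitives as below; the status names and BGR color values are illustrative assumptions mirroring the color rules above.

```python
import cv2

COLORS = {"verified": (0, 255, 0),      # green
          "threat":   (0, 0, 255),      # red
          "unknown":  (128, 128, 128),  # grey
          "conflict": (0, 255, 255)}    # yellow (BGR order)

def annotate(frame, box, label, status):
    """Draw the bounding box and user label in a status-dependent color."""
    x1, y1, x2, y2 = box
    color = COLORS.get(status, COLORS["unknown"])
    cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
    cv2.putText(frame, label, (x1, y1 - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return frame
```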
[55] In an exemplary embodiment, the face recognition system (102) enhances dark or night images with one or more deep learning models from the AI Unit (114), thereby making the image more visible. This results in an increase in similarity scores and hence better identification chances.
[56] In an exemplary embodiment, the face recognition system (102) increases the resolution of the image using one or more deep learning models in the AI unit (114), so that a person who is far from the camera (104) is enhanced, resulting in a better chance of identifying the person.
[57] In an exemplary embodiment, the face recognition system (102) reduces noise or graininess in the image, which is usually brought about by low-light conditions or low camera quality, by passing the image to the AI unit (114).
[58] In an exemplary embodiment, the face classification unit (124) implements a technique such as, but not limited to, clustering, sorting, or hashing. In that, the face classification unit (124) sorts and arranges the embeddings, thereby enabling faster searching and comparison of the embeddings.
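One simple realization of this idea is a vectorized search over a pre-stacked embedding matrix, as sketched below; stacking all L2-normalized embeddings row-wise into one matrix is an implementation assumption.

```python
import numpy as np

def fast_search(query, matrix, labels, top_k=5):
    """With all stored embeddings L2-normalized and stacked row-wise into
    `matrix`, cosine similarity reduces to one matrix-vector product and
    ranking to a sort, keeping comparisons fast as the database grows."""
    similarities = matrix @ query            # all cosine scores in one shot
    order = np.argsort(similarities)[::-1][:top_k]
    return [(labels[i], float(similarities[i])) for i in order]
```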
[59] In an exemplary embodiment, the face recognition system (102) integrates with biometric systems, systems that classify people based on part or whole of their body, or electronic ID systems such as RFID, to associate people with their faces. This can be used in conjunction with the face identification system to further increase the accuracy of the face recognition system (102) in identifying the person, or to serve as a double authentication method.
[60] The foregoing description of the invention has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the invention.
CLAIMS:
1. A face recognition system (102), comprising:
a face detection unit (118) configured to:
detect a face in an image,
create a plurality of digital masks,
apply the digital masks on the face, wherein each digital mask indicates a distinct masked variation of the face, and
generate a plurality of cropped images indicative of masked and unmasked variations of the face;
a face alignment unit (120) configured to:
receive the cropped images, and
align the cropped images to generate a plurality of aligned images;
a face embedding unit (122) configured to:
receive the aligned images,
generate a plurality of embeddings based on the aligned images using deep learning techniques, and
aggregate the embeddings to form a single embedding;
a database (112) configured to receive and store the embeddings and the single embedding, wherein each embedding corresponds to a person; and
a face classification unit (124) configured to:
match the detected face with one or more embeddings or the single embedding, stored in the database (112),
update the stored embeddings of the person if a valid match is found, and
store the embeddings in the database (112) if a valid match is not found, said embeddings corresponding to a new person.
2. The face recognition system (102) as claimed in claim 1, wherein the face detection unit (118) is configured to:
detect a plurality of landmark points in the face, and
generate one or more digital masks based on the landmark points.
3. The face recognition system (102) as claimed in claim 1, wherein the face detection unit (118) is configured to generate one or more digital masks based on randomly identified regions of the face.
4. The face recognition system (102) as claimed in claim 1, wherein one or more embeddings in the database (112) correspond to the same person.
5. The face recognition system (102) as claimed in claim 4, wherein the face classification unit (124) is further configured to match the detected face to each embedding individually.
6. The face recognition system (102) as claimed in claim 4, wherein the face classification unit (124) is further configured to match the detected face with a weighted average of one or more embeddings.
7. The face recognition system (102) as claimed in claim 1, wherein the embeddings are numerical vectors generated in a 512-dimensional space.
8. The face recognition system (102) as claimed in claim 1, wherein the deep learning techniques include Arcface and/or a Deep Convolutional Neural Network with angular loss.
9. The face recognition system (102) as claimed in claim 1, wherein the face classification unit (124) is configured to:
detect a degree of similarity between the detected face and the matched embeddings; and
rank the matched embeddings based on the degree of similarity.
10. The face recognition system (102) as claimed in claim 9, comprising an I/O unit (110) configured to provide information of one or more persons corresponding to the ranked embeddings.
11. A face recognition method, comprising:
detecting, by a face detection unit (118), a face in an image;
creating, by the face detection unit (118), a plurality of digital masks;
applying, by the face detection unit (118), the digital masks on the face, wherein each digital mask indicates a distinct masked variation of the face;
generating, by the face detection unit (118), a plurality of cropped images indicative of masked and unmasked variations of the face;
aligning, by a face alignment unit (120), the cropped images for generating a plurality of aligned images;
generating, by a face embedding unit (122), a plurality of embeddings based on the aligned images using deep learning techniques;
aggregating, by the face embedding unit (122), the embeddings to form a single embedding;
storing, in a database (112), the plurality of embeddings and the single embedding, wherein each embedding corresponds to a person;
matching, by a face classification unit (124), the detected face with one or more embeddings or the single embedding, stored in the database (112);
updating, by the face classification unit (124), the stored embeddings of the person if a valid match is found; and
storing, by the face classification unit (124), the embeddings in the database (112) if a valid match is not found, said embeddings corresponding to a new person.
12. The face recognition method as claimed in claim 11, comprising:
detecting, by the face detection unit (118), a plurality of landmark points in the face; and
creating, by the face detection unit (118), one or more digital masks based on the landmark points.
13. The face recognition method as claimed in claim 11, comprising generating, by the face detection unit (118), one or more digital masks based on randomly identified regions of the face.
14. The face recognition method as claimed in claim 11, wherein one or more embeddings in the database (112) correspond to the same person.
15. The face recognition method as claimed in claim 14, wherein the face classification unit (124) matches the detected face to each embedding in the database (112) individually.
16. The face recognition method as claimed in claim 14, wherein the face classification unit (124) matches the detected face with a weighted average of one or more embeddings stored in the database (112).
17. The face recognition method as claimed in claim 11, wherein the embeddings are numerical vectors generated in a 512-dimensional space.
18. The face recognition method as claimed in claim 11, wherein the deep learning techniques include Arcface and/or a Deep Convolutional Neural Network with angular loss.
19. The face recognition method as claimed in claim 11, comprising:
detecting, by the face classification unit (124), a degree of similarity between the detected face and the matched embeddings; and
ranking, by the face classification unit (124), the matched embeddings based on the degree of similarity.
20. The face recognition method as claimed in claim 19, comprising displaying, by an I/O unit (110), information of one or more persons corresponding to the ranked embeddings.