Abstract: Described herein is a system 100 and a process 200 for detecting the position of a pre-designated part of user's body, such as an ear lobe 116a, 116b, an ear helix and / or edges 118a, 118b of the neck in an image 104 of a human face 106. An image containing a human face 106 is captured and is displayed or reproduced. The present subject matter determines and crops the part of the image 104 that encompasses the face 106, determines reference face-landmarking points 114 falling on the jawline of the face 106 and the rotation of the face 106 in the image 104, corrects the rotation of the face 106 in the image 104, determines the vertical size or height of the face 106, corrects the offset distance from the reference face-landmarking points 114 for the height of the face 106 so determined, and adds the offset distance to the reference face-landmarking points 114 for determining the position of the ear lobes 116a, 116b and / or edges 118a, 118b of the neck on the image 104 of the human face 106. Refer to Figure 1.
TECHNICAL FIELD
The present subject matter in general relates to detection of pre-designated parts of a user's body in the image of a human face, and particularly relates to a system and process for simulated application of an article, such as an ornamental article, on a pre-designated part of user's body, such as at least one ear or neck of the user.
BACKGROUND
Conventional methods of applying articles, such as ornamental articles, require a user to apply each ornamental article on a part of the user's body and thereafter view the body part in a reflective object such as a mirror. Thereafter, the user removes the ornamental article and may choose to apply another ornamental article on said body part.
This conventional process of applying and removing multiple ornamental articles has several disadvantages for the user. Firstly, the physical application and removal of the ornamental articles takes a significant amount of time. Secondly, time limitations as well as limitations on the availability of inventory at a given store location further impose a significant restriction on the number of items that the user can apply. Thirdly, physical application of ornamental articles requires users to apply multiple items of metallic substances to their skin, which can, in some cases, cause reactions such as allergies and can sometimes even be painful.
Therefore, there is a need for an alternative to the conventional methods of applying articles, such as ornamental articles, directly on a user's body part.
Techniques for detecting and recognizing the face of a human being in an image are known in the art. One such technique is face detection, which detects a face in an image containing several objects, including a face. Face landmarking is a procedure of determining and localizing certain characteristic points on a human face. Further, face recognition is a process of recognizing the characteristics of a human face and matching said characteristics with the characteristics of faces contained in a database of images of faces. Furthermore, techniques for ear recognition on the face of a human being in an image are also known in the art.
As described by Yang et al., in "Detecting Faces in Images: A Survey (2002)", there are four known methods for detection of faces in images, namely, knowledge-based methods, feature invariant approach, template matching methods and appearance-based methods. Similarly, Viola and Jones describe techniques for face detection in "Rapid Object Detection using a Boosted Cascade of Simple Features (2001)".
US9471829B2 describes rapid facial landmark detection techniques relying on the inner / outer corners of the eyes and the left / right corners of the mouth. Another technique for determining face landmarks is described in US7027622B2, which relates to a method for locating face landmarks in an image.
Similarly, US 9361510 B2 describes a method for facial landmark tracking and relies on facial landmarks such as eyes, nose, mouth and chin. Other similar techniques known in the art use face landmarking points and detect the position of only specific points on the image of a face, such as corners of eyes, corners of lips, and tip of the chin.
Ear and neck detection and recognition techniques are also known in the art. However, conventional ear detection and recognition techniques necessarily require the ears to be visible in the image of the face and are unable to detect and / or recognize them based on a frontal image of a human face. Ear detection and recognition techniques known in the art, such as those described in "An Study of Ear Detection and Its Application to Face Detection" by Santana, Lorenzo-Navarro et al. (2011) and "Ear Recognition" by Anika Pflug (2015), depend on the availability of a side view of the face in which the ear is clearly visible.
However, conventional face detection, recognition and landmarking techniques do not provide any teaching for detecting the position of earlobes, ear helix or edges of the neck, in an image of a human face. Similarly, ear recognition techniques known in the art do not disclose or teach any mechanism to detect the neck edges or position of ear lobes or ear helix based on a frontal image of the face of a user or an image of a face in which ear lobes or a portion of an ear and neck are not visible or are obscured by an object, such as hair.
Therefore, there is a well-felt need for a system and a process that overcome the aforementioned and other related challenges of existing face detection, recognition and landmarking techniques.
SUMMARY
It is an object of the present subject matter to superimpose an image of an article, such as an ornamental article, on a pre-designated part of user's body, such as an ear lobe or neck of the user.
It is another object of the present subject matter to simulate physical application of an article, such as an ornamental article, without requiring the user to physically apply the article to their body part, such as user's neck or ear.
It is yet another object of the present subject matter to provide a faster process of application and removal of ornamental articles.
It is yet another object of the present subject matter to significantly increase the number of ornamental articles which the user can apply using the simulated process.
It is yet another object of the present subject matter to obviate the need for physically applying an ornamental article to the pre-designated part, such as ears or neck of a user.
It is yet another object of the present subject matter to eliminate the possibility of any adverse reactions such as allergies that may occur while applying an ornamental article directly to the pre-designated part, such as ears or neck of a user.
It is yet another object of the present subject matter to enable a user to simulate the application of a large inventory of ornamental articles without visiting a store.
It is yet another object of the present subject matter to superimpose an image of an article in a moving position of an ear lobe, an ear helix or edges of the neck in a moving image of a human face.
It is yet another object of the present subject matter to detect the position of an ear lobe, an ear helix or edges of the neck even if the ear lobe, the ear helix or the neck is not visible in the image or is obscured by hair and / or other objects.
The subject matter relates to a system for detecting the position of an ear lobe and / or an ear helix and / or the neck in an image of a human face. The system includes an image capturing device for capturing an image containing a human face; a display device for displaying or reproducing the image captured by the image capturing device; and a processor unit that determines and crops part of the image that encompasses the face, determines reference face-landmarking points falling on the jawline of the face and rotation of the face in the image, corrects the rotation of the face in the image, determines the vertical size or height of the face,
corrects the offset distance from reference face-landmarking points for the height of the face so determined, and adds the offset distance to the reference face-landmarking points for determining the position of ear lobes and neck edges on the image of the human face.
In an embodiment of the present subject matter, the image capturing device comprises a camera, which may be integrated with a computing system, such as a tablet.
In another embodiment of the present subject matter, the display device comprises a screen, which may be integrated with the camera individually or may form part of the computing system, such as a tablet.
In yet another embodiment of the present subject matter, the processor unit comprises one or more processors, the one or more processors including one or more logic circuitries for processing instructions, general-purpose processors, special purpose processors, digital signal processors (DSPs), microprocessors, micro-controllers, controllers or the like.
In yet another embodiment of the present subject matter, the processor unit determines rotation of the face in the image on x-axis, y-axis and / or z-axis.
In yet another embodiment of the present subject matter, the processor unit corrects the rotation of the face in the image along the x-axis, y-axis and / or z-axis such that the rotation along each of the axes is 0 degrees.
In yet another embodiment of the present subject matter, the processor unit determines the vertical size or height of the face by determining the distance between face landmarking points falling on the uppermost layer of points falling on the face and face landmarking points falling on the lowermost layer of points falling on the face.
A process for detecting the position of an ear lobe, an ear helix and / or neck edges in an image of a human face is also described herein. The process includes the steps of determining and cropping part of the image that encompasses the face; determining reference face-landmarking points falling on the jawline of the face and rotation of the face in the image on the x-axis, y-axis and / or z-axis; correcting the rotation of the face in the image; determining the vertical size or height of the face; correcting the offset distance from reference face-landmarking points for the height of the face so determined; and adding the offset distance to reference face-landmarking points for determining the position of an ear lobe, an ear helix and the neck area on the image of the human face.
In an embodiment of the present subject matter, the step of correcting the rotation of the face in the image includes correction along the x-axis, y-axis and / or z-axis, such that the rotation along each of the axes is 0 degrees.
In another embodiment of the present subject matter, the step of determining the vertical size or height of the face is performed by determining the distance between face landmarking points falling on the uppermost layer of the points falling on the face and face landmarking points falling on the lowermost layer of the points falling on the face.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
The present invention, both as to its organization and manner of operation, together with further objects and advantages, may best be understood by reference to the following description, taken in connection with the accompanying drawings. These and other details of the
present invention will be described in connection with the accompanying drawings, which are furnished only by way of illustration and not in limitation of the invention, and in which drawings:
Figure 1 illustrates a block diagram of a system for detecting the position of an ear lobe, an ear helix and the neck area in an image of a human face in accordance with one embodiment of the present subject matter.
Figure 1a depicts a bounding box covering a face in an image in accordance with one embodiment of the present subject matter.
Figure 1b depicts face landmarking points on the face contained in the bounding box in an image in accordance with one embodiment of the present subject matter.
Figure 1c depicts face landmarking points falling on the jawline of the face in an image in accordance with one embodiment of the present subject matter.
Figure 1d depicts face landmarking points falling on the uppermost layer of points falling on the face and face landmarking points falling on the lowermost layer of points falling on the face in accordance with one embodiment of the present subject matter.
Figure 1e depicts positions of ear lobes of a human face on the image in accordance with one embodiment of the present subject matter.
Figure 1f depicts the position of neck edges of a human face on the image in accordance with one embodiment of the present subject matter.
Figure 2 illustrates a flow diagram of a process for detecting the position of an ear lobe, an ear helix and / or edges of the neck in an image of a human face in accordance with one embodiment of the present subject matter.
DETAILED DESCRIPTION
The following presents a detailed description of various embodiments of the present subject matter with reference to the accompanying drawings.
The embodiments of the present subject matter are described in detail with reference to the accompanying drawings. However, the present subject matter is not limited to these embodiments which are only provided to explain more clearly the present subject matter to a person skilled in the art of the present disclosure. In the accompanying drawings, like reference numerals are used to indicate like components.
The specification may refer to "an", "one", "different" or "some" embodiment(s) in several locations. This does not necessarily imply that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including" and/or "comprising" when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an
element is referred to as being "attached" or "connected" or "coupled" or "mounted" to another element, it can be directly attached or connected or coupled to the other element or intervening elements may be present. As used herein, the term "and/or" includes any and all combinations and arrangements of one or more of the associated listed items.
The figures depict a simplified structure only showing some elements and functional entities, all being logical units whose implementation may differ from what is shown.
The present subject matter teaches a system and a process for detection of position of a body part, particularly an ear lobe, an ear helix or the neck edges, based on an image of a human face in varying orientations.
The system and process according to the present subject matter do not rely on a side view of an image of a human face and are able to detect the position of a body part, such as an ear lobe, an ear helix and / or the neck, based on a frontal image of a human face. According to the present subject matter, the position of the ear lobes, ear helix and the neck can be detected even if one or more ears or the neck of the user are not clearly visible in the image of the face. It is also possible according to the present subject matter to detect the position of the ear lobes, ear helix and / or the neck where the image of the face is moving and, consequently, the positions of the ear lobes, ear helix and / or the neck are also moving.
In a preferred embodiment of the present subject matter, a process for superimposing an image of an article, particularly image of an ornamental article, on an ear lobe and / or the neck is described. The process according to the present subject matter detects the position of the ear lobe and the neck based on the image of the human face.
In an embodiment of the present subject matter, the user is able to simulate the physical application of an article, such as an ornamental article, without requiring to physically apply the article to their body part, particularly to ears and the neck. In an embodiment of the present subject matter, the user is able to accurately superimpose the article on an image of a pre-designated body part of the user, such as the accurate superimposition of an earring over the user's ear or a necklace over the user's neck area.
In an embodiment of the present subject matter, an image of one or all body parts of a user is captured using an image capturing device and is displayed on a screen. The position of a pre-designated body part of the user in the image is determined and an image of an article is superimposed on the image of the pre-designated body part of the user. In another embodiment, a moving image or a video of the user's body or part of the user's body is displayed on a screen. The position of a pre-designated body part of the user is determined, and an image of an article is superimposed on the pre-designated body part, with the image remaining superimposed on the pre-designated body part throughout the movement of the user's body.
In an embodiment, the present subject matter provides a system and a process for detecting the position of an ear lobe, an ear helix and / or the neck in an image of a human face. In another embodiment, the system and process of the present subject matter detect the position of an ear lobe, an ear helix and the neck area in a moving image or a live video which contains a human face. The process according to the present subject matter enables superimposition of an image of an article at the position of an ear lobe, an ear helix and the neck in an image of the human face. In another embodiment, the process enables superimposition of an
image of an article upon a moving position of an ear lobe, an ear helix and / or the neck in a moving image of a human face.
Figure 1 illustrates a block diagram of a system 100 for detecting the position of an ear lobe, an ear helix and / or the neck in an image of a human face in accordance with one embodiment of the present subject matter. The system 100 includes an image capturing device 102 for capturing an image 104 containing a human face 106. In a preferred embodiment, the image 104 captured by the image capturing device 102 is displayed or reproduced on a display device 108, such as a screen. The image 104 of the human face 106 captured by the image capturing device 102 may be a frontal view, a side view (also called profile view), an oblique right view or an oblique left view of the face in different embodiments. The system 100 further includes a processor unit 110 that determines a part of the image 104 that encompasses the face 106. The processor unit 110 then includes said part of the image encompassing the face 106 in a bounding box 112, as shown in Figure 1a.
Thereafter, the processor unit 110 crops the bounding box 112 from the rest of the image 104. The cropped image of the bounding box 112 is used as an input for identifying face landmarking points 114 on the image 104 contained in the bounding box 112, as shown in Figure 1b. In a preferred embodiment, the face landmarking points 114 on the image 104 are identified by using a face landmarking technique. In an embodiment, the number of landmarking points is 68 [1A-68A], which are determined on the basis of a Constrained Local Model. However, the number of landmarking points 114 may vary as per the requirement.
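The extraction of the jawline subset from the 68 landmarking points can be sketched as follows. This is an illustrative sketch only, assuming the widely used iBUG 300-W style ordering in which indices 0-16 trace the jawline; the landmark list itself would come from a face landmarking implementation (such as a Constrained Local Model), which is not shown here.

```python
def jawline_points(landmarks):
    """Return the 17 jawline points from a 68-point landmark list.

    Assumes the common 68-point annotation ordering in which
    indices 0-16 run along the jawline from ear to ear.
    """
    if len(landmarks) != 68:
        raise ValueError("expected 68 landmarks")
    return landmarks[0:17]

# Example with dummy (x, y) coordinates standing in for detector output:
dummy = [(float(i), float(i % 7)) for i in range(68)]
jaw = jawline_points(dummy)
print(len(jaw))  # 17
```

In practice, the reference face-landmarking points described above would be chosen from this jawline subset.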
The processor unit 110 then determines reference face-landmarking points 114 falling on the jawline of the face 106, as shown in Figure 1c, and the rotation of the face 106 in the image
on the x-axis, y-axis and / or z-axis. The rotation of the face 106 in the image 104 along the x-axis, y-axis and / or z-axis is corrected by the processor unit 110, such that the rotation along each of the axes is 0 degrees. Once the rotation of the face 106 in the image 104 is corrected, the processor unit 110 determines the vertical size or the height of the face 106 by determining the distance between the face landmarking points 114a falling on the uppermost layer of points falling on the face 106 and the face landmarking points 114b falling on the lowermost layer of points falling on the face 106, as shown in Figure 1d.
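The in-plane (z-axis) rotation correction and the height determination described above can be sketched in a minimal form. The sketch assumes the landmarks are available as (x, y) pairs and, as an illustrative assumption not stated above, measures the in-plane roll from the line joining the two eye centers; the function names are hypothetical.

```python
import math

def correct_roll(landmarks, left_eye, right_eye):
    """Rotate all landmarks about their centroid so that the in-plane
    (z-axis) roll, measured here from the eye line, becomes 0 degrees."""
    angle = math.atan2(right_eye[1] - left_eye[1], right_eye[0] - left_eye[0])
    cx = sum(x for x, _ in landmarks) / len(landmarks)
    cy = sum(y for _, y in landmarks) / len(landmarks)
    c, s = math.cos(-angle), math.sin(-angle)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in landmarks]

def face_height(landmarks):
    """Vertical size: distance between the uppermost and lowermost
    landmark rows on the face."""
    ys = [y for _, y in landmarks]
    return max(ys) - min(ys)

# Two points tilted 45 degrees; after correction the eye line is level.
tilted = [(0.0, 0.0), (10.0, 10.0)]
level = correct_roll(tilted, (0.0, 0.0), (10.0, 10.0))
print(face_height(tilted))  # 10.0
```

Out-of-plane (x-axis and y-axis) rotation correction requires a head-pose estimate and is not sketched here.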
The processor unit 110 then corrects the offset distance from the reference face-landmarking points 114 for the height of the face 106 so determined. The position of ear lobes 116a, 116b on the image 104 of the human face 106 is determined by adding the offset distance to the reference face-landmarking points 114, as shown in Figure 1e. Alternatively or simultaneously, the position of edges 118a, 118b of the user's neck on the image 104 of the human face 106 is determined by adding the offset distance to the reference face-landmarking points 114, as shown in Figure 1f.
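The offset correction and addition described above can be sketched as follows. The sketch assumes the population-average offset is expressed per unit of face height; the numeric values shown are placeholders for illustration, not measured averages.

```python
def earlobe_position(reference_point, avg_offset, face_height):
    """Estimate an ear-lobe position by scaling a population-average
    offset (expressed per unit of face height) by this face's height
    and adding it to a jawline reference point."""
    rx, ry = reference_point
    dx, dy = avg_offset
    return (rx + dx * face_height, ry + dy * face_height)

# Illustrative call: jawline reference at (120, 200), placeholder
# normalized offset, face height of 150 pixels.
pos = earlobe_position((120.0, 200.0), (0.10, -0.05), 150.0)
print(pos)  # (135.0, 192.5)
```

The same helper could be reused with a different placeholder offset for each neck edge.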
Where the image taken by the image capturing device 102 is a moving image or a live video, it is constituted of multiple individual images or image frames which are captured one after the other, i.e. consecutively, by the image capturing device 102. The processor unit 110 repeats the above steps for each of the individual images in the moving image or live video to determine the position of ear lobes or the neck in each of the individual images in the moving image or the live video.
In a preferred embodiment, the image capturing device 102 is a camera which may be integrated with a computing system, such as a tablet. The display device 108, such as a screen,
on which the image 104 captured by the image capturing device 102 is displayed, may be integrated with the camera individually or may form part of the computing system, such as a tablet.
The processor unit 110 of the present subject matter includes one or more processors. A processor in accordance with one embodiment of the present subject matter may include a logic circuitry for processing instructions. In other embodiments, the processor may be one or more of general-purpose processors, special purpose processors, digital signal processors (DSP), microprocessors, micro-controllers, controllers or the like.
Figure 2 illustrates a flow diagram of a process 200 for detecting the position of an ear lobe, an ear helix and / or the neck area in an image of a human face 106 in accordance with one embodiment of the present subject matter. The process 200 includes the step of capturing 202 an image 104 that includes a human face 106 by an image capturing device 102. The step of capturing 202 an image 104 includes capturing a frontal view, a side view (also called profile view), an oblique right view or an oblique left view of the face 106. The process 200 further includes the step of displaying or reproducing 204 the image 104 captured in step 202 by a display device 108. In step 206, part of the image 104 which includes the face 106 is determined. In step 208, the part of the image 104 encompassing the face 106 is included in a bounding box 112. In step 210, the bounding box 112 is cropped from the rest of the image 104. In step 212, face landmarking points 114 are identified on the image 104 contained in the bounding box 112 by using the image cropped in step 210. In a preferred embodiment, the face landmarking points 114 on the image 104 are identified by using a face landmarking technique. The number of landmarking points in an embodiment of the present subject matter is 68 [1A-68A]. The number
of landmarking points is determined on the basis of a Constrained Local Model. In step 214, the reference face-landmarking points falling on the jawline of the face 106 are determined. In step 216, the rotation of the face 106 in the image on the x-axis, y-axis and / or z-axis is determined. In step 218, the rotation of the face 106 in the image along the x-axis, y-axis and / or z-axis is corrected, such that the rotation along each of the axes is 0 degrees. In step 220, the vertical size or the height of the face 106 is determined by determining the distance between the face landmarking points 114a falling on the uppermost layer of the points falling on the face 106 and the face landmarking points 114b falling on the lowermost layer of the points falling on the face 106. In step 222, the offset distance from the reference face-landmarking points is corrected for the height of the face 106 so determined. In step 224, the position of one or more ear lobes 116a, 116b and/or edges 118a, 118b of the user's neck on the image 104 of the human face 106 is determined by adding the offset distance to the reference face-landmarking points.
The face detection technique could optionally be chosen from the techniques known in the art, such as Haar Feature Based Cascade Classifiers. Further, the rotation of human face 106 in an image 104 on x-axis, y-axis and / or z-axis can optionally be determined by processes known in the art.
In an embodiment, the determination of an average offset distance in a subset of the human population is carried out through sampling. The distance along the x-axis and the y-axis of an ear lobe from reference face-landmarking points in an image is manually determined across a large number of images of human faces and then averaged out. The average offset distance may optionally be determined through machine learning techniques, wherein the reference point coordinates, rotation of the image around the x-axis, y-axis and z-axis, and face size are
input variables and the manually labelled coordinates of the ear lobe in the image are the output for the machine learning system.
As explained above, this average offset distance is then corrected for the size of a given image 104 of a face 106 by taking a product of the offset distance with the size of the face 106, such that the average offset distance can be applied to faces of varying sizes and shapes. The above-described process may likewise be used for determining the offset distance of an ear helix from a reference face-landmarking point in the image of a human face.
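The sampling step described above (manually determining the ear-lobe offset across many labelled images and averaging) can be sketched as follows. The sample values and the normalization by face height are illustrative assumptions consistent with the scaling described above, not data from the actual study.

```python
def average_normalized_offset(samples):
    """Each sample pairs a labelled ear-lobe point, a jawline reference
    point, and the face height in that image. The offset is normalized
    by face height before averaging, so the result can later be rescaled
    to faces of other sizes."""
    n = len(samples)
    dx = sum((ex - rx) / h for (ex, ey), (rx, ry), h in samples) / n
    dy = sum((ey - ry) / h for (ex, ey), (rx, ry), h in samples) / n
    return (dx, dy)

# Two illustrative labelled samples: (ear-lobe point, reference point, face height)
samples = [
    ((135.0, 192.5), (120.0, 200.0), 150.0),
    ((270.0, 385.0), (240.0, 400.0), 300.0),
]
off = average_normalized_offset(samples)
print(off)  # (0.1, -0.05)
```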
The rotation of a human face in an image 104 about each of the axes is determined using techniques known in the prior art. Further, the step 218 for correction of rotation of the human face in an image about each of the axes involves determination of constant values which can be applied to the rotations along each of the axes. In an embodiment, this step 218 involves manual labelling of center position of each of the ear lobes and edges of the neck in a large set of images of human faces.
The error in the determination of the position of the neck and ear lobes according to the present process is determined by comparing the manually labelled center positions of the neck edges and ear lobes with the positions determined by adding the average offset distance to the reference face-landmarking points. Thereafter, the values of the constants for each of the axes are determined across a large number of images of human faces such that the error is minimized. The constants, once obtained, can then be applied to an image of a human face, even if the same is rotated along multiple axes, so as to correct the rotation and determine the position of the ear lobes and edges of the neck on the human face.
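The constant-fitting step described above can be sketched as a simple search that minimizes the mean error against the manually labelled positions. The grid-search form and the toy one-dimensional data below are illustrative assumptions; the actual fitting procedure is not specified in detail above.

```python
def best_constant(candidates, samples, predict):
    """Choose the correction constant that minimizes the mean absolute
    error between the predicted position and the manually labelled
    position over a set of labelled images."""
    def mean_error(k):
        return sum(abs(predict(x, k) - y) for x, y in samples) / len(samples)
    return min(candidates, key=mean_error)

# Toy example: the "correction" is a scale factor, and the labelled
# data was generated with a true factor of 2.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
k = best_constant([0.5, 1.0, 1.5, 2.0, 2.5], samples, lambda x, k: x * k)
print(k)  # 2.0
```

In the described process, one such constant would be fitted per rotation axis.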
One of the advantages of the process according to the present subject matter is that the process described herein does not rely on visibility of ear lobes, ear helix or the neck area in an image of a human face. The position of ear lobes, ear helix or the neck can be determined even if the ear lobes and the neck are not visible at all in the image or are obscured by hair and/ or other objects. The present subject matter thus obviates the need of obtaining images with a visible ear lobe or neck, thereby greatly increasing the speed and efficiency with which the position of ear lobes or the neck can be determined.
The present invention also provides superimposition of an image, either in 2D or in 3D, of an article, preferably an ornamental article, on the image of a human face at the position of a specific part of the face, in particular one or more ear lobes or the neck, the said position being determined by the process described above. The image of the face along with the image of the article superimposed at the position of the ear lobes or the neck is then displayed on a display device, such as a screen. The user whose image is captured by the image capturing device 102 can view the image of her / his face on the display device 108 with an image of an article superimposed on said image at said position of the concerned part of the face 106.
The present subject matter has several advantages over the conventional processes of application and removal of an ornamental article on a user's body part. For instance, the process according to the present subject matter is much faster. Further, the number of ornamental articles which the user can apply using the simulated process according to the present subject matter is significantly higher since the present process obviates the need for physical space to maintain inventory and is less time consuming. The present subject matter also obviates the need for physically applying the ornamental article to the skin of a user, thereby eliminating the
possibility of any adverse reactions such as allergies. Moreover, the process according to the present subject matter enables the user to simulate the application of a large inventory of articles, particularly ornamental articles, without visiting a store.
While the preferred embodiments of the present invention have been described hereinabove, it should be understood that various changes, adaptations, and modifications may be made therein without departing from the spirit of the invention and the scope of the appended claims. It will be obvious to a person skilled in the art that the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive.
We claim:
1. A system 100 for detecting the position of a pre-designated part of user's body, such as an
ear lobe 116a, 116b, an ear helix and / or edges 118a, 118b of the neck in an image 104 of a
human face 106, the system 100 comprises:
an image capturing device 102 for capturing an image 104 containing a human face 106; a display device 108 for displaying or reproducing the image 104 captured by the image capturing device 102; and a processor unit 110 that
determines and crops part of the image 104 that encompasses the face 106,
determines reference face-landmarking points 114 falling on the jawline of the
face 106 and rotation of the face 106 in the image 104,
corrects the rotation of the face 106 in the image 104,
determines the vertical size or height of the face 106,
corrects the offset distance from reference face-landmarking points 114 for the
height of the face 106 so determined, and
adds the offset distance to the reference face-landmarking points 114 for
determining the position of the pre-designated part of user's body on the image
104 of the human face 106.
2. The system 100 as claimed in claim 1, wherein the image capturing device 102 comprises a
camera, which may be integrated with a computing system, such as a tablet.
3. The system 100 as claimed in claims 1 or 2, wherein the display device 108 comprises a screen, which may be integrated with the camera individually or may form part of the computing system, such as a tablet.
4. The system 100 as claimed in any one of the preceding claims, wherein the processor unit 110 comprises one or more processors, which one or more processors include one or more logic circuitries for processing instructions, general-purpose processors, special-purpose processors, digital signal processors (DSPs), microprocessors, micro-controllers, controllers, or the like.
5. The system 100 as claimed in any one of the preceding claims, wherein the processor unit 110 determines rotation of the face 106 in the image 104 on the x-axis, y-axis and / or z-axis.
6. The system 100 as claimed in any one of preceding claims, wherein the processor unit 110 corrects the rotation of the face 106 in the image 104 along the x-axis, y-axis and / or z-axis such that the rotation along each of the axes is 0 degrees.
7. The system 100 as claimed in any one of preceding claims, wherein the processor unit 110 determines the vertical size or height of the face 106 by determining the distance between face landmarking points 114a falling on the uppermost layer of points falling on the face 106 and face landmarking points 114b falling on the lowermost layer of points falling on the face 106.
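As a non-limiting illustration of the rotation correction of claim 6 and the height determination of claim 7, the following sketch assumes the face-landmarking points 114 are available as an N×2 NumPy array of pixel coordinates, and that the in-plane (z-axis) roll is estimated from two hypothetical eye-centre points; how the landmarks themselves are detected is outside this sketch and is not prescribed by the claims.

```python
import numpy as np

def correct_roll(landmarks, left_eye, right_eye):
    """Rotate the 2D landmarks 114 about their centroid so that the
    in-plane (z-axis) rotation of the face is 0 degrees (claim 6)."""
    # Estimate the current roll from the line joining the eye centres
    # (a hypothetical choice of reference; the claims do not fix one).
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.arctan2(dy, dx)              # current roll, radians
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])         # rotation undoing the roll
    centre = landmarks.mean(axis=0)
    return (landmarks - centre) @ R.T + centre

def face_height(landmarks):
    """Vertical size of the face 106: distance between the uppermost
    and lowermost landmark rows (claim 7)."""
    return landmarks[:, 1].max() - landmarks[:, 1].min()
```

After `correct_roll`, `face_height` measures a purely vertical extent, which is what makes the subsequent offset correction well defined.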
8. A process 200 for detecting the position of a pre-designated part of a user's body, such as an ear lobe 116a, 116b, an ear helix and / or edges 118a, 118b of the neck, in an image 104 of a human face 106, the process 200 comprising the steps of:
determining and cropping part of the image 104 that encompasses the face 106;
determining reference face-landmarking points 114 falling on the jawline of the face 106
and rotation of the face 106 in the image 104 on x-axis, y-axis and / or z-axis;
correcting the rotation of the face 106 in the image 104;
determining the vertical size or height of the face 106;
correcting the offset distance from reference face-landmarking points 114 for the height
of the face 106 so determined; and
adding the offset distance to reference face-landmarking points 114 for determining the
position of the pre-designated part of user's body on the image 104 of the human face
106.
9. The process as claimed in claim 8, wherein the step of correcting the rotation of the face 106 in the image 104 includes correction along the x-axis, y-axis and / or z-axis, such that the rotation along each of the axes is 0 degrees.
10. The process as claimed in claim 8 or 9, wherein the step of determining the vertical size or height of the face 106 is performed by determining the distance between face landmarking points 114a falling on the uppermost layer of the points falling on the face 106 and face landmarking points 114b falling on the lowermost layer of the points falling on the face 106.
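The final two steps of process 200, correcting the offset distance for the determined face height and adding it to the reference jawline landmarks, can be sketched as below. The nominal offset vector and the reference face height it is scaled against are hypothetical values chosen for illustration, not taken from the specification; the jawline points are assumed to come from a prior landmarking step.

```python
import numpy as np

# Hypothetical calibration values: the face height (in pixels) at which
# the nominal offset was defined, and the nominal offset from a jawline
# landmark 114 to e.g. an ear lobe 116a. Neither value is from the patent.
NOMINAL_FACE_HEIGHT = 100.0
NOMINAL_OFFSET = np.array([0.0, 12.0])

def locate_pre_designated_points(jawline_points, face_height):
    """Scale the nominal offset by the measured height of the face 106,
    then add it to the reference jawline landmarks 114 to estimate the
    positions of the pre-designated body parts on the image 104."""
    scale = face_height / NOMINAL_FACE_HEIGHT   # offset correction step
    offset = NOMINAL_OFFSET * scale
    return np.asarray(jawline_points, dtype=float) + offset
```

A face twice the nominal height thus receives twice the nominal offset, which is the point of correcting the offset before adding it rather than using a fixed pixel distance.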
| # | Name | Date |
|---|---|---|
| 1 | 201811002786-PROVISIONAL SPECIFICATION [24-01-2018(online)]_1.pdf | 2018-01-24 |
| 2 | 201811002786-RELEVANT DOCUMENTS [05-10-2023(online)].pdf | 2023-10-05 |
| 3 | 201811002786-PROVISIONAL SPECIFICATION [24-01-2018(online)].pdf | 2018-01-24 |
| 4 | 201811002786-RELEVANT DOCUMENTS [30-09-2022(online)].pdf | 2022-09-30 |
| 5 | 201811002786-US(14)-HearingNotice-(HearingDate-18-08-2020).pdf | 2021-10-18 |
| 6 | 201811002786-FORM 1 [24-01-2018(online)].pdf | 2018-01-24 |
| 7 | 201811002786-Proof of Right (MANDATORY) [22-01-2019(online)].pdf | 2019-01-22 |
| 8 | 201811002786-IntimationOfGrant15-12-2020.pdf | 2020-12-15 |
| 9 | 201811002786-PatentCertificate15-12-2020.pdf | 2020-12-15 |
| 10 | 201811002786-DRAWING [22-01-2019(online)].pdf | 2019-01-22 |
| 11 | 201811002786-CORRESPONDENCE-OTHERS [22-01-2019(online)].pdf | 2019-01-22 |
| 12 | 201811002786-Annexure [02-09-2020(online)].pdf | 2020-09-02 |
| 13 | 201811002786-Written submissions and relevant documents [02-09-2020(online)].pdf | 2020-09-02 |
| 14 | 201811002786-COMPLETE SPECIFICATION [22-01-2019(online)].pdf | 2019-01-22 |
| 15 | 201811002786-Proof of Right (MANDATORY) [12-02-2019(online)].pdf | 2019-02-12 |
| 16 | 201811002786-CLAIMS [11-05-2020(online)].pdf | 2020-05-11 |
| 17 | 201811002786-COMPLETE SPECIFICATION [11-05-2020(online)].pdf | 2020-05-11 |
| 18 | 201811002786-FORM-26 [12-02-2019(online)].pdf | 2019-02-12 |
| 19 | 201811002786-FER_SER_REPLY [11-05-2020(online)].pdf | 2020-05-11 |
| 20 | 201811002786-Power of Attorney-140219.pdf | 2019-02-15 |
| 21 | 201811002786-FORM 3 [11-05-2020(online)].pdf | 2020-05-11 |
| 22 | 201811002786-OTHERS-140219.pdf | 2019-02-15 |
| 23 | 201811002786-Correspondence-140219.pdf | 2019-02-15 |
| 24 | 201811002786-OTHERS [11-05-2020(online)].pdf | 2020-05-11 |
| 25 | 201811002786-PETITION UNDER RULE 138 [11-05-2020(online)].pdf | 2020-05-11 |
| 26 | 201811002786-STARTUP [13-12-2019(online)].pdf | 2019-12-13 |
| 27 | 201811002786-FER.pdf | 2020-02-05 |
| 28 | 201811002786-RELEVANT DOCUMENTS [13-12-2019(online)].pdf | 2019-12-13 |
| 29 | 201811002786-FORM 13 [13-12-2019(online)].pdf | 2019-12-13 |
| 30 | 201811002786-FORM28 [13-12-2019(online)].pdf | 2019-12-13 |
| 31 | 201811002786-FORM 18A [13-12-2019(online)].pdf | 2019-12-13 |
| 32 | 201811002786_10-01-2020.pdf | |