Systems And Methods For Performing Hand Segmentation
Abstract:
Hand segmentation on wearable devices is a challenging computer vision problem with a complex background because of varying illumination conditions, the limited computational capacity of the device(s), the different skin tones of users of varied races, and the presence of skin-colored backgrounds. Embodiments of the present disclosure provide systems and methods for performing hand segmentation in real time by pre-processing an input image to improve contrast and remove noise/artifacts. A Multi Orientation Matched Filter (MOMF) is implemented and applied on the pre-processed image by rotating the MOMF at various orientations to form an edge image which comprises strong edges and weak edges. Weak edges are further removed using a morphological operation. The edge image is then added to the input image (or pre-processed image) to separate the different texture regions in the image. The largest skin-color blob is then extracted, which is considered to be the correctly segmented hand.
Claims:
1. A processor implemented method, comprising:
obtaining an input image comprising at least a hand and a background (302);
pre-processing the input image to obtain a pre-processed image comprising a pre-processed hand and pre-processed background (304);
applying a Multi Orientation Matched Filter (MOMF) on the pre-processed image to obtain a plurality of filter responses (306);
merging the plurality of filter responses to obtain a merged filter response that comprises a plurality of strong edges and one or more weak edges (308);
filtering the one or more weak edges formed as one or more isolated blobs from the merged filter response to obtain a resultant edge map (310);
adding the resultant edge map to the pre-processed image to obtain a resultant image, wherein the resultant image comprises texture regions that are isolated from each other (312);
detecting, using one or more chroma channels, a plurality of skin pixels from the resultant image (314); and
identifying a largest blob of skin pixels from the resultant image, wherein the largest blob of skin pixels is a segmented hand (316).
2. The processor implemented method of claim 1, wherein the step of pre-processing the image comprises down-sampling the image to obtain a down-sampled image and applying a Contrast Limited Local Histogram Equalization (CLAHE) technique on the down-sampled image to obtain the pre-processed image.
3. The processor implemented method of claim 1, wherein a plurality of weak edges are filtered during pre-processing of the input image by applying a Gaussian smoothing technique on the input image.
4. The processor implemented method of claim 1, wherein the Multi Orientation Matched Filter (MOMF) is applied on the pre-processed image by rotating the MOMF at one or more predefined orientations for detecting the one or more strong edges.
5. The processor implemented method of claim 1, wherein the one or more weak edges formed as one or more isolated blobs are filtered by applying a Morphological erosion technique on the merged filter response.
6. A system (100) comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
obtain an input image comprising at least a hand and a background;
pre-process the input image to obtain a pre-processed image comprising a pre-processed hand and pre-processed background;
apply a Multi Orientation Matched Filter (MOMF) on the pre-processed image to obtain a plurality of filter responses;
merge the plurality of filter responses to obtain a merged filter response that comprises a plurality of strong edges and one or more weak edges;
filter the one or more weak edges formed as one or more isolated blobs from the merged filter response to obtain a resultant edge map;
add the resultant edge map to the pre-processed image to obtain a resultant image, wherein the resultant image comprises texture regions that are isolated from each other;
detect, using one or more chroma channels, a plurality of skin pixels from the resultant image; and
identify a largest blob of skin pixels from the resultant image, wherein the largest blob of skin pixels is a segmented hand.
7. The system of claim 6, wherein the image is pre-processed by:
down-sampling the image to obtain a down-sampled image; and
applying a Contrast Limited Local Histogram Equalization (CLAHE) technique on the down-sampled image to obtain the pre-processed image.
8. The system of claim 6, wherein a plurality of weak edges are filtered during pre-processing of the input image by applying a Gaussian smoothing technique on the input image.
9. The system of claim 6, wherein the Multi Orientation Matched Filter (MOMF) is applied on the pre-processed image by rotating the MOMF at one or more predefined orientations for detecting the one or more strong edges.
10. The system of claim 6, wherein the one or more weak edges formed as one or more isolated blobs are filtered by applying a Morphological erosion technique on the merged filter response.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYSTEMS AND METHODS FOR PERFORMING HAND SEGMENTATION
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to image processing techniques, and, more particularly, to systems and methods for performing real time hand segmentation on frugal head mounted devices for gestural interfaces.
BACKGROUND
With the resurgence of Head Mounted Displays (HMDs), in-air gestures form a natural and intuitive mode of communication. HMDs such as Microsoft® Hololens and Daqri smart-glasses have on-board processors with additional sensors, making the devices expensive. Augmented Reality (AR) devices, for example Meta Glass and Microsoft Hololens, exemplify the use of hand gestures as a popular means of interaction between computers, wearables, robots and humans. Advances in smartphone technology have introduced several low-cost, video-see-through devices such as Google Cardboard and Wearality that provide immersive experiences with a Virtual Reality (VR) enabled smartphone. Using stereo-rendering of the camera feed and overlaying the related information on the smartphone screen, these devices can be extended to AR and human-computer interaction (HCI).
With the advent of the above mentioned gesture recognition devices, user interaction is evolving towards gestures, speech and eye gaze from primitive methods of interaction such as the touch screen, mouse and keyboard. The frugal Google Cardboard has limited interaction methods, namely magnetic and conductive levers, which are often subject to wear and tear. Moreover, these lever based interfaces are not intuitive to interact with. It is also noted that speech based commands fail in noisy environments such as oil rigs, the construction industry and the automotive industry, as well as due to varying accents. Instinctive and intuitive human to machine communication thus remains a challenging task.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor implemented method for performing hand segmentation and identifying a segmented hand. The method comprises obtaining an input image depicting at least a hand and background; and pre-processing the input image to obtain a pre-processed image comprising a pre-processed hand and pre-processed background. In an embodiment, the step of pre-processing the image comprises down-sampling the image to obtain a down-sampled image and applying a Contrast Limited Local Histogram Equalization (CLAHE) technique on the down-sampled image to obtain the pre-processed image. A plurality of weak edges are filtered during pre-processing of the input image by applying a Gaussian smoothing technique on the input image. Upon obtaining the pre-processed image, a Multi Orientation Matched Filter (MOMF) is applied on the pre-processed image to obtain a plurality of filter responses. The method further comprises merging the plurality of filter responses to obtain a merged filter response that comprises a plurality of strong edges and one or more weak edges; filtering the one or more weak edges formed as one or more isolated blobs from the merged filter response to obtain a resultant edge map; adding the resultant edge map to the input image (or pre-processed image) to obtain a resultant image, wherein the resultant image comprises different texture regions that are isolated from each other; detecting, using one or more chroma channels, a plurality of skin pixels from the resultant image; and identifying a largest blob of skin pixels from the resultant image, wherein the largest blob of skin pixels is a segmented hand.
In an embodiment, the Multi Orientation Matched Filter (MOMF) is applied on the pre-processed image by rotating the MOMF at one or more predefined orientations for detecting the one or more strong edges. In an embodiment, the one or more weak edges formed as one or more isolated blobs are filtered by applying a Morphological erosion technique on the merged filter response.
In another aspect, there is provided a system for performing hand segmentation and identifying a correct segmented hand. The system comprises a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: obtain an input image depicting at least a hand and background; pre-process the input image to obtain a pre-processed image comprising a pre-processed hand and pre-processed background. In an embodiment, the input image is pre-processed by down-sampling the image to obtain a down-sampled image; and applying a Contrast Limited Local Histogram Equalization (CLAHE) technique on the down-sampled image to obtain the pre-processed image. In an embodiment, during pre-processing of the input image, a plurality of weak edges are filtered by applying a Gaussian smoothing technique on the input image. The hardware processors are further configured by the instructions to apply a Multi Orientation Matched Filter (MOMF) on the pre-processed image to obtain a plurality of filter responses; merge the plurality of filter responses to obtain a merged filter response that comprises a plurality of strong edges and one or more weak edges; and filter the one or more weak edges formed as one or more isolated blobs from the merged filter response to obtain a resultant edge map. In an embodiment, the Multi Orientation Matched Filter (MOMF) is applied on the pre-processed image by rotating the MOMF at one or more predefined orientations for detecting the one or more strong edges. In an embodiment, the plurality of weak edges formed as one or more isolated blobs are filtered by applying a Morphological erosion technique on the merged filter response.
The resultant edge map is added to the input image (or pre-processed image) to obtain a resultant image, wherein the resultant image comprises different texture regions that are isolated from each other, and a plurality of skin pixels are detected from the resultant image using one or more chroma channels. A largest blob of skin pixels is identified from the resultant image, which is the segmented hand.
In yet another aspect, there are provided one or more non-transitory machine readable information storage mediums comprising one or more instructions which, when executed by one or more hardware processors, cause a method for performing hand segmentation and identifying a correct segmented hand to be performed. The instructions cause obtaining an input image depicting at least a hand and background; and pre-processing the input image to obtain a pre-processed image comprising a pre-processed hand and pre-processed background. In an embodiment, the step of pre-processing the image comprises down-sampling the image to obtain a down-sampled image and applying a Contrast Limited Local Histogram Equalization (CLAHE) technique on the down-sampled image to obtain the pre-processed image. A plurality of weak edges are filtered during pre-processing of the input image by applying a Gaussian smoothing technique on the input image. Upon obtaining the pre-processed image, a Multi Orientation Matched Filter (MOMF) is applied on the pre-processed image to obtain a plurality of filter responses. The instructions further cause merging the plurality of filter responses to obtain a merged filter response that comprises a plurality of strong edges and one or more weak edges; filtering the one or more weak edges formed as one or more isolated blobs from the merged filter response to obtain a resultant edge map; adding the resultant edge map to the input image (or pre-processed image) to obtain a resultant image, wherein the resultant image comprises different texture regions that are isolated from each other; detecting, using one or more chroma channels, a plurality of skin pixels from the resultant image; and identifying a largest blob of skin pixels from the resultant image, wherein the largest blob of skin pixels is a segmented hand.
In an embodiment, the Multi Orientation Matched Filter (MOMF) is applied on the pre-processed image by rotating the MOMF at one or more predefined orientations for detecting the one or more strong edges. In an embodiment, the one or more weak edges formed as one or more isolated blobs are filtered by applying a Morphological erosion technique on the merged filter response.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates an exemplary block diagram of a system for performing hand segmentation and identifying a correct segmented hand in accordance with an embodiment of the present disclosure.
FIG. 2 illustrates an exemplary block diagram of the hand segmentation system of FIG. 1 in accordance with an example embodiment of the present disclosure.
FIG. 3 illustrates an exemplary flow diagram of a method for performing hand segmentation and identifying a segmented hand using components of the hand segmentation system of FIGS. 1-2 in accordance with an embodiment of the present disclosure.
FIG. 4A depicts a hand with a plurality of strong edges and a plurality of weak edges in accordance with an embodiment of the present disclosure.
FIG. 4B depicts a graphical representation illustrating a profile of the plurality of strong edges and the plurality of weak edges in accordance with an embodiment of the present disclosure.
FIG. 5A depicts a Multi Orientation Matched Filter orientation at 0 degree in accordance with an example embodiment of the present disclosure.
FIG. 5B depicts the Multi Orientation Matched Filter orientation at 30 degree in accordance with an example embodiment of the present disclosure.
FIG. 5C depicts the Multi Orientation Matched Filter orientation at 90 degree in accordance with an example embodiment of the present disclosure.
FIGS. 6A and 6B depict a Multi Orientation Matched Filter (MOMF) response on a skin-like background in accordance with an example embodiment of the present disclosure.
FIG. 6C depicts a correct segmented hand corresponding to the hand comprised in an input image as depicted in FIG. 6A in accordance with an embodiment of the present disclosure.
FIG. 7 depicts results of hand segmentation of the present disclosure in comparison with the YC_bC_r based segmentation proposed by conventional techniques in accordance with an example embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
Currently existing methods use deep learning based techniques to perform hand segmentation, which require additional resources, for example a server. A few other methods perform hand segmentation using depth-sensor and RGB based techniques, which are not accurate when a skin-like background is present.
Hand segmentation is a necessary step for interpreting in-air gestures. Use of these frugal headsets with a smartphone is encouraged for AR applications because of their economic viability, portability and scalability to the mass market.
Many applications proposed in research incorporate additional wearable sensors and may require specific training for users. It is also noted that there have been very few attempts at on-board hand segmentation on Google Cardboard with a smartphone. The possible applications are: (a) gesture recognition in HMDs, (b) video games in AR/VR mode, and (c) hand pose detection. However, prior research and works fail to accurately perform hand segmentation, which may be due to inaccurate capture of gestures and external factors, for instance, the nature of the environment.
Hand in-air gestures form a dominant mode of input for HCI, and it has been shown that they are usually preferred over touch based systems. One of the most widely accepted examples of hand gesture recognition is the data glove. Advances in hand segmentation have replaced data gloves with bare hands owing to the naturalness of the latter. Hand segmentation on wearable devices is a challenging computer vision problem with a complex background because of the following reasons: (a) varying illumination conditions, (b) computational capacity of the device, (c) different skin tones of users of varied races, and (d) presence of skin color backgrounds. A few researchers have used a camera and IR LEDs to detect the hand, while others have proposed using body-worn cameras with diffused IR illumination, and depth information, for hand segmentation. The approaches discussed above require extra hardware, body-worn cameras, user instrumentation or external tracking, and often off-board processing as well. There are a few other works that utilize random forest like classifiers and Gaussian mixture models for hand segmentation. However, these approaches take a lot of time to process each frame and pose serious barriers to user adoption. Embodiments of the present disclosure design and implement a filter for efficient hand segmentation in the wild and demonstrate its use in combination with histogram equalization and Gaussian blurring. The present disclosure circumvents the shortcomings of hand segmentation discussed above and also takes care of First-Person View (FPV) constraints caused by wearable devices.
Referring now to the drawings, and more particularly to FIGS. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates an exemplary block diagram of a system 100 for performing hand segmentation and identifying a correct segmented hand in accordance with an embodiment of the present disclosure. The system 100 may also be referred to as 'a hand segmentation system' or 'a segmentation system', and these terms are used interchangeably hereinafter. In an embodiment, the system 100 includes one or more processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more processors 104. The one or more processors 104 may be one or more software processing modules and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment a database 108 can be stored in the memory 102, wherein the database 108 may comprise, but is not limited to, information on the hand and background, down-scaling output, filtered output(s), correct segmented hand output, and the like; more specifically, information pertaining to the input image comprising the hand, skin-like background, and the like. In an embodiment, the memory 102 may store one or more techniques (e.g., filtering technique(s), one or more filters) which, when executed by the one or more hardware processors 104, perform the methodology described herein. The memory 102 may further comprise information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure.
FIG. 2, with reference to FIG. 1, illustrates an exemplary block diagram of the hand segmentation system 100 of FIG. 1 in accordance with an example embodiment of the present disclosure. The hand segmentation system 100 includes a pre-processing block 202, a Multi Orientation Matched Filtering (MOMF) block 204, and a skin segmentation block 206.
FIG. 3, with reference to FIGS. 1-2, illustrates an exemplary flow diagram of a method for performing hand segmentation and identifying a correct segmented hand using the system 100 and components of the hand segmentation system of FIGS. 1-2 in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to the components of the system 100 as depicted in FIG. 1, and the block diagram as depicted in FIG. 2. In an embodiment of the present disclosure, at step 302, the one or more hardware processors 104 obtain an input image depicting at least a hand and a background. In an embodiment, the background may comprise a skin-like background. In an embodiment of the present disclosure, at step 304, the one or more hardware processors 104 pre-process the input image to obtain a pre-processed image comprising a pre-processed hand and pre-processed background. In an embodiment, the input image is pre-processed by down-sampling it first to obtain a down-sampled image, and then a Contrast Limited Local Histogram Equalization (CLAHE) technique is applied on the down-sampled image to obtain the pre-processed image. The purpose of pre-processing is to improve contrast and remove noise. For instance, in the present disclosure the input image (or input image frames) was obtained from an image capturing device (e.g., a smartphone rear camera) and was then down-scaled or down-sampled to a resolution of 640×480 in order to reduce processing time without compromising much on image quality.
Subsequently, contrast limited local histogram equalization (CLAHE) technique was applied to the down-sampled image for improving the global contrast and mitigating illumination artifacts. In an embodiment of the present disclosure, the input image was pre-processed in the pre-processing block 202 as depicted in FIG. 2.
FIG. 4A, with reference to FIGS. 1 through 3, depicts a hand with a plurality of strong edges and a plurality of weak edges in accordance with an embodiment of the present disclosure. FIG. 4B, with reference to FIGS. 1 through 4A, depicts a graphical representation illustrating a profile of the plurality of strong edges and the plurality of weak edges in accordance with an embodiment of the present disclosure. The present disclosure considers two kinds of edges in hand images, viz., weak and strong edges. Weak edges are generated by surface color discontinuity and thus consist of uniform texture. In contrast, strong edges are generated by depth discontinuity and hence contain significant texture and color variations, as depicted in FIG. 4A. To mitigate the weak edges, a Gaussian smoothing technique is applied to the histogram equalized image. In other words, a plurality of weak edges are filtered during pre-processing of the input image by applying a Gaussian smoothing technique on the input image. It is observed that the smoothing can slightly impact the strong edges, but most of the intensity variations are preserved.
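The weak-edge suppression by Gaussian smoothing can be sketched in NumPy as below; the kernel sigma and truncation radius are illustrative assumptions:

```python
import numpy as np

def gaussian_smooth(img, sigma=1.5):
    """Separable Gaussian smoothing of a grayscale image.

    Attenuates weak (low-contrast) edges while largely preserving the
    intensity variation of strong edges.
    """
    radius = int(3 * sigma)  # truncate the kernel at 3 sigma (assumption)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # The 2D Gaussian is separable: convolve rows, then columns
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'),
                              1, img.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'),
                               0, out)
```

A sharp step in intensity survives smoothing with a reduced per-pixel gradient, which is exactly the attenuation that suppresses weak edges.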
In an embodiment of the present disclosure, at step 306, the one or more hardware processors 104 apply a Multi Orientation Matched Filter (MOMF) on the pre-processed image to obtain a plurality of filter responses, and merge the plurality of filter responses to obtain a merged filter response at step 308. In an embodiment, the merged filter response comprises a plurality of strong edges and one or more weak edges. More specifically, the Multi Orientation Matched Filter (MOMF) is applied on the pre-processed image by rotating the MOMF at one or more predefined orientations for detecting the one or more strong edges. The MOMF orientations can be visualized in FIGS. 5A through 5C. The design and implementation of the MOMF as executed by the present disclosure is described below:
Multi Orientation Matched Filter (MOMF):
As discussed above, color based hand segmentation often fails to correctly distinguish the hand from a background containing skin-like pixel intensities. Hence, to detect the strong edges, the present disclosure implements and executes the MOMF for correct hand segmentation. An example depicting the behavior of weak and strong edges is illustrated in FIG. 4B as mentioned above. It can be seen from FIG. 4B that the pattern formed by the strong edges closely resembles a sigmoid function in the cross-sectional profile and a line shaped pattern in the tangential profile. Hence, the MOMF is designed to approximate a sigmoid function in the cross-sectional profile and a line in the tangential profile. Such a filter, G_θ, of size (2n+1)×(2m+1) is given by:
G_θ(x, y) = 1/2 − 1/(1 + e^(−p/c))    (1)
where G_θ(x, y) represents the value of the filter G_θ at the location (x, y); θ denotes the orientation of the filter; c provides the scaling of the filter; while p handles the orientation and is given by:
p = x cos θ + y sin θ    (2)
−n ≤ x ≤ n, −m ≤ y ≤ m
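A NumPy sketch of one such oriented kernel per equations (1)-(2) follows; the half-sizes n, m and the scale c are illustrative assumptions:

```python
import numpy as np

def momf_kernel(theta, n=7, m=7, c=2.0):
    """Matched filter G_theta of size (2n+1) x (2m+1) per equations (1)-(2).

    The cross-sectional profile follows a shifted sigmoid and the
    tangential profile is line-shaped; the kernel has zero mean.
    """
    y, x = np.mgrid[-n:n + 1, -m:m + 1]
    p = x * np.cos(theta) + y * np.sin(theta)
    return 0.5 - 1.0 / (1.0 + np.exp(-p / c))
```

Because G_θ(−p) = −G_θ(p), the kernel sums to zero over its symmetric support, so a uniform image region produces no response.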
The MOMF at different orientations θ can be visualized from FIGS. 5A through 5C. More specifically, FIG. 5A depicts a Multi Orientation Matched Filter orientation at 0 degrees in accordance with an example embodiment of the present disclosure. FIG. 5B depicts the Multi Orientation Matched Filter orientation at 30 degrees in accordance with an example embodiment of the present disclosure. FIG. 5C depicts the Multi Orientation Matched Filter orientation at 90 degrees in accordance with an example embodiment of the present disclosure. It can be observed that the MOMF of the present disclosure is defined such that its mean is zero, hence it only provides strong edge information oriented in the direction θ. Since strong edges are present at multiple orientations, the MOMF of the present disclosure is applied at different fixed orientations; hence the filter is termed MOMF. Multiple filter responses are obtained by applying the oriented matched filters on the pre-processed image, and the final response at a pixel is given by the maximum filter response. Mathematically, the final filter response R is given by:
R(x, y) = max_{θ ∈ T} (G_θ(x, y) ⊛ I(x, y))    (3)
where ⊛ and T represent the convolution operator and the set of orientations, respectively. For visualization, consider FIGS. 6A and 6B, which depict the input image and the corresponding R, respectively. It can be seen that R contains high values at the strong edges and low values for the background and weak edges. More specifically, FIGS. 6A and 6B, with reference to FIGS. 1 through 5C, depict a Multi Orientation Matched Filter (MOMF) response on a skin-like background in accordance with an example embodiment of the present disclosure. The steps 306 and 308 are executed in the MOMF block 204 of FIG. 2.
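The merged response of equation (3) can be sketched as below, with the oriented kernel of equations (1)-(2) reproduced for self-containment; the set of six orientations and the kernel parameters are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def momf_kernel(theta, n=7, m=7, c=2.0):
    """Oriented matched filter G_theta per equations (1)-(2)."""
    y, x = np.mgrid[-n:n + 1, -m:m + 1]
    p = x * np.cos(theta) + y * np.sin(theta)
    return 0.5 - 1.0 / (1.0 + np.exp(-p / c))

def momf_response(img, thetas_deg=(0, 30, 60, 90, 120, 150), n=7, m=7, c=2.0):
    """Pixel-wise maximum of the oriented filter responses, equation (3)."""
    padded = np.pad(img.astype(float), ((n, n), (m, m)), mode='edge')
    windows = sliding_window_view(padded, (2 * n + 1, 2 * m + 1))
    best = None
    for t in thetas_deg:
        # Flip the kernel so the windowed dot product is a true convolution
        k = momf_kernel(np.deg2rad(t), n, m, c)[::-1, ::-1]
        r = np.einsum('ijkl,kl->ij', windows, k)
        best = r if best is None else np.maximum(best, r)
    return best
```

A strong (depth-like) step edge yields a large response while flat regions, where the zero-mean kernel cancels, yield none.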
As can be seen from FIGS. 6A and 6B, though weak edges were removed by applying the Gaussian smoothing technique on the input image during the pre-processing stage, some of the weak edges are still present in the final filter response (also referred to as the merged filter response) in the form of isolated blobs. In order to filter the remaining weak edges, the present disclosure employs a Morphological erosion technique. More specifically, in an embodiment of the present disclosure, at step 310, the one or more hardware processors 104 filter the one or more weak edges formed as one or more isolated blobs from the merged filter response to obtain a resultant edge map. The one or more weak edges formed as one or more isolated blobs are filtered or removed by applying the Morphological erosion technique on the merged filter response.
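The erosion step can be sketched in plain NumPy as below; the 3×3 square structuring element is an illustrative assumption:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def erode(binary, k=3):
    """Binary erosion with a k x k square structuring element.

    Isolated blobs smaller than the structuring element vanish entirely,
    which removes the remaining weak-edge remnants from the edge map.
    """
    pad = k // 2
    padded = np.pad(binary.astype(bool), pad, mode='constant',
                    constant_values=False)
    windows = sliding_window_view(padded, (k, k))
    # A pixel survives only if its whole neighbourhood is set
    return windows.all(axis=(-2, -1))
```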
In an embodiment of the present disclosure, at step 312, the one or more hardware processors 104 add the resultant edge map to the pre-processed image to obtain a resultant image. The resultant image comprises different texture regions that are isolated from each other, in one example embodiment. In other words, the resultant edge map is added to the original image (or pre-processed image) I using:
Ī = min(R(x, y) ⊕ I, 255)    (4)
where ⊕ and min denote pixel-wise addition and the pixel-wise minimum (saturation at 255), respectively. Due to the pixel-wise addition, the resultant image Ī contains higher values at the locations of strong edges, and these lie outside the range of skin color. The skin pixels are detected from Ī using a color segmentation technique known in the art. More specifically, at step 314, the one or more hardware processors 104 detect, using one or more chroma channels, a plurality of skin pixels from the resultant image. In other words, the system 100 utilizes only the chroma channels (C_b and C_r) for the detection because they exhibit better clustering of skin pixels and a uni-modal distribution. The threshold values for the chroma channels are: 77
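The final step of step 316, keeping the largest connected blob of detected skin pixels, can be sketched as below; 4-connectivity is an assumption, and the boolean skin mask itself would come from thresholding the chroma channels:

```python
import numpy as np
from collections import deque

def largest_blob(mask):
    """Keep only the largest 4-connected component of a boolean skin mask."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    best_label, best_size, current = 0, 0, 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                size = 0
                queue = deque([(i, j)])
                labels[i, j] = current
                while queue:  # breadth-first flood fill of one component
                    a, b = queue.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < h and 0 <= nb < w
                                and mask[na, nb] and labels[na, nb] == 0):
                            labels[na, nb] = current
                            queue.append((na, nb))
                if size > best_size:
                    best_size, best_label = size, current
    if best_size == 0:
        return np.zeros((h, w), dtype=bool)
    return labels == best_label
```

Smaller skin-colored background blobs are discarded, leaving only the region assumed to be the hand.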