
System And Method For Processing An Image To Determine A Match Of Random Hand Gesture

Abstract: A method for processing an image of a user (102) to determine a match of a random hand gesture is provided. The method includes determining a match of an indicated random hand gesture and a detected hand gesture of the user (102). The hand gesture of the user (102) is detected by (i) marking a plurality of landmark points to unique positions of the hand and fingers in the hand gesture of the image, (ii) mapping the plurality of landmark points by computing their X, Y coordinates, (iii) detecting at least one state of the hand based on the plurality of landmark points, (iv) computing a value for each of the at least one state of the hand, and (v) detecting the hand gesture of the user (102) by determining a match level between a computed value for each of the at least one state of the hand and a predetermined state value.


Patent Information

Application #: 202011055049
Filing Date: 17 December 2020
Publication Number: 25/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: ipo@myipstrategy.com
Parent Application:

Applicants

TRAQCHECK IT SERVICES PRIVATE LIMITED
C-193, DEFENCE COLONY, NEW DELHI-110024

Inventors

1. Jaibir Nihal Singh
C 193 Defence Colony, New Delhi, 110024, India.
2. Armaan Mehta
D 65 First Floor, Panchsheel Enclave, New Delhi, 110017, India
3. Rishabh Jain
H.No. - L - 3/22, Ground Floor, DLF Phase 2, Gurgaon, Haryana, 122002, India

Specification

Claims:
I/We Claim:
1. A processor-implemented method for processing an image of a user (102) to determine a match of a random hand gesture, the method comprising:
indicating a random hand gesture to the user (102), wherein the random hand gesture is presented through an interface;
generating a database with an image of the user (102), wherein the image is captured using an image capturing device (104) of the user (102), wherein the image comprises a face of the user (102) with at least one hand showing a hand gesture performed by the user (102);
characterized in that,
marking, using a machine learning model, a plurality of landmark points to unique positions of the at least one hand and fingers in the hand gesture of the image;
mapping, using the machine learning model, the plurality of landmark points to the at least one hand and at least one position of a finger by computing X, Y coordinates of the plurality of landmark points in the hand gesture of the image;
detecting, using the machine learning model, at least one state of the at least one hand based on the plurality of landmark points mapped to the hand gesture of the image, wherein the at least one state of the at least one hand comprises the at least one position of the finger, a direction of the finger, and an orientation of palm;
computing, using the machine learning model, a value for each of the at least one state of the hand;
detecting, using the machine learning model, the hand gesture of the user (102) by determining a match level between a computed value for each of the at least one state of the hand with a predetermined state value; and
determining a match of indicated random hand gesture and detected hand gesture of the user (102).

2. The processor-implemented method as claimed in claim 1, wherein the method comprises performing a comparative image analysis that determines a match level between the face of the user (102) in the image of the user (102) and the face of the user (102) in a personal photograph identification document of the user (102), the personal photograph identification document of the user (102) is obtained from the user (102) through the interface.

3. The processor-implemented method as claimed in claim 1, wherein the method comprises a confidence threshold, a verifying parameter used for matching the comparative image analysis and the detected hand gesture of the user (102).

4. The processor-implemented method as claimed in claim 1, wherein the method comprises communicating to indicate the hand gesture to the user (102), wherein the hand gesture is presented to the user (102) through the interface.
5. The processor-implemented method as claimed in claim 1, wherein the hand gesture is randomly generated by (i) defining at least one state of the hand, (ii) computing a random value for the at least one state of the hand, and (iii) presenting the value of the at least one state of the hand to the user (102).
6. The processor-implemented method as claimed in claim 2, wherein the comparative image analysis is performed with a cloud-based image analysis service.
7. The processor-implemented method as claimed in claim 1, wherein the plurality of landmark points comprise 21 landmark points.
8. The processor-implemented method as claimed in claim 1, wherein the detection of the hand gesture using a vision based machine learning model comprises a plurality of models that comprises a palm detector model that defines the palm of the user (102) on the image, a handmark detector model that defines the plurality of landmark points with their X,Y coordinates with respect to the hand and finger position on the image and a gesture recognizer model that determines the at least one state of the hand based on the computed X,Y coordinates to detect the hand gesture of the user (102).

9. A system for processing an image of a user (102) to determine a match of a random hand gesture, the system comprising:
a server (108) that is communicatively coupled with an image capturing device (104) associated with a user (102), wherein the server (108) comprises
a memory that stores a set of instructions and a set of modules;
a processor in communication with the memory, the processor retrieving and executing machine-readable program instructions from the memory which, when executed by the processor, enable the processor to:
indicating a random hand gesture to the user (102), wherein the random hand gesture is presented through an interface;
generating a database with an image of the user (102), wherein the image is captured using the image capturing device (104) of the user (102), wherein the image comprises a face of the user (102) with at least one hand showing a hand gesture performed by the user (102);
characterized in that,
marking, using a machine learning model, a plurality of landmark points to unique positions of the at least one hand and fingers in the hand gesture of the image;
mapping, using the machine learning model, the plurality of landmark points to the at least one hand and at least one position of a finger by computing X, Y coordinates of the plurality of landmark points in the hand gesture of the image;
detecting, using the machine learning model, at least one state of the at least one hand based on the plurality of landmark points mapped to the hand gesture of the image, wherein the at least one state of the at least one hand comprises the at least one position of the finger, a direction of the finger, and an orientation of palm;
computing, using the machine learning model, a value for each of the at least one state of the hand;
detecting, using the machine learning model, the hand gesture of the user (102) by determining a match level between a computed value for each of the at least one state of the hand with a predetermined state value; and
determining a match of indicated random hand gesture and detected hand gesture of the user (102).
10. The system as claimed in claim 9, wherein the system comprises performing a comparative image analysis that determines a match level between the face of the user (102) in the image of the user (102) and the face of the user (102) in a personal photograph identification document of the user (102), wherein the personal photograph identification document of the user (102) is obtained from the user (102) through the interface.

Description:
BACKGROUND
Technical Field
[0001] The embodiments herein generally relate to a system and method for user identity verification, and more specifically to a system and method for processing an image of a user to determine a match of a random hand gesture.
Description of the Related Art
[0002] Identity fraud is a growing worldwide problem. Stolen or fabricated identity information of an individual greatly affects both identity presenters and identity verifiers, as it costs both time and money. Further, identity fraud prevents identity presenters from positively identifying themselves, which is central to everyday activities. Hence, consent and data privacy are pivotal in the background check and identity verification process often used by businesses and/or government agencies to ensure that information provided by users is associated with the identity of a real person.
[0003] A background verification process can begin only after an individual gives consent. Existing background verification systems, in general, collect data of an individual by simply using a picture of their ID. It is very difficult for a company to obtain legitimate consent from the person because impersonation is easy. A third party can run a background verification on an individual by just submitting a picture of that individual's government-issued identification document. The problem is that people can run background checks on someone else without their consent. Similarly, identity theft can easily take place if an individual gets hold of someone else's ID. For example, to book an online car rental, an individual can submit someone else's ID and use the services.
[0004] Hand gesture recognition is attracting attention as a method of efficiently and naturally interacting and exchanging information between a user and smart information technology (IT) devices for identity verification. Current hand gesture recognition systems use classifiers based on machine learning algorithms to decide which hand gesture a person is performing in a given picture. This is done by having a labeled training dataset of a large number of pictures of a variety of gestures. If a gesture is not present in this training dataset, or if there are not enough pictures of a particular gesture to train the machine learning model, the system is not able to detect that particular gesture. If the gesture is common (such as a thumbs up), then an imposter can simply try to get a picture of the candidate from their social media and submit it to the system. By definition, this machine learning classifier solution can only identify common hand gestures. Traditional hand gesture recognition systems would train a machine-learning-based classifier on a dataset of "labeled images" - an image of a hand gesture along with a text accompanying the image indicating what the gesture is. Such a system could then be used to make inferences on future input images to classify each image into one of the gesture types it encountered in the dataset. However, such a solution is not advantageous for identity verification: if a large dataset of images of a gesture exists, the gesture is inherently common, and common gestures are poor choices where unusual and rare hand gestures are needed for efficient identity verification.
[0005] Therefore, there arises a need to address the aforementioned technical drawbacks in existing technologies for user identity verification using a random hand gesture.
SUMMARY
[0006] In view of the foregoing, an embodiment herein provides a method for processing an image of a user to determine a match of a random hand gesture. The method includes the steps of (i) indicating a random hand gesture to the user, the random hand gesture is presented through an interface, (ii) generating a database with an image of the user, the image is captured using an image capturing device of the user, the image includes face of the user with at least one hand showing a hand gesture performed by the user, (iii) marking, using a machine learning model, a plurality of landmark points to unique positions of the at least one hand and fingers in the hand gesture of the image, (iv) mapping, using the machine learning model, the plurality of landmark points to the at least one hand and at least one position of a finger by computing X, Y coordinates of the plurality of landmark points in the hand gesture of the image, (v) detecting, using the machine learning model, at least one state of the at least one hand based on the plurality of landmark points mapped to the hand gesture of the image, the at least one state of the at least one hand includes the at least one position of the finger, a direction of the finger, and an orientation of palm, (vi) computing, using the machine learning model, a value for each of the at least one state of the hand, (vii) detecting, using the machine learning model, the hand gesture of the user by determining a match level between a computed value for each of the at least one state of the hand with a predetermined state value, and (viii) determining a match of indicated random hand gesture and detected hand gesture of the user.
[0007] In some embodiments, the method includes performing a comparative image analysis that determines a match level between the face of the user in the image of the user and the face of the user in a personal photograph identification document of the user, wherein the personal photograph identification document of the user is obtained from the user through the interface.
[0008] In some embodiments, the method includes a confidence threshold, a verifying parameter used for matching the comparative image analysis, and the detected hand gesture of the user.
[0009] In some embodiments, the method includes communicating to indicate the hand gesture to the user, the hand gesture is presented to the user through the interface.
[0010] In some embodiments, the hand gesture is randomly generated by (i) defining at least one state of the hand, (ii) computing a random value for the at least one state of the hand, and (iii) presenting the value of the at least one state of the hand to the user.
[0011] In some embodiments, the comparative image analysis is performed with a cloud-based image analysis service.
[0012] In some embodiments, the plurality of landmark points includes 21 landmark points.
[0013] In some embodiments, the detection of the hand gesture using a vision-based machine learning model includes a plurality of models that includes a palm detector model that defines the palm of the user on the image, a handmark detector model that defines the plurality of landmark points with their X, Y coordinates with respect to the hand and finger position on the image and a gesture recognizer model that determines the at least one state of the hand based on the computed X, Y coordinates to detect the hand gesture of the user.
[0014] In another aspect, a system for processing an image of a user to determine a match of a random hand gesture is provided. The system includes a server that is communicatively coupled with an image capturing device associated with a user. The server includes a memory that stores a set of instructions and a set of modules and a processor that executes the set of instructions and is configured to (i) indicating a random hand gesture to the user, the random hand gesture is presented through an interface, (ii) generating a database with an image of the user, the image is captured using an image capturing device of the user, the image includes face of the user with at least one hand showing a hand gesture performed by the user, (iii) marking, using a machine learning model, a plurality of landmark points to unique positions of the at least one hand and fingers in the hand gesture of the image, (iv) mapping, using the machine learning model, the plurality of landmark points to the at least one hand and at least one position of a finger by computing X, Y coordinates of the plurality of landmark points in the hand gesture of the image, (v) detecting, using the machine learning model, at least one state of the at least one hand based on the plurality of landmark points mapped to the hand gesture of the image, the at least one state of the at least one hand includes the at least one position of the finger, a direction of the finger, and an orientation of palm, (vi) computing, using the machine learning model, a value for each of the at least one state of the hand, (vii) detecting, using the machine learning model, the hand gesture of the user by determining a match level between a computed value for each of the at least one state of the hand with a predetermined state value, and (viii) determining a match of indicated random hand gesture and detected hand gesture of the user.
[0015] In some embodiments, the system includes performing a comparative image analysis that determines a match level between the face of the user in the image of the user and the face of the user in a personal photograph identification document of the user, the personal photograph identification document of the user is obtained from the user through the interface.
[0016] The system uniquely identifies an individual and authenticates identity in a completely automated manner, thus saving time and money. The system uses unusual and unorthodox hand gestures to avoid impersonation. The system provides a foolproof method of instantly verifying an individual and obtaining consent from the user to perform the verification. The system can be employed in online retail environments that require employee/customer verification.
[0017] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments herein without departing from the spirit thereof, and the embodiments herein include all such modifications.

BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0019] FIG. 1 is a block diagram that illustrates a system view of processing an image of a user to determine a match of a random hand gesture according to some embodiments herein;
[0020] FIG. 2 is a block diagram of a server of FIG. 1 according to some embodiments herein;
[0021] FIGS. 3A and 3B are flow diagrams that illustrate a method for processing an image of a user to determine a match of a random hand gesture according to some embodiments herein;
[0022] FIG. 4 is a flow diagram that illustrates a method for processing an image of a user to determine a match of a face and a random hand gesture according to some embodiments herein;
[0023] FIG. 5 illustrates an exploded view of the image capturing device of FIG. 1 according to some embodiments herein; and
[0024] FIG. 6 is a schematic diagram of a computer architecture in accordance with the
embodiments herein.
DETAILED DESCRIPTION OF THE DRAWINGS
[0025] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0026] As mentioned, there remains a need for a system and method for processing an image of a user to determine a match of a random hand gesture. The embodiments herein achieve this by proposing a system that determines a match of a random and an unorthodox hand gesture for verifying user identity. Referring now to the drawings, and more particularly to FIGS. 1 through 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments.
[0027] FIG. 1 is a block diagram 100 that illustrates a system view of processing an image of a user 102 to determine a match of a random hand gesture according to some embodiments herein. The block diagram 100 includes an image capturing device 104 and a server 108. The server 108 processes the image of the user 102 to determine a match of the random hand gesture. The server 108 may obtain input from the image capturing device 104 through a network 106.
[0028] In some embodiments, the image capturing device 104 may be a mobile phone, a Kindle, a PDA (Personal Digital Assistant), a tablet, a music player, a computer, an electronic notebook, or a smartphone. The server 108 receives and processes the image from the image capturing device 104 through the network 106 to determine the match of the random hand gesture.
[0029] In some embodiments, the network 106 is a wired network. In some embodiments, the network 106 is a wireless network. In some embodiments, the network 106 is a combination of a wired network and a wireless network. In some embodiments, the network 106 is the Internet.
[0030] The server 108 indicates the random hand gesture to the user 102. The random hand gesture is presented to the user 102 through an interface. In some embodiments, the server 108 communicates with the user 102 to indicate the hand gesture to the user 102. The server 108 marks a plurality of landmark points to unique positions of the at least one hand and fingers in the hand gesture of the image using a machine learning model. In some embodiments, the plurality of landmark points includes 21 landmark points. The server 108 maps the plurality of landmark points to the at least one hand and at least one position of a finger by computing X, Y coordinates of the plurality of landmark points in the hand gesture of the image using the machine learning model. The server 108 detects at least one state of the at least one hand based on the plurality of landmark points mapped to the hand gesture of the image using the machine learning model. The at least one state of the at least one hand includes the at least one position of the finger, a direction of the finger, and an orientation of palm. The machine learning model used to detect the hand gesture may be a vision-based machine learning model. The machine learning model may include a plurality of models that includes a palm detector model that defines the palm of the user 102 on the image, a handmark detector model that defines the plurality of landmark points with their X, Y coordinates with respect to the hand and finger position on the image and a gesture recognizer model that determines the at least one state of the hand based on the computed X, Y coordinates to detect the hand gesture of the user 102. In some embodiments, the server 108 generates the random hand gesture to present to the user 102 through the interface. The hand gesture may be randomly generated by (i) defining at least one state of the hand, (ii) computing a random value for the at least one state of the hand, and (iii) presenting the value of the at least one state of the hand to the user 102. In some embodiments, the server 108 presents the random hand gesture as an image to the user 102 through the interface. The image may be an image of an individual performing a particular hand gesture. The server 108 may be configured to extract the particular hand gesture performed by the individual from the image by mapping the plurality of landmark points to specific hand positions. Boolean values may be determined for a certain set of states from the image. A unique set of these values corresponding to a state defines a hand gesture. The server 108 may store the representation of the hand gesture for future use.
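As an illustrative, non-limiting sketch of the landmark-marking step, the 21 landmark X, Y coordinates may be extracted as follows. The sketch assumes the open-source MediaPipe Hands solution (which combines a palm detector and a hand landmark model); the embodiments herein do not mandate a particular library, and the function name extract_landmark_points is used for illustration only.

import cv2
import mediapipe as mp

def extract_landmark_points(image_path):
    # Read the image and convert BGR (OpenCV default) to RGB for MediaPipe.
    image = cv2.imread(image_path)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(rgb)
    if not results.multi_hand_landmarks:
        return None  # no hand was detected in the image
    hand = results.multi_hand_landmarks[0]
    # 21 landmark points, each with normalized X, Y coordinates in [0, 1].
    return [(lm.x, lm.y) for lm in hand.landmark]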
[0031] The server 108 computes a value for each of the at least one state of the hand using the machine learning model. The server 108 detects the hand gesture of the user 102 by determining a match level between a computed value for each of the at least one state of the hand with a predetermined state value using the machine learning model. The server 108 determines a match of the indicated random hand gesture and the detected hand gesture of the user 102.
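A minimal sketch of the match-level check between the computed state values and the predetermined state values is given below; the state names, the dictionary representation, and the tolerance of 10 degrees are assumptions made for illustration rather than the filed implementation.

def states_match(computed, expected, angle_tolerance=10):
    # computed / expected: dicts mapping a state name to its value, for example
    # {"index_finger_open": True, "palm_angle": 30}; the names are illustrative.
    for name, expected_value in expected.items():
        value = computed[name]
        if name == "palm_angle":
            # The orientation state is matched fuzzily, within +/- angle_tolerance degrees.
            if abs(value - expected_value) > angle_tolerance:
                return False
        elif value != expected_value:
            return False
    return True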
[0032] The vision-based machine learning model may be trained with images of hands along with landmark points annotated on those images.
[0033] In some embodiments, the method includes performing a comparative image analysis that determines a match level between the face of the user 102 in the image of the user 102 and the face of the user 102 in a personal photograph identification document of the user 102. The personal photograph identification document is obtained from the user 102 through the interface. In some embodiments, the server 108 receives the personal photograph identification document from the user 102 through a WhatsApp interface. In some embodiments, the server 108 receives the personal photograph identification document from the user 102 through a React-based web interface. In some embodiments, the server 108 includes a confidence threshold, a verifying parameter for matching the comparative image analysis and the detected hand gesture of the user 102. The comparative image analysis compares the similarity between two images, i.e., the image of the user 102 in the personal photograph identification document and the image of the user 102 including the face with at least one hand showing the hand gesture performed by the user 102. In some embodiments, the comparative image analysis is performed with a cloud-based image analysis service. The cloud-based image analysis service may be AWS Rekognition.
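The comparative image analysis may be sketched, for illustration, with the AWS Rekognition CompareFaces API via boto3; the helper name faces_match, the file-path parameters, and the default threshold of 80 are assumptions and not part of the specification.

import boto3

def faces_match(id_document_path, selfie_path, threshold=80.0):
    # CompareFaces returns the candidate face pairs whose similarity meets the threshold.
    client = boto3.client("rekognition")
    with open(id_document_path, "rb") as src, open(selfie_path, "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,
        )
    # A non-empty FaceMatches list means at least one face pair met the threshold.
    return len(response.get("FaceMatches", [])) > 0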
[0034] In some embodiments, the confidence threshold can be customized according to the requirements of the system. For instance, if the similarity between two faces is a value between 0 and 100, then the confidence threshold is a value between 0 and 100, and if the similarity is greater than or equal to the confidence threshold, then the two faces are matched.
[0035] FIG. 2 is a block diagram of a server 108 of FIG. 1 according to some embodiments herein. The server 108 includes a database 202, a random hand gesture indication module 204, a database generation module 206, and a random hand gesture recognition module 208. The random hand gesture indication module 204 indicates a random hand gesture to the user 102 through an interface. The database generation module 206 obtains an image of the user 102 from an image capturing device 104 of the user 102 that includes a face of the user 102 with at least one hand showing a hand gesture performed by the user 102. The database generation module 206 generates a database with the image of the user 102 and stores it in the database 202. The random hand gesture recognition module 208 marks a plurality of landmark points to unique positions of the at least one hand and fingers in the hand gesture of the image using a machine learning model. The random hand gesture recognition module 208 maps the plurality of landmark points to the at least one hand and at least one position of a finger by computing X, Y coordinates of the plurality of landmark points in the hand gesture of the image using a machine learning model. The random hand gesture recognition module 208 detects at least one state of the at least one hand that includes at least one position of the finger, a direction of the finger, and an orientation of palm, based on the plurality of landmark points mapped to the hand gesture of the image using a machine learning model. The random hand gesture recognition module 208 computes a value for each of the at least one state of the hand using a machine learning model. The random hand gesture recognition module 208 detects the hand gesture of the user 102 by determining a match level between a computed value for each of the at least one state of the hand with a predetermined state value using a machine learning model. The random hand gesture recognition module 208 determines a match of the indicated random hand gesture and the detected hand gesture of the user 102.
[0015] FIG. 3A and 3B are flow diagrams that illustrate a method 300 for processing an image of a user 102 to determine a match of a random hand gesture according to some embodiments herein. At step 302, the method 300 includes indicating a random hand gesture to the user 102. At step 304, the method 300 includes generating a database with an image of the user 102 that includes a face of the user 102 with at least one hand showing a hand gesture performed by the user 102. At step 306, the method 300 includes marking a plurality of landmark points to unique positions of the at least one hand and fingers in the hand gesture of the image using a machine learning model. At step 308, the method 300 includes mapping the plurality of landmark points to the at least one hand and at least one position of a finger by computing X, Y coordinates of the plurality of landmark points in the hand gesture of the image using a machine learning model. At step 310, the method 300 includes detecting at least one state of the at least one hand based on the plurality of landmark points mapped to the hand gesture of the image using a machine learning model. At step 312, the method 300 includes computing a value for each of the at least one state of the hand using a machine learning model. At step 314, the method 300 includes detecting the hand gesture of the user 102 by determining a match level between a computed value for each of the at least one state of the hand with a predetermined state value using a machine learning model. At step 316, the method 300 includes determining a match of indicated random hand gesture and the detected hand gesture of the user 102.
[0016] In an example embodiment, the at least one state of the hand may be defined as follows (Python 3.7). Here, points is a list of 21 X, Y coordinates returned by the model after being fed an image of a hand, so that assert type(points) == list and assert len(points) == 21 hold. Assumption 1: the 10th point maps to the bottom of the index finger and the 12th point maps to the top of the index finger.

def indexFingerOpenInUprightHand(points):
    # Returns True if the index finger is open in an upright hand.
    return points[11].y > points[9].y

Assumption 2: the 15th point maps to the top of the middle finger.

def indexFingerMiddleFingerCrossed(points):
    # Returns True if the index finger is open and crossed with the middle finger.
    return indexFingerOpenInUprightHand(points) and points[11].x > points[14].x

Many such functions can be created to detect different attributes of the hand, such as whether any finger is open or closed, whether the hand is upright or facing downwards, whether the front or the back of the palm is facing the camera, the degree of orientation of the palm, whether any finger is half-open, and so on. These functions can be used to define arbitrary hand gestures. For example, to detect whether a person is holding up three fingers:

def detectThreeFingerGesture(points):
    # Returns True if the hand is holding up exactly three fingers in the image.
    # thumbOpen, indexFingerOpen, etc. are further state functions of the same kind.
    fingers_open = sum([thumbOpen(points), indexFingerOpen(points), middleFingerOpen(points),
                        ringFingerOpen(points), pinkieFingerOpen(points)])
    return fingers_open == 3
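A hypothetical usage sketch (the gesture definition and function names below are illustrative and not part of the example embodiment) shows how such state functions could be combined into a named gesture and checked against the landmark points of an image:

# Hypothetical gesture definition: the gesture matches when every state function
# returns its expected value for the given landmark points.
EXAMPLE_GESTURE = {
    indexFingerOpenInUprightHand: True,
    indexFingerMiddleFingerCrossed: False,
}

def matches_gesture(points, gesture=EXAMPLE_GESTURE):
    return all(state_fn(points) == expected for state_fn, expected in gesture.items())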
[0017] In an example embodiment, the random hand gesture is the letter "P" from American Sign Language (ASL). The at least one state of the hand is represented as (i) all fingers open, (ii) thumb and index finger connected (not crossed), and (iii) palm at a 75-90 degree angle.
In some embodiments, the random hand gesture is generated from scratch. A plurality of states of the hand is defined to present uncommon and unorthodox hand gestures. The values of the at least one state of the hand are presented to the user 102, who performs the gesture and then feeds their image into the system. The system then detects the states of the hand in the given image, and if they match with what it generated earlier, it can conclude that the user 102 has successfully performed the gesture. In an example embodiment, the following three states of the hand are considered: (i) Middle finger open. The possible values for this state of the hand are true and false. (ii) Index finger open. The possible values for this state of the hand are true and false. (iii) Palm angle orientation. The possible values for this state of the hand range from -90 to +90 degrees. Larger values are excluded to avoid uncomfortable gestures. The palm angle orientation state is fuzzy and does not expect the user 102 to get the orientation exactly right. If the orientation is supposed to be 30 degrees, the state is accepted even if the orientation detected in the picture is anywhere between 20 and 40 degrees. The algorithm to generate the random hand gesture goes through these 3 states and computes a random value for each state based on its possible values. An example of such a traversal is:
1) Middle finger open = false
2) Index finger open = true
3) Palm angle orientation = 30 degrees
The system will propagate the above states and their values to the user 102 and request the user 102 to perform a gesture that satisfies all the above values. After receiving the image of the user's attempt, the system will use the machine learning algorithm to generate the 21 landmark points, compute the state values, and match them with the earlier generated values. If the values match with respect to a confidence threshold, it indicates that the user 102 successfully performed the gesture.
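An illustrative sketch of the random generation step described above is given below; the dictionary representation and state names are assumptions made for illustration.

import random

def generate_random_gesture():
    # One random value per defined state; palm angles outside -90 to +90 degrees
    # are excluded to avoid uncomfortable gestures, as described above.
    return {
        "middle_finger_open": random.choice([True, False]),
        "index_finger_open": random.choice([True, False]),
        "palm_angle": random.randint(-90, 90),
    }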
[0015] FIG. 4 is a flow diagram illustrating a method for processing an image of a user 102 to determine a match of a face and a random hand gesture according to some embodiments herein. At step 402, an image of the personal photograph identification document of the user 102 is obtained. At step 404, a random hand gesture is generated for the user 102. At step 406, an image of the user 102 including a face of the user 102 with at least one hand showing a hand gesture as indicated in the random hand gesture is obtained. At step 408, the matching process is initialized. At step 410, the face recognition is performed by (i) making a request to AWS Rekognition with the image of the personal photograph identification document and the image obtained from the user 102, and (ii) parsing the response and checking whether the confidence level of the match is greater than 80%. At step 412, the hand gesture recognition is performed by (i) defining a plurality of landmark points, and (ii) querying the corresponding function of the random hand gesture using the plurality of landmark points to detect the gesture. At step 414, the consent is confirmed if the inputs from both the face recognition and the hand gesture recognition are true. At step 416, a consent boolean is presented through an interface.
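A hypothetical end-to-end sketch of the FIG. 4 flow is given below; it reuses the illustrative helpers faces_match and extract_landmark_points sketched earlier, and target_gesture stands for any boolean predicate over the extracted landmark points. None of these names come from the specification.

def verify_consent(id_document_path, selfie_path, target_gesture):
    # Step 410: face recognition against the photograph identification document.
    face_ok = faces_match(id_document_path, selfie_path, threshold=80.0)
    # Step 412: hand gesture recognition from the 21 landmark points.
    points = extract_landmark_points(selfie_path)
    gesture_ok = points is not None and target_gesture(points)
    # Steps 414-416: consent is confirmed only if both checks are true.
    return face_ok and gesture_ok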
[0016] FIG. 5 illustrates an exploded view of the image capturing device 104 of FIG. 1 according to some embodiments herein. The image capturing device 104 includes a memory 502 having a set of computer instructions, a bus 504, a display 506, a speaker 508, and a processor 510 capable of processing a set of instructions to perform any one or more of the methodologies herein, according to an embodiment herein. The processor 510 may also enable digital content to be consumed in the form of a video for output via one or more displays 506 or audio for output via the speaker and/or earphones 508. The processor 510 may also carry out the methods described herein and in accordance with the embodiments herein.
[0017] The embodiments herein may include a computer program product configured to include a pre-configured set of instructions, which when performed, can result in actions as stated in conjunction with the methods described above. In an example, the pre-configured set of instructions can be stored on a tangible non-transitory computer readable medium or a program storage device. In an example, the tangible non-transitory computer readable medium can be configured to include the set of instructions, which when performed by a device, can cause the device to perform acts similar to the ones described here. Embodiments herein may also include tangible and/or non-transitory computer-readable storage media for carrying or having computer executable instructions or data structures stored thereon.
[0018] Generally, program modules utilized herein include routines, programs, components, data structures, objects, and the functions inherent in the design of special-purpose processors, etc. that perform particular tasks or implement particular abstract data types. Computer executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
[0019] The embodiments herein can include both hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc.
[0020] A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.
[0021] Input/output (I/O) devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
[0022] A representative hardware environment for practicing the embodiments herein is depicted in FIG. 6, with reference to FIGS. 1 through 5. This schematic drawing illustrates a hardware configuration of a server/computer system/ image capturing device in accordance with the embodiments herein. The image capturing device includes at least one processing device 10 and a cryptographic processor 11. The special-purpose CPU 10 and the cryptographic processor (CP) 11 may be interconnected via system bus 14 to various devices such as a random access memory (RAM) 15, read-only memory (ROM) 16, and an input/output (I/O) adapter 17. The I/O adapter 17 can connect to peripheral devices, such as disk units 12 and tape drives 13, or other program storage devices that are readable by the system. The image capturing device can read the inventive instructions on the program storage devices and follow these instructions to execute the methodology of the embodiments herein. The image capturing device further includes a user interface adapter 20 that connects a keyboard 18, mouse 19, speaker 25, microphone 23, and/or other user interface devices such as a touch screen device (not shown) to the bus 14 to gather user input. Additionally, a communication adapter 21 connects the bus 14 to a data processing network 26, and a display adapter 22 connects the bus 14 to a display device 24, which provides a graphical user interface (GUI) 30 of the output data in accordance with the embodiments herein, or which may be embodied as an output device such as a monitor, printer, or transmitter, for example. Further, a transceiver 27, a signal comparator 28, and a signal converter 29 may be connected with the bus 14 for processing, transmission, receipt, comparison, and conversion of electric or electronic signals.
[0023] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the appended claims.

Documents

Application Documents

# Name Date
1 202011055049-STATEMENT OF UNDERTAKING (FORM 3) [17-12-2020(online)].pdf 2020-12-17
2 202011055049-PROOF OF RIGHT [17-12-2020(online)].pdf 2020-12-17
3 202011055049-FORM FOR STARTUP [17-12-2020(online)].pdf 2020-12-17
4 202011055049-FORM FOR SMALL ENTITY(FORM-28) [17-12-2020(online)].pdf 2020-12-17
5 202011055049-FORM 1 [17-12-2020(online)].pdf 2020-12-17
6 202011055049-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [17-12-2020(online)].pdf 2020-12-17
7 202011055049-EVIDENCE FOR REGISTRATION UNDER SSI [17-12-2020(online)].pdf 2020-12-17
8 202011055049-DRAWINGS [17-12-2020(online)].pdf 2020-12-17
9 202011055049-DECLARATION OF INVENTORSHIP (FORM 5) [17-12-2020(online)].pdf 2020-12-17
10 202011055049-COMPLETE SPECIFICATION [17-12-2020(online)].pdf 2020-12-17
11 202011055049-FORM-26 [31-12-2020(online)].pdf 2020-12-31
12 202011055049-STARTUP [11-12-2024(online)].pdf 2024-12-11
13 202011055049-FORM28 [11-12-2024(online)].pdf 2024-12-11
14 202011055049-FORM 18A [11-12-2024(online)].pdf 2024-12-11
15 202011055049-FER.pdf 2024-12-19
16 202011055049-OTHERS [29-04-2025(online)].pdf 2025-04-29
17 202011055049-FER_SER_REPLY [29-04-2025(online)].pdf 2025-04-29
18 202011055049-DRAWING [29-04-2025(online)].pdf 2025-04-29
19 202011055049-CORRESPONDENCE [29-04-2025(online)].pdf 2025-04-29
20 202011055049-COMPLETE SPECIFICATION [29-04-2025(online)].pdf 2025-04-29
21 202011055049-US(14)-HearingNotice-(HearingDate-22-07-2025).pdf 2025-06-05
22 202011055049-Correspondence to notify the Controller [16-07-2025(online)].pdf 2025-07-16
23 202011055049-US(14)-ExtendedHearingNotice-(HearingDate-29-07-2025)-1130.pdf 2025-07-22
24 202011055049-Correspondence to notify the Controller [23-07-2025(online)].pdf 2025-07-23
25 202011055049-Written submissions and relevant documents [13-08-2025(online)].pdf 2025-08-13

Search Strategy

1 searhcE_19-12-2024.pdf
2 202011055049_SearchStrategyAmended_E_gestrdesAE_05-06-2025.pdf