
A System For Recognizing A User And A Method Thereof

Abstract: The present invention provides a system (100) and a method (200) for recognizing a user. The system (100) comprises one or more image sensors (110) configured to capture real time images of a vicinity of the one or more image sensors (110); and a processing unit (120) that has one or more processing modules configured to receive the real time images from the one or more image sensors (110) and being configured to determine a face of an authorised primary user from a plurality of faces in the real time images based on a set of predefined functions. Reference: Figure 2


Patent Information

Application #
Filing Date
20 July 2023
Publication Number
05/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

TVS MOTOR COMPANY LIMITED
“Chaitanya” No.12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu - 600006 India

Inventors

1. SUMEET SHEKHAR
TVS Motor Company Limited “Chaitanya” No 12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu 600006 India
2. RAJAN SIPPY
TVS Motor Company Limited “Chaitanya” No 12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu 600006 India
3. NARESH ADEPU
TVS Motor Company Limited “Chaitanya” No 12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu 600006 India
4. ATHARVA KADETHANKAR
TVS Motor Company Limited “Chaitanya” No 12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu 600006 India
5. MANISH SHARMA
TVS Motor Company Limited “Chaitanya” No 12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu 600006 India
6. SIDDAPURA NAGARAJU PRASHANTH
TVS Motor Company Limited “Chaitanya” No 12 Khader Nawaz Khan Road, Nungambakkam Chennai Tamil Nadu 600006 India

Specification

Description:

FIELD OF THE INVENTION
[001] The present invention relates to recognizing a user. More particularly, the present invention relates to a system and method for recognizing a user.

BACKGROUND OF THE INVENTION
[002] Face recognition is a biometric technology for identity recognition based on unique facial feature information of a person. Face recognition technology has made significant advancements in various fields, including security and automotive industries. Typically, a face recognition system includes a component that captures an image of a person. This image is then processed using associated circuitry and software, which compare it with stored images. When a positive match is found between the acquired image and a pre-stored image, the system determines the individual as an authorized user.
[003] However, in existing face recognition systems, faces can be photographed and printed to fool the system, or spoofed with nearly identical masks. Specifically, face recognition systems often struggle to identify the authorized user accurately and reliably, especially in scenarios where a live person's face needs to be distinguished from static images or videos. This is because conventional face recognition systems depend on hardware that is not capable of detecting the difference between a live person and a static image or video of that person. Consequently, the conventional systems are not spoof-proof, making it possible to access the vehicle easily by using a static photo or video of the authorized user.
[004] Furthermore, existing systems are unable to differentiate between two individuals who resemble each other, leading to potential threats to the accessibility and security of the vehicle. Additionally, these systems fail to address the issue of detecting the primary user's face in scenarios where multiple faces are present around the vehicle. In crowded places, conventional systems fail to identify the intended user and may grant access to unauthorized individuals solely based on their proximity to the system. This inability to judge accurately whether a person intends to use the face recognition system or is merely standing nearby poses a significant security risk.
[005] Moreover, the functionality of conventional face recognition systems is limited to basic start-stop functionality of the vehicle. However, there are other applications of face recognition, such as those based on mobile computational devices and angle of approach, which conventional systems fail to address due to their dependency on specific hardware configurations.
[006] Thus, there is a need in the art for a system and method for recognizing a user which addresses at least the aforementioned problems.

SUMMARY OF THE INVENTION
[007] In one aspect, the present invention relates to a system for recognizing a user. The system has one or more image sensors configured to capture real time images of a vicinity of the one or more image sensors. The system has a processing unit that has one or more processing modules configured to receive the real time images from the one or more image sensors and being configured to determine a face of an authorised primary user from a plurality of faces in the real time images based on a set of predefined functions.
[008] In an embodiment of the invention, the one or more processing modules are configured to: determine a plurality of faces from the real time images; determine a face of a primary user from the plurality of faces in the real time images; determine whether the face of the primary user is a real face or a spoofed face; and determine whether the primary user is an authorized user, if the face of the primary user is a real face.
[009] In a further embodiment of the invention, the set of predefined functions includes one or more of a surface area comparison function, an eye angle comparison feature, a histogram analysis feature and a facial comparison feature.
[010] In an embodiment of the invention, the processing unit has a first module configured to determine the face of the primary user from the plurality of faces in the real time images based on one or more parameters of the face.
[011] In a further embodiment of the invention, the processing unit is configured to determine a plurality of faces in the received real time images and determine the face of the primary user based on the one or more parameters of the face. Herein, the one or more parameters of the face include a surface area of the plurality of faces in the real time images, and an angle of eye of the plurality of faces in the real time images with respect to the one or more image sensors.
[012] In a further embodiment of the invention, the face having a combination of the surface area with the maximum number of pixels and a predefined angle with respect to the one or more image sensors is determined as the face of the primary user from the plurality of faces.
[013] In a further embodiment of the invention, the processing unit has a second module and a third module. The second module is configured to determine whether the determined face of the primary user is a real face based on histogram analysis of the real time images. The third module is configured to determine whether the face of the determined primary user is the face of the authorized user based on comparison of the face of the determined primary user to a database of facial recognition data for authorized users.
[014] In a further embodiment of the invention, the system has an illumination device and an auxiliary sensor unit. The auxiliary sensor unit is in communication with the processing unit and is configured to detect a level of ambient light around the one or more image sensors, and to switch on the illumination device if the ambient light is below a predetermined threshold value of ambient light.
[015] In a further embodiment of the invention, the auxiliary sensor unit is configured to determine whether one or more vehicle parameters are below a first predetermined threshold. The processing unit is configured to switch off the one or more image sensors or the illumination device, or to switch off the system, if the one or more vehicle parameters are below the first predetermined threshold.
[016] In an embodiment of the invention, the processing unit has a vision processing unit in communication with the first module, the second module and the third module, the vision processing unit being configured to receive inputs from hardware through an operating system and a hardware abstraction layer.
[017] In another aspect, the present invention relates to a method for recognizing a user. The method has the steps of: capturing real time images of a vicinity of the one or more image sensors; receiving the real time images from the one or more image sensors; and determining, by the one or more modules of the processing unit, a face of an authorised primary user from a plurality of faces in the real time images based on a set of predefined functions.
[018] In an embodiment of the invention, the method has the steps of determining a plurality of faces from the real time images; determining a face of a primary user from the plurality of faces in the real time images; determining whether the face of the primary user is a real face or a spoofed face; and determining whether the primary user is an authorized user, if the face of the primary user is a real face.
[019] In an embodiment of the invention, the method has the step of determining a face of a primary user from the plurality of faces in the real time images based on one or more parameters of the face.
[020] In an embodiment of the invention, the method has the steps of: determining a plurality of faces in the received real time images; and determining the face of the primary user based on the one or more parameters of the face, wherein the one or more parameters of the face comprise a surface area of the plurality of faces in the real time images, and an angle of eye of the plurality of faces in the real time images with respect to the one or more image sensors.
[021] In an embodiment of the invention, the face having a combination of the surface area with the maximum number of pixels and a predefined angle with respect to the one or more image sensors is determined as the face of the primary user from the plurality of faces.
[022] In an embodiment of the invention, the method has the steps of: determining whether the determined face of the primary user is a real face based on histogram analysis of the real time images; and determining whether the face of the determined primary user is the face of the authorized user based on comparison of the face of the determined primary user to a database of facial recognition data for authorized users.

BRIEF DESCRIPTION OF THE DRAWINGS
[023] Reference will be made to embodiments of the invention, examples of which may be illustrated in the accompanying figures. These figures are intended to be illustrative, not limiting. Although the invention is generally described in the context of these embodiments, it should be understood that it is not intended to limit the scope of the invention to these particular embodiments.
Figure 1 illustrates a system for recognizing a user in a vehicle, in accordance with an embodiment of the present invention.
Figure 2 illustrates a method for recognizing a user, in accordance with an embodiment of the present invention.
Figure 3 illustrates a detailed method for recognizing a user, in accordance with an embodiment of the present invention.
Figures 4A and 4B illustrate a method for training modules of the processing unit, in accordance with an embodiment of the present invention.
Figure 5 illustrates a process flow of system and method for recognizing a user, in accordance with an embodiment of the present invention.
Figure 6 illustrates a software architecture for the system and method for recognizing a user, in accordance with an embodiment of the present invention.


DETAILED DESCRIPTION OF THE INVENTION
[024] The present invention relates to recognizing a user. More particularly, the present invention relates to a system and method for recognizing a user. The system and method of the present invention are typically used in a vehicle such as a two wheeled vehicle, a three wheeled vehicle including trikes, a four wheeled vehicle, or other multi-wheeled vehicles as required. In addition, the system and method of the present invention are also used in office spaces.
[025] Figure 1 illustrates a system 100 for recognizing a user. As illustrated, the system 100 comprises one or more image sensors 110. The one or more image sensors 110 are configured to capture real time images of a vicinity of the one or more image sensors. In one instance, the one or more image sensors 110 are configured to capture real time images of a user. In essence, the one or more image sensors 110 capture a series of real time images, or video feed or live feed of the user. The real time images of the user are captured as soon as the vehicle ignition is switched on. The real time images, or video feed or live feed are a series of individual image frames, which can be analysed for recognizing a user. In an embodiment, the one or more image sensors 110 comprises one or more of a camera, a Red-Green-Blue wavelength camera, a Red-Green-Blue-Infrared wavelength camera, an Infrared camera, a Monochrome camera, a Thermal camera, a Radio Detection and Ranging camera, a Light Detection and Ranging camera, or a Time-of-Flight camera.
[026] As illustrated in Figure 1, the system 100 further comprises a processing unit 120. The processing unit 120 comprises one or more processing modules. The processing unit 120 is configured to receive the real time images from the one or more image sensors 110. The processing unit 120 is configured to grab a particular real time image or a frame from the video stream. Subsequently, the processing unit 120 is configured to determine a face of an authorised primary user from a plurality of faces in the real time images based on a set of predefined functions.
[027] In an embodiment, the set of predefined functions comprise one or more of a surface area comparison function, an eye angle comparison feature, a histogram analysis feature and a facial comparison feature as explained hereinbelow.
[028] The processing modules process the particular real time image or the said frame by performing image transformation, applying image filters or kernels, such as shifting or changing the image properties, to align the real time images with the requirements of the processing unit 120. The processing unit 120 is configured to detect the plurality of faces in the received real time image. In an embodiment, the plurality of faces are detected based on a face detection technique by micropattern matching using an Artificial Intelligence module which includes deep machine learning and machine learning capabilities. The predefined functions performed by the processing unit 120 for determination of user authentication include determining a face of the primary user from the plurality of faces in the real time images based on one or more parameters of the face. In an embodiment, the processing unit 120 comprises a first module 122 configured to detect the face of the primary user from the plurality of faces. The one or more parameters of the face comprise a surface area of the plurality of faces in the real time image, and an angle of eye of the plurality of faces in the real time image with respect to the one or more image sensors 110. In one instance, the face having a combination of the surface area with the maximum number of pixels and the predefined angle with respect to the one or more image sensors is determined as the face of the primary user from the plurality of faces. Herein, the predefined angle is the angle of the face with respect to the image sensor, measured relative to the forward direction of the vehicle. In an embodiment, the predefined angle ranges between +90 degrees and -90 degrees. For example, the face having a pixel number equal to or greater than a predefined threshold number and represented using the RGB domain may be determined as the face of the primary user from the plurality of faces. In another example, the face having the maximum number of pixels in comparison with the other faces in the real time image may be determined as the face of the primary user from the plurality of faces. Thus, the processing unit 120 is configured not only to detect the plurality of faces in the received real time images, but also to determine a face of the primary user from the plurality of faces in the received real time images.
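The primary-face selection rule described above, maximum pixel surface area combined with an eye angle inside the +90 to -90 degree window, can be sketched in Python as follows. This is an illustrative sketch, not the claimed implementation; the `DetectedFace` structure and its field names are assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedFace:
    """A detected face: bounding box plus estimated eye angle (hypothetical fields)."""
    x: int
    y: int
    width: int
    height: int
    eye_angle_deg: float  # angle of the eyes relative to the image sensor

def pixel_area(face: DetectedFace) -> int:
    """Surface area of the face region, measured in pixels."""
    return face.width * face.height

def select_primary_face(faces: list[DetectedFace],
                        min_angle: float = -90.0,
                        max_angle: float = 90.0) -> Optional[DetectedFace]:
    """Pick the face with the largest pixel area whose eye angle lies within
    the predefined range, or None if no face qualifies."""
    candidates = [f for f in faces if min_angle <= f.eye_angle_deg <= max_angle]
    if not candidates:
        return None
    return max(candidates, key=pixel_area)
```

A face that is larger in the frame but turned outside the angle window is thus rejected in favour of a smaller, forward-facing one.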
[029] The processing unit 120 comprises a second module 124 and a third module 126. Herein, the second module 124 is configured to determine whether the face of the primary user is a real face based on histogram analysis of the real time images. In an embodiment, the second module 124 comprises an Artificial Intelligence module which extracts temporal histogram features from the face of the primary user to determine whether the face is a real face or a spoofed face. For instance, the histogram of a real face will vary significantly over time as compared to a spoofed face, which creates a distinction between the real face and the spoofed face. The second module 124 analyses one or more frames from the real time images or live feed to determine whether the face of the primary user is a real face. If the second module 124 determines that the face of the primary user is a real face, the said frame is analysed by the third module 126, which is configured to determine whether the face of the determined primary user is the face of the authorized user based on comparison of the face of the determined primary user to a database of facial recognition data for authorized users.
[030] In a further embodiment, the second module 124 has an AI model based on a deep machine learning architecture that combines a convolutional neural network and a recurrent neural network operating on histogram features. If the face is determined as a real face, the image is transmitted to the third module 126 to determine whether the determined real face belongs to an authorized user or a non-authorized user. The third module 126 is configured to access a pre-stored data set that uses a vector interpretation of the face and uses a classifier based on a deep neural network to classify the face as an authorized or a non-authorized user. If the user is determined as a non-authorized user, the third module 126 outputs a false flag or a non-authorized access flag. If the user is determined as an authorized user, the third module 126 issues a user ID for the said user as an output to the control module 130. Based on the user ID from the processing unit 120, the control module 130 performs the requisite vehicle operation. In an embodiment, the control module 130 is configured to lock/unlock a vehicle based on the input from the processing unit 120.
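The temporal-histogram intuition behind the second module 124, that a real face's intensity histogram varies over time far more than a static spoof's, can be illustrated with a simple heuristic. This sketch replaces the claimed CNN-plus-RNN model with a plain frame-to-frame histogram distance; the bin count and the liveness threshold are hypothetical values, not taken from the specification.

```python
import numpy as np

def frame_histogram(gray_frame: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized intensity histogram of a grayscale face crop."""
    hist, _ = np.histogram(gray_frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def temporal_histogram_variation(frames: list[np.ndarray]) -> float:
    """Mean L1 distance between histograms of consecutive frames.
    Live faces tend to show larger variation than static spoofs."""
    hists = [frame_histogram(f) for f in frames]
    diffs = [np.abs(h2 - h1).sum() for h1, h2 in zip(hists, hists[1:])]
    return float(np.mean(diffs)) if diffs else 0.0

def is_real_face(frames: list[np.ndarray], threshold: float = 0.05) -> bool:
    """Heuristic liveness check: flag the face as real when the temporal
    histogram variation exceeds a (hypothetical) threshold."""
    return temporal_histogram_variation(frames) > threshold
```

A printed photo held before the camera yields nearly identical histograms across frames, so its variation stays near zero and the check fails.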
[031] As further illustrated in the embodiment depicted in Figure 1, the system further comprises an illumination device 140 and an auxiliary sensor unit 150. The auxiliary sensor unit 150 is in communication with the processing unit 120. The auxiliary sensor unit 150 is configured to detect a level of ambient light around the one or more image sensors 110. Correspondingly, the auxiliary sensor unit 150 or the processing unit 120 is configured to switch on the illumination device 140 or a vehicle lighting system if the ambient light is below a predetermined threshold value of ambient light or if the image capture is not clear. As a non-limiting example, the predetermined threshold value of ambient light lies in the range of 0-800 lux. For example, during riding conditions such as overcast conditions or night-time riding conditions when the ambient light is low, for example 200 lux, the auxiliary sensor unit 150 switches on the illumination device 140 or a vehicle lighting system, or increases the brightness of the instrument cluster, to increase the ambient light around the user and ensure that the one or more real time images are captured appropriately.
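The ambient-light behaviour of the auxiliary sensor unit 150 reduces to a threshold comparison, sketched below. The 800 lux cutoff follows the non-limiting 0-800 lux range given above; the class and method names are illustrative assumptions.

```python
class AmbientLightController:
    """Minimal sketch of the auxiliary sensor unit's lighting decision:
    read the ambient light level and decide whether the illuminator is on."""

    def __init__(self, threshold_lux: float = 800.0):
        self.threshold_lux = threshold_lux
        self.illuminator_on = False

    def update(self, ambient_lux: float) -> bool:
        """Switch the illuminator on below the threshold, off otherwise.
        Returns the resulting illuminator state."""
        self.illuminator_on = ambient_lux < self.threshold_lux
        return self.illuminator_on
```

So a reading of 200 lux (the overcast example above) switches the illuminator on, while bright daylight leaves it off.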
[032] As further illustrated in the embodiment depicted in Figure 1, the auxiliary sensor unit 150 is configured to detect one or more vehicle parameters. The processing unit 120 receives the one or more vehicle parameters from the auxiliary sensor unit 150 and is configured to determine whether the one or more vehicle parameters are below a first predetermined threshold. If the processing unit 120 determines that the one or more vehicle parameters are below the predetermined threshold, the processing unit 120 is configured to switch off the one or more image sensors 110 or the illumination device 140, or to switch off the system 100. In an embodiment, the one or more vehicle parameters comprise a State of Charge (SOC) of a battery of the vehicle, and the first predetermined threshold of the SOC of the battery is 0%-30% of the maximum battery charge. If, based on the input from the auxiliary sensor unit 150, the processing unit 120 determines that the SOC of the battery is below this threshold, the processing unit 120 switches off the system 100, or switches off the one or more image sensors 110 or the illumination device 140. Such switching off of the system 100, or of the one or more image sensors 110 and the illumination device 140, prevents deep discharging of the battery.
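The battery-protection logic can be sketched the same way, as a cutoff check on the State of Charge. The 30% cutoff is taken from the upper end of the 0%-30% range mentioned above; the function name is an assumption.

```python
def should_power_down(soc_percent: float,
                      soc_cutoff_percent: float = 30.0) -> bool:
    """Decide whether to switch off the image sensors, the illuminator,
    or the whole system, to prevent deep discharge of the battery.
    The 30% cutoff is illustrative, from the 0-30% range in the text."""
    return soc_percent < soc_cutoff_percent
```

A vehicle at 15% SOC would therefore shed the recognition system's load, while one at 80% keeps it running.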
[033] In another aspect, the present invention provides a method 200 for recognizing a user. Figure 2 illustrates the method steps involved in the method 200 for recognizing a user. At step 202, the one or more image sensors 110 are activated. The one or more image sensors 110 are activated as soon as the vehicle ignition is switched on and remain activated during vehicle riding conditions. At step 204, real time images of the user of the vehicle are captured by the one or more image sensors 110. Thereafter, the real time images of the user of the vehicle captured by the one or more image sensors 110 are received by the processing unit 120. In essence, a series of real time images, or video feed or live feed of the user riding the vehicle is captured by the one or more image sensors 110 and received by the processing unit 120. The real time images, or video feed or live feed are a series of individual image frames, which can be analysed for recognizing the user.
[034] After step 204, a set of predefined functions is performed for determination of user authentication, as follows. At step 206, the processing unit 120 determines a plurality of faces from the real time images. At step 208, the processing unit 120 determines a face of the primary user from the plurality of faces in the real time images based on one or more parameters of the face. At step 210, the processing unit 120 grabs a particular real time image or a frame from the video stream. Subsequently, the processing unit 120 processes the particular real time image or the said frame by performing image transformation, applying image filters or kernels, such as shifting or changing the image properties, to align the real time images with the requirements of the processing unit 120. At step 212, the processing unit 120 determines whether the face of the primary user is a real face based on histogram analysis of the real time images. If at step 212 it is determined that the face is a real face, the method moves to step 214. At step 214, the processing unit 120 determines whether the face of the determined primary user is the face of the authorized user based on comparison of the face of the determined primary user to a database of facial recognition data for authorized users. If at step 214 it is determined that the user is an authorized user, the processing unit issues a user ID to the authorized user as an output.
[035] If at step 212, it is determined that the face of the primary user is not a real face, the method moves to step 204. If at step 214, it is determined that the face of the determined primary user is not the face of the authorized user, the processing unit 120 outputs a false flag or a non-authorized access flag and the method 200 reverts to step 208.
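The overall control flow of method 200 (steps 204-214) can be sketched with the sensor and the processing modules abstracted as callables. This simplifies the specification slightly: on an authorization failure it recaptures rather than reverting to step 208, and the `max_attempts` bound is an assumption added so the sketch terminates.

```python
from typing import Callable, Optional

def recognize_user(capture: Callable[[], object],
                   detect_faces: Callable[[object], list],
                   select_primary: Callable[[list], Optional[object]],
                   is_real: Callable[[object], bool],
                   lookup_user_id: Callable[[object], Optional[str]],
                   max_attempts: int = 5) -> Optional[str]:
    """Control-flow sketch of method 200: the callables stand in for the
    image sensor and the three processing modules. Returns the user ID on
    success, or None after max_attempts failed rounds."""
    for _ in range(max_attempts):
        frame = capture()                  # step 204: capture a frame
        faces = detect_faces(frame)        # step 206: find all faces
        primary = select_primary(faces)    # step 208: pick the primary face
        if primary is None or not is_real(primary):
            continue                       # step 212 failed: recapture
        user_id = lookup_user_id(primary)  # step 214: authorization check
        if user_id is not None:
            return user_id                 # authorized: issue user ID
    return None                            # no authorized user recognized
```

With stub callables, a first spoofed frame is rejected and a later genuine frame yields the user ID.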
[036] In an embodiment, the method 200 has the step of determining, by a first module 122 of the processing unit 120, a face of a primary user from the plurality of faces in the real time images based on one or more parameters of the face. The one or more parameters of the face comprise a surface area of the plurality of faces in the real time images, and an angle of eye of the plurality of faces in the real time images with respect to the one or more image sensors 110. In an embodiment, the predefined angle ranges between +90 degrees and -90 degrees. In an embodiment as depicted in Figure 3, the method 200 comprises the step of determining, by a second module 124 of the processing unit 120, whether the determined face of the primary user is a real face based on histogram analysis of the real time images. The method 200 further has the step of determining, by a third module 126 of the processing unit 120, whether the face of the determined primary user is the face of the authorized user based on comparison of the face of the determined primary user to a database of facial recognition data for authorized users. In an embodiment, a face of the primary user from the plurality of faces in the processed real time image is determined based on one or more parameters of the face. One particular image or frame from the real time images or live feed is first processed for noise reduction and image tuning. Subsequently, the second module 124 determines whether the face of the primary user is a real face based on histogram analysis of the real time images. If it is determined that the face of the primary user is a real face, the third module 126 determines whether the face of the determined primary user is the face of the authorized user based on comparison of the face of the determined primary user to a database of facial recognition data for authorized users.
[037] As illustrated in Figure 3, at step 302, the processing unit 120 detects a plurality of faces in the real time image. At step 304, the processing unit 120 determines one or more parameters of the face, wherein the one or more parameters comprise a surface area of the plurality of faces in the real time images and an angle of eye of the plurality of faces in the real time images with respect to the one or more image sensors 110. At step 306, the first module of the processing unit 120 determines a face of the primary user from the plurality of faces based on the determined one or more parameters of the face. Thereafter, at step 308, the second module of the processing unit 120 applies histogram analysis to distinguish a spoofed face from a real one. At step 310, the processing unit 120 determines whether the determined face of the primary user is a real face based on the histogram analysis of the real time images. If at step 310 it is determined that the face is a real face, the method moves to step 312. Thereafter, at step 312, the third module 126 of the processing unit 120 compares the face of the determined primary user to a database of facial recognition data for authorized users. Thereafter, at step 314, the processing unit 120 determines whether the face of the determined primary user is the face of the authorized user based on the comparison analysis at step 312.
[038] If at step 310, it is determined that the face of the primary user is not a real face, the method moves to step 208. If at step 314, it is determined that the face of the determined primary user is not the face of the authorized user, the processing unit 120 outputs a false flag or a non-authorized access flag and the method 200 reverts to step 208.
[039] In another aspect, the present invention relates to a method for training modules for recognizing a user. The modules are equipped with artificial intelligence having machine learning and deep machine learning capabilities. Figure 4A illustrates the method steps 400 for training the second module 124 for determining whether the face of the primary user is a real face or a spoofed face. At step 402, the real time images are captured. At step 404, the faces are temporally labelled, and at step 406, the artificial intelligence module of the second module 124 is trained to determine whether the face of the primary user is a real face or a spoofed face. At step 408, the second module 124 is evaluated, and at step 410, the final second module 124 for deployment is obtained.
[040] Similarly, Figure 4B illustrates the method steps 500 for training the third module 126 for determining whether the primary user is an authorized user. At step 502, real time images are captured. At step 504, embedding point extraction is performed on the faces, such as extraction of features of the face using FaceNet. At step 506, the cosine similarity between the captured primary face and the faces in the database is analysed. At step 508, a classifier module of the third module 126 is trained based on the cosine similarity to determine an authorized or a non-authorized user. At step 510, the third module 126 is evaluated, and at step 512, the final third module 126 for deployment is obtained.
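The cosine-similarity comparison at step 506 can be sketched as follows. The 0.8 match threshold is a hypothetical value, and the database is simplified to one reference embedding per authorized user; the specification's trained classifier is replaced here by a plain nearest-match rule.

```python
from typing import Optional
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def match_authorized_user(embedding: np.ndarray,
                          database: dict[str, np.ndarray],
                          threshold: float = 0.8) -> Optional[str]:
    """Compare a captured face embedding against the database of authorized
    users; return the best-matching user ID if its similarity clears the
    (hypothetical) threshold, else None."""
    best_id, best_sim = None, threshold
    for user_id, reference in database.items():
        sim = cosine_similarity(embedding, reference)
        if sim >= best_sim:
            best_id, best_sim = user_id, sim
    return best_id
```

An embedding close in direction to a stored reference matches; one roughly equidistant from all references falls below a strict threshold and is rejected.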
[041] Figure 5 illustrates a process flow of the present invention in accordance with an embodiment of the invention. In operation, for example, the one or more image sensors 110 capture real time images of the user as a video stream, which is converted to a video input and encoded for further processing. Thereafter, the processing unit 120 breaks down the encoded video input into a plurality of frames through a frame grabber functionality, out of which one or more frames are selected for further processing. Each of the selected frames is then passed through image processing. Thereafter, the processing unit 120 detects a plurality of faces in the real time images through a face detection functionality. Thereafter, the processing unit 120 determines the face of the primary user from the plurality of faces based on the one or more parameters of the face. Thereafter, the processing unit 120 determines, through the second module 124, whether the face of the primary user is a real face based on the histogram analysis. Thereafter, the processing unit 120 determines, through the third module 126, whether the primary user is an authorized user, if the face of the primary user is a real face. Based on these determinations, the processing unit 120 generates a suitable output.
[042] Figure 6 illustrates the software architecture in relation to the present invention. As illustrated in Figure 6, the software architecture has a vision processing unit 170. The vision processing unit 170 is operatively coupled to a plurality of microservices 172, namely microservice 1, microservice 2 and microservice 3. Microservices 172 are small independent services, for example Application Program Interfaces for detection of the primary face, a spoofed face, an authorized user, and so on. Microservice 1 relates to the capturing of the real time images using a hardware 178 such as the one or more image sensors 110. In operation, microservice 1 receives the real time images from the hardware 178 through a hardware abstraction layer 176 and an operating system 174 and communicates the real time images to the vision processing unit 170. The first module 122, the second module 124 and the third module 126, coupled with the vision processing unit 170, determine whether the primary user is an authorized user, if the face of the primary user is a real face. A debug module 180 is provided for any debugging and error correction in the software architecture. Similarly, microservice 2 and microservice 3 relate to the detection of ambient light, wherein microservice 2 and microservice 3 receive input from the relevant hardware, namely the illumination sensor unit 140, through the operating system and the hardware abstraction layer, to be sent to the vision processing unit for detection of ambient light. Based on the detection of the ambient light, the processing unit 120 determines whether to switch on the illuminator.
[043] Advantageously, the present invention provides a system and method that effectively minimize the dependency on hardware, providing a highly flexible and adaptable solution. By reducing interdependency on hardware, the system becomes compatible with a wider range of vehicles, office spaces, and other environments where face detection is required. The present invention addresses the limitations of existing face recognition systems by providing a foolproof solution and ensuring accurate, efficient, and reliable face recognition at all times. This significantly enhances the security and prevents unauthorized access to the system. The present invention accurately detects and distinguishes between live faces and static images or videos, thereby countering spoofing attempts effectively.
[044] The present invention accurately identifies the primary user’s face even in scenarios where multiple faces are present around the vehicle. This capability enhances security and prevents unauthorized access by ensuring that only the intended user is granted permission.
[045] Furthermore, the present invention is equipped with AI modules that enable the system to detect the user’s intention to use the face recognition system. By analysing various cues and contextual factors, such as user behaviour and proximity, the system can accurately determine whether the user intends to authorize themselves or is merely in the vicinity of the system. This feature adds an additional layer of security and significantly reduces the risk of granting access to unauthorized individuals.
[046] Moreover, the present invention provides a method for processing the live image that ensures high efficiency with minimal inference time, reduced complexity, and the ability to skip subsequent steps if the previous step is deemed non-compliant.
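One way to picture the histogram analysis used on the live image is a simple dynamic-range heuristic: a real face under ambient light tends to span a wider range of intensities than a flat print or screen replay. The sketch below is an illustrative assumption, not the claimed analysis; the 8-bit grayscale input and the `min_spread` threshold are invented for the example.

```python
from collections import Counter

def histogram(pixels):
    """256-bin histogram over 8-bit grayscale pixel values."""
    counts = Counter(pixels)
    return [counts.get(v, 0) for v in range(256)]

def looks_live(pixels, min_spread=128):
    """Crude liveness cue: flag the patch as live only if its occupied
    intensity range is wide enough."""
    hist = histogram(pixels)
    occupied = [v for v, c in enumerate(hist) if c > 0]
    spread = occupied[-1] - occupied[0] if occupied else 0
    return spread >= min_spread

live_patch = list(range(0, 256, 2))      # wide dynamic range
flat_patch = [100, 101, 102, 103] * 64   # narrow range, e.g. a matte print
print(looks_live(live_patch))  # True
print(looks_live(flat_patch))  # False
```

A production system would use a far richer analysis, but the skip-on-failure property described above applies the same way: a patch failing this check never reaches the authorization stage.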
[047] The present invention provides an enhanced face recognition system that is robust against spoofing attempts, capable of accurately distinguishing between live faces and static images or videos, and able to reliably identify the primary user even in the presence of multiple faces. The present invention thereby significantly enhances the accessibility, security, and functionality of face recognition systems in two-wheeled vehicles or office spaces.
[048] In light of the abovementioned advantages and the technical advancements provided by the disclosed system and method, the claimed steps as discussed above are not routine, conventional, or well understood in the art, as the claimed steps enable solutions to the existing problems in conventional technologies. Further, the claimed steps clearly bring an improvement in the functioning of the system itself, as the claimed steps provide a technical solution to a technical problem.
[049] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, non-volatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[050] While the present invention has been described with respect to certain embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the scope of the invention as defined in the following claims.

List of Reference Numerals
100: System for Recognizing a User

110: One or more image sensors
120: Processing Unit
122: First Module
124: Second Module
126: Third Module
130: Control Module
140: Illumination Sensor
150: Auxiliary Sensor Unit
170: Vision Processing Unit
172: Microservices
174: Operating System
176: Hardware Abstraction Layer
178: Hardware
180: Debug Module
200, 300: Method for Recognizing a User
400, 500: Method for training modules
Claims:
WE CLAIM:
1. A system (100) for recognizing a user, the system (100) comprising:
one or more image sensors (110), the one or more image sensors (110) being configured to capture real time images of a vicinity of the one or more image sensors (110); and
a processing unit (120), the processing unit (120) comprising one or more processing modules being configured to receive the real time images from the one or more image sensors (110) and, the one or more processing modules being configured to determine a face of an authorised primary user from a plurality of faces in the real time images based on a set of predefined functions.

2. The system (100) as claimed in claim 1, wherein the one or more processing modules are configured to:
determine the plurality of faces from the real time images;
determine a face of the primary user from the plurality of faces;
determine whether the face of the primary user is a real face or a spoofed face; and
determine whether the primary user is an authorized user, if the face of the primary user is a real face.

3. The system (100) as claimed in claim 1, wherein the set of predefined functions comprise one or more of a surface area comparison function, an eye angle comparison function, a histogram analysis function and a facial comparison function.

4. The system (100) as claimed in claim 1, wherein the processing unit (120), comprises a first module (122), the first module (122) being configured to:
determine the face of the primary user from the plurality of faces in the real time images based on one or more parameters of the face.

5. The system (100) as claimed in claim 4, wherein the processing unit (120) is configured to:
determine a plurality of faces in the received real time images; and
determine the face of the primary user based on the one or more parameters of the face, wherein the one or more parameters of the face comprise a surface area of the plurality of faces in the real time images, and an angle of eye of the plurality of faces in the real time images with respect to the one or more image sensors (110).

6. The system (100) as claimed in claim 5, wherein the face having a combination of a surface area with the maximum number of pixels and a predefined angle with respect to the one or more image sensors (110) is determined as the face of the primary user from the plurality of faces.

7. The system (100) as claimed in claim 4, wherein the processing unit (120) comprises a second module (124), the second module (124) being configured to determine whether the determined face of the primary user is a real face based on histogram analysis of the real time images.

8. The system (100) as claimed in claim 4, wherein the processing unit (120) comprises a third module (126), the third module (126) being configured to determine whether the face of the determined primary user is the face of the authorized user based on comparison of the face of the determined primary user to a database of facial recognition data for authorized users.

9. The system (100) as claimed in claim 1, comprising an illumination device (140) and an auxiliary sensor unit (150), the auxiliary sensor unit (150) being in communication with the processing unit (120) and being configured to detect a level of ambient light around the one or more image sensors (110), and at least one of the auxiliary sensor unit (150) and the processing unit (120) being configured to switch on at least one of the illumination device (140) and a vehicle lighting system if the ambient light is below a predetermined threshold value of ambient light.
10. The system (100) as claimed in claim 8, wherein the auxiliary sensor unit (150) is configured to:
detect one or more vehicle parameters;
determine whether the one or more vehicle parameters are below a first predetermined threshold; and
the processing unit (120) being configured to:
switch off at least one of the one or more image sensors (110) and the illumination device (140) or switch off the system (100), if the one or more vehicle parameters are below the first predetermined threshold.

11. The system (100) as claimed in claim 7, wherein the processing unit (120) comprises a vision processing unit (170) in communication with the first module (122), the second module (124) and the third module (126), the vision processing unit (170) being configured to receive inputs from a hardware (178) through an operating system (174).

12. A method (200) for recognizing a user, the method (200) comprising the steps of:
capturing, by one or more image sensors (110), real time images of a vicinity of the one or more image sensors (110);
receiving, by a processing unit (120) comprising one or more processing modules, the real time images from the one or more image sensors (110); and
determining, by the one or more processing modules of the processing unit (120), a face of an authorised primary user from a plurality of faces in the real time images based on a set of predefined functions.

13. The method (200) as claimed in claim 12, comprising the steps of:
determining, by the one or more processing modules, the plurality of faces from the real time images;
determining, by the one or more processing modules, a face of a primary user from the plurality of faces in the real time images;
determining, by the one or more processing modules, whether the face of the primary user is a real face or a spoofed face; and
determining, by the one or more processing modules, whether the primary user is an authorized user, if the face of the primary user is a real face.

14. The method (200) as claimed in claim 12, wherein the set of predefined functions comprise one or more of a surface area comparison function, an eye angle comparison function, a histogram analysis function and a facial comparison function.

15. The method (200) as claimed in claim 12, comprising the steps of:
determining, by a first module (122) of the processing unit (120), a face of a primary user from the plurality of faces in the real time images based on one or more parameters of the face.

16. The method (200) as claimed in claim 15, having the steps of:
determining, by the processing unit (120), a plurality of faces in the received real time images; and
determining, by the processing unit (120), the face of the primary user based on the one or more parameters of the face, wherein the one or more parameters of the face comprise a surface area of the plurality of faces in the real time images, and an angle of eye of the plurality of faces in the real time images with respect to the one or more image sensors (110).

17. The method (200) as claimed in claim 15, wherein the face having a combination of a surface area with the maximum number of pixels and a predefined angle with respect to the one or more image sensors (110) is determined as the face of the primary user from the plurality of faces.

18. The method (200) as claimed in claim 14, comprising the steps of:

determining, by a second module (124) of the processing unit (120), whether the determined face of the primary user is a real face based on histogram analysis of the real time images; and
determining, by a third module (126) of the processing unit (120), whether the face of the determined primary user is the face of the authorized user based on comparison of the face of the determined primary user to a database of facial recognition data for authorized users.

19. The method (200) as claimed in claim 14, comprising the steps of:
switching on, by at least one of the auxiliary sensor unit (150) and the processing unit (120), at least one of the illumination device (140) and a vehicle lighting system if the ambient light is below a predetermined threshold value of ambient light.

Dated this 20th day of July 2023

TVS MOTOR COMPANY LIMITED
By their Agent & Attorney

(Nikhil Ranjan)
of Khaitan & Co
Reg No IN/PA-1471

Documents

Application Documents

# Name Date
1 202341049102-STATEMENT OF UNDERTAKING (FORM 3) [20-07-2023(online)].pdf 2023-07-20
2 202341049102-REQUEST FOR EXAMINATION (FORM-18) [20-07-2023(online)].pdf 2023-07-20
3 202341049102-PROOF OF RIGHT [20-07-2023(online)].pdf 2023-07-20
4 202341049102-POWER OF AUTHORITY [20-07-2023(online)].pdf 2023-07-20
5 202341049102-FORM 18 [20-07-2023(online)].pdf 2023-07-20
6 202341049102-FORM 1 [20-07-2023(online)].pdf 2023-07-20
7 202341049102-FIGURE OF ABSTRACT [20-07-2023(online)].pdf 2023-07-20
8 202341049102-DRAWINGS [20-07-2023(online)].pdf 2023-07-20
9 202341049102-DECLARATION OF INVENTORSHIP (FORM 5) [20-07-2023(online)].pdf 2023-07-20
10 202341049102-COMPLETE SPECIFICATION [20-07-2023(online)].pdf 2023-07-20
11 202341049102-Request Letter-Correspondence [15-05-2024(online)].pdf 2024-05-15
12 202341049102-Power of Attorney [15-05-2024(online)].pdf 2024-05-15
13 202341049102-Form 1 (Submitted on date of filing) [15-05-2024(online)].pdf 2024-05-15
14 202341049102-Covering Letter [15-05-2024(online)].pdf 2024-05-15
15 202341049102-Request Letter-Correspondence [30-05-2024(online)].pdf 2024-05-30
16 202341049102-Power of Attorney [30-05-2024(online)].pdf 2024-05-30
17 202341049102-Form 1 (Submitted on date of filing) [30-05-2024(online)].pdf 2024-05-30
18 202341049102-Covering Letter [30-05-2024(online)].pdf 2024-05-30