Abstract: The present disclosure provides a system (106) and a method for identifying spoof attack in a plurality of images. The system (106) generates a target physical object. The system (106) generates a set of synthetic face spoof images from the target physical object. The system (106) trains a Convolutional Neural Network (CNN) model based on the set of synthetic face spoof images, and identifies, via the trained CNN model, spoof attack in a plurality of images.
DESC:RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The embodiments of the present disclosure generally relate to the field of face recognition systems, and specifically to a system and a method for generating synthetic facial spoof print data that prevents spoof attacks and helps maintain the security of deployed recognition systems.
BACKGROUND OF INVENTION
[0003] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0004] Face Anti-Spoofing (FAS) has been vastly utilized for securing face recognition systems from Presentation Attacks (PAs). As increasingly realistic PAs of novel types spring up, traditional FAS methods based on handcrafted features are becoming unreliable due to their limited representation capacity. With the emergence of large-scale academic datasets in the recent decade, deep learning-based FAS provides remarkable performance and is predominantly used against PAs. However, existing methods mainly rely on handcrafted features, which are outdated and inefficient in handling the various challenges involved.
[0005] Biometrics utilize physiological features, such as fingerprint, face, and iris, or behavioural characteristics, such as typing rhythm and gait, to uniquely identify or authenticate an individual. As biometric systems are widely used in real-world applications including mobile phone authentication and access control, biometric spoofs or PAs pose a larger threat, wherein a spoofed biometric sample is presented to the biometric system in an attempt to be authenticated. As the face is the most accessible biometric modality, there have been many different types of PAs for faces including print attacks, replay attacks, three-dimensional (3D) masks, etc. As a result, conventional face recognition systems may be very vulnerable to such PAs.
[0006] Further, the vulnerability of face recognition systems to PAs (also known as direct attacks or spoof attacks) has received a great deal of interest from the biometric community. The rapid evolution of face recognition systems into real-time applications has raised new concerns about their ability to resist PAs, particularly in unattended application scenarios such as automated border control. The goal of a presentation attack is to subvert the face recognition system by presenting a facial biometric artifact. Popular face biometric artifacts include a printed photo, an electronic display of a facial photo, replaying video using an electronic display, and Three-Dimensional (3D) face masks. Such artifacts have demonstrated a high security risk for state-of-the-art face recognition systems.
[0007] In real-world cases, spoofing faces are presented via physical spoofing carriers (e.g., paper, glass screens, and resin masks), which have obvious differences in material properties from human facial skin. Such differences may be explicitly described as human-defined cues (e.g., remote photoplethysmography (rPPG), depth, and reflection). Further, the differences may also be implicitly learned, in a manner analogous to human material perception, from the material properties and structural uniqueness of live facial skin.
[0008] Conventionally, systems and methods focus on an invasive way of spoof detection by presenting challenges and verifying against pre-stored or additional sensor data. The conventional systems may perform spoof detection by observing facial responses over a set of presented challenges, and by observing face signatures for a sequence of one or more position requests based on pitch and yaw movements. Further, the conventional systems may identify face spoofs based on material properties of an imaging object used to create the spoof. However, the conventional systems may not generate synthetic data for spoofs, nor perform spoof detection from a single static image.
[0009] There is, therefore, a need in the art to provide an improved system and a method to generate synthetic data for spoof detection by overcoming the deficiencies of the prior art(s).
OBJECTS OF THE INVENTION
[0010] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are listed herein below.
[0011] It is an object of the present disclosure to provide a system and a method to generate synthetic facial spoof print and replay attack data utilizing material properties of print and digital surfaces.
[0012] It is an object of the present disclosure to provide a system and a method that utilizes a low latency-based Convolutional Neural Network (CNN) based architecture to identify paper, mobile based print, and replay spoof attacks.
[0013] It is an object of the present disclosure to provide a system and a method that utilizes a face anti-spoofing technique to protect face recognition systems from Presentation Attacks (PAs).
[0014] It is an object of the present disclosure to provide a system and a method that limits unintended access to resources and prevents damage of business revenues.
SUMMARY
[0015] This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0016] In an aspect, the present disclosure relates to a system for identifying spoof attack in a plurality of images. The system includes one or more processors, and a memory operatively coupled to the one or more processors. The memory includes processor-executable instructions, which on execution, cause the one or more processors to generate a target physical object based on a plurality of parameters, generate a set of synthetic face spoof images from the target physical object, train a Convolutional Neural Network (CNN) model based on the set of synthetic face spoof images, and identify, via the trained CNN model, the spoof attack in the plurality of images.
[0017] In an embodiment, the one or more processors may generate the target physical object by being configured to randomly input at least one real image of a plurality of real images into at least one layout generator, and fuse the at least one real image with one of assets templates available in the at least one layout generator.
[0018] In an embodiment, the plurality of parameters may include at least one of an Identity (ID) card, a paper print-out, a news article cutting, a pamphlet, and a print media.
[0019] In an embodiment, the one or more processors may generate the set of synthetic face spoof images from the target physical object by being configured to identify a type of the target physical object, apply texture and material properties to the target physical object based on the type, and apply different variations of geometry, lighting, and background scene to the target physical object.
[0020] In an embodiment, the one or more processors may generate the set of synthetic face spoof images by simulating the target physical object and the plurality of real images.
[0021] In an embodiment, the one or more processors may identify, via the trained CNN model, the spoof attack in the plurality of images by being configured to classify print face spoofs and digital face spoofs from each image of the plurality of images.
[0022] In another aspect, the present disclosure relates to a method for identifying spoof attack in a plurality of images. The method includes generating, by one or more processors associated with a system, a target physical object based on a plurality of parameters. The method includes generating, by the one or more processors, a set of synthetic face spoof images from the target physical object. The method includes training, by the one or more processors, a CNN model based on the set of synthetic face spoof images. The method includes identifying, by the one or more processors, via the trained CNN model, the spoof attack in the plurality of images.
[0023] In an embodiment, generating, by the one or more processors, the target physical object may include randomly inputting, by the one or more processors, at least one real image of a plurality of real images into at least one layout generator, and fusing, by the one or more processors, the at least one real image with one of assets templates available in the at least one layout generator.
[0024] In an embodiment, generating, by the one or more processors, the set of synthetic face spoof images from the target physical object may include identifying, by the one or more processors, a type of the target physical object, applying, by the one or more processors, texture and material properties to the target physical object based on the type, and applying, by the one or more processors, different variations of geometry, lighting, and background scene to the target physical object.
[0025] In an embodiment, generating, by the one or more processors, the set of synthetic face spoof images may include simulating, by the one or more processors, the target physical object and the plurality of real images.
[0026] In an embodiment, identifying, by the one or more processors, via the trained CNN model, the spoof attack in the plurality of images may include classifying, by the one or more processors, print face spoofs and digital face spoofs from each image of the plurality of images.
BRIEF DESCRIPTION OF DRAWINGS
[0027] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0028] FIG. 1 illustrates an exemplary network architecture (100) for implementing a proposed system, in accordance with an embodiment of the present disclosure.
[0029] FIG. 2 illustrates an exemplary block diagram (200) of a system for identifying spoof attack in a plurality of images, in accordance with an embodiment of the present disclosure.
[0030] FIG. 3 illustrates an exemplary block diagram of a random face asset generator (300) of a face spoof synthetics engine associated with a proposed system, in accordance with an embodiment of the present disclosure.
[0031] FIG. 4 illustrates an exemplary block diagram of a face spoof simulator (400) of a face spoof synthetics engine associated with a proposed system, in accordance with an embodiment of the present disclosure.
[0032] FIGs. 5A to 5D illustrate exemplary representations of a spoof detector (500) associated with a proposed system, in accordance with an embodiment of the present disclosure.
[0033] FIG. 6 illustrates an exemplary computer system (600) in which or with which embodiments of the present disclosure may be utilized, in accordance with embodiments of the present disclosure.
[0034] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0035] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0036] The ensuing description provides exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0037] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[0038] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0039] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0040] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0041] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0042] The present disclosure provides a system and a method of data synthetics of face spoofs by determining physical and material properties of paper, glass, and prints. The system and the method may identify spoof attacks in a plurality of images via a Convolutional Neural Network (CNN) model. The system may generate face spoof images by simulating a target physical object and a plurality of real images captured by a camera under the constraints of the natural properties of the material.
[0043] The system may generate the target physical object that may be used as a spoof medium. The system may generate a set of synthetic face spoof images from the target physical object. The system may input the set of synthetic face spoof images into the CNN model, and train the CNN model based on the set of synthetic face spoof images. The system may identify, via the trained CNN model, spoof attacks in the plurality of images efficiently.
[0044] Various embodiments of the present disclosure will be explained in detail with reference to FIGs. 1-6.
[0045] FIG. 1 illustrates an exemplary network architecture (100) for implementing a proposed system (106), in accordance with an embodiment of the present disclosure.
[0046] As illustrated in FIG. 1, by way of example and not by way of limitation, the exemplary network architecture (100) may include a plurality of computing devices (102-1, 102-2…102-N), which may be individually referred to as the computing device (102) and collectively referred to as the computing devices (102). It may be appreciated that the computing device (102) may be interchangeably referred to as a user device, a client device, or a User Equipment (UE). The plurality of UEs (102) may include, but not be limited to, scanners such as cameras, webcams, scanning units, and the like.
[0047] In an embodiment, the UE (102) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the UE (102) may include, but is not limited to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users and/or entities, or any combination thereof.
[0048] A person of ordinary skill in the art will appreciate that the computing device, the user device, or the UE (102) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0049] In an embodiment, the user device or the UE (102) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smartphone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the UE (102) may include, but is not limited to, any electrical, electronic, electromechanical, or an equipment, or a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the UE (102) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from a user or the entity such as a touch pad, a touch enabled screen, an electronic pen, and the like.
[0050] A person of ordinary skill in the art will appreciate that the UE (102) may not be restricted to the mentioned devices and various other devices may be used.
[0051] In an exemplary embodiment, the UE (102) may communicate with the system (106) through a network (104). The network (104) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (104) may include, by way of example but not limitation, one or more of: a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a public-switched telephone network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, some combination thereof.
[0052] In an exemplary embodiment, the system (106) may be configured to generate a synthetic facial spoof print and replay attack image dataset by exploiting material properties of print and digital surfaces. It may be appreciated that the synthetic facial spoof print and replay attack images may be interchangeably referred to as synthetic face spoof images. The synthetic face spoof images may be utilized for training a deep neural network model, for example, a CNN model. The trained CNN model may identify paper- and mobile-based print and replay spoof attacks in the images more efficiently by classifying the print face spoofs and digital face spoofs from the images.
[0053] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0054] FIG. 2 illustrates an exemplary block diagram (200) of a system (106) for identifying spoof attack in a plurality of images, in accordance with an embodiment of the present disclosure.
[0055] In an embodiment, and as shown in FIG. 2, the system (106) may include one or more processors (202). The one or more processors (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processors (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (106). The memory (204) may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as an Erasable Programmable Read-Only Memory (EPROM), a flash memory, and the like.
[0056] In an embodiment, the system (106) may also include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication of the system (106) with various devices coupled to it. The interface(s) (206) may also provide a communication pathway for one or more components of the system (106). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (216).
[0057] In an embodiment, the processing engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples, described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the one or more processors (202) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (106) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (106) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by an electronic circuitry.
[0058] In an embodiment, the database (216) may comprise data that may be either stored or generated as a result of functionalities implemented by any of the components of the processors (202) or the processing engine(s) (208) or the system (106).
[0059] In an exemplary embodiment, the processing engine(s) (208) may include one or more engines selected from any of a face spoof synthetics engine (210), a spoof detector (212), and other engines (214). The other engines (214) may include, but are not limited to, an acquisition engine, a monitoring engine, and the like.
[0060] In an embodiment, the one or more processors (202) may, via the face spoof synthetics engine (210), generate a target physical object. The target physical object may be a spoof medium. The target physical object may be generated based on a plurality of parameters. The plurality of parameters may include, but not limited to, an Identity (ID) card, a paper print-out, a news article cutting, a pamphlet, and a print media.
[0061] The target physical object may be generated by randomly inputting at least one real image of a plurality of real images into a layout generator. The layout generator may include, but not limited to, an ID card layout generator, a phone screen layout generator, a print media layout generator, and a paper print layout generator. The at least one real image may be fused with one of assets templates available in an asset library of the at least one layout generator to generate the target physical object.
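By way of illustration only, the fusion of a real face image with an assets template may be sketched as follows. The function name, the fixed photo region, and the 8-bit RGB array layout are assumptions made for this example and are not limiting:

```python
import numpy as np

def fuse_face_into_template(face, template, region):
    """Paste a face crop into a layout template at a fixed region.

    face:     HxWx3 uint8 array (real face image)
    template: HxWx3 uint8 array (e.g., a blank ID-card asset)
    region:   (top, left, height, width) slot reserved for the photo
    """
    top, left, h, w = region
    out = template.copy()
    # Nearest-neighbour resize of the face to the slot size (illustrative only).
    rows = np.arange(h) * face.shape[0] // h
    cols = np.arange(w) * face.shape[1] // w
    out[top:top + h, left:left + w] = face[rows][:, cols]
    return out

rng = np.random.default_rng(0)
face = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
card = np.full((120, 200, 3), 230, dtype=np.uint8)   # plain card background
obj = fuse_face_into_template(face, card, (20, 10, 80, 60))
```

Here the face crop is resized by nearest-neighbour sampling into the template's photo slot; a production layout generator would additionally blend borders and match colour between the face and the template.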
[0062] In an embodiment, the one or more processors (202) may, via the face spoof synthetics engine (210), generate a set of synthetic face spoof images from the target physical object. The set of synthetic face spoof images may be generated from the target physical object by identifying a type of the target physical object, applying texture and material properties to the target physical object based on the type, and applying different variations of geometry, lighting, and background scene to the target physical object.
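By way of illustration only, the type-dependent application of texture and material properties may be organized as a dispatch table. The material functions below are crude, hypothetical stand-ins (gloss streaks for plastic, additive grain for paper, gamut compression for screens) rather than physically accurate models:

```python
import numpy as np

def plastic(img):
    # ID cards: glossy, specular-like streaks on every eighth row.
    out = img.astype(np.float32)
    out[::8] = np.clip(out[::8] * 1.3, 0, 255)
    return out.astype(np.uint8)

def paper(img):
    # Paper print-outs: additive grain approximating paper texture.
    noise = np.random.default_rng(1).normal(0, 6, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def screen(img):
    # Displays: slight colour-gamut compression with a black-level lift.
    return (img.astype(np.float32) * 0.9 + 12).astype(np.uint8)

# Hypothetical dispatch table keyed by the identified object type.
MATERIALS = {"id_card": plastic, "paper_print": paper, "phone_screen": screen}

def apply_material(obj, obj_type):
    return MATERIALS[obj_type](obj)
```

Identifying the type of the target physical object thus reduces to a dictionary lookup, after which the geometry, lighting, and background variations can be applied to the material-treated object.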
[0063] In an embodiment, the one or more processors (202) may, via the spoof detector (212), input the set of synthetic face spoof images into a CNN model. The CNN model may be trained based on the set of synthetic face spoof images.
[0064] In an embodiment, the one or more processors (202) may, via the spoof detector (212), identify spoof attacks in the plurality of images using the trained CNN model. The spoof attacks may be identified by classifying print face spoofs and digital face spoofs from each image of the plurality of images.
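By way of illustration only, the classification step may be sketched as a tiny convolutional forward pass implemented with NumPy. A deployed low-latency CNN would use learned weights and many layers; the random weights here merely demonstrate the conv, ReLU, pool, and softmax structure over three assumed classes (live, print spoof, digital spoof):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * x[i:i + h, j:j + w]
    return out

def classify(gray, kernels, weights):
    """Tiny CNN forward pass: conv -> ReLU -> global average pool -> softmax
    over three classes (live, print spoof, digital spoof)."""
    feats = np.array([np.maximum(conv2d(gray, k), 0).mean() for k in kernels])
    logits = weights @ feats
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return p / p.sum()

rng = np.random.default_rng(0)
kernels = rng.normal(size=(4, 3, 3))       # four 3x3 filters (untrained)
weights = rng.normal(size=(3, 4))          # 3-way classification head
probs = classify(rng.random((32, 32)), kernels, weights)
```

In practice the class probabilities come from weights fitted on the synthetic face spoof images, and the print/digital split follows from which spoof class receives the highest probability.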
[0065] Although FIG. 2 shows exemplary components of the system (106), in other embodiments, the system (106) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of the system (106) may perform functions described as being performed by one or more other components of the system (106).
[0066] FIG. 3 illustrates an exemplary block diagram of a random face asset generator (300) of a face spoof synthetics engine (210) associated with a proposed system (106), in accordance with an embodiment of the present disclosure.
[0067] In an embodiment, the face spoof synthetics engine (210 of FIG. 2) may include a random face asset generator (300). The random face asset generator (300) may be provided for generating a target physical object used as a spoof medium. The random face asset generator (300) may include a layout randomizer (312), and a layout generator or a layout factory (310) of commonly used spoof mediums such as ID cards, paper print-outs, news article cuttings, pamphlets, etc. One of the real images captured by a camera, i.e., a face image, may be input into one of the layout generators. The layout generators may include, but not be limited to, an ID card layout generator, a phone screen layout generator, a print media layout generator, and a paper print layout generator. Each specific type of the layout generator may fuse the face image with one of the assets templates available in the assets library of the layout generator to create a realistic-looking physical object that may be used for spoof simulation.
[0068] The assets library may include, but not be limited to, ID card assets (302), paper assets (304), screen assets (306), and print media assets (308). The print media assets (308) may include, but not be limited to, paper print-outs, news article cuttings, and pamphlets. The screen assets (306) may include, but not be limited to, phone, pad (tablet), television (TV), and liquid crystal display (LCD) screens. Further, the output from the layout factory (310) may be provided to the layout randomizer (312) for generating the target physical object, as shown in FIG. 3.
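By way of illustration only, the assets library and the layout randomizer (312) may be sketched as follows. The dictionary keys and template identifiers are hypothetical examples and not an exhaustive enumeration of the assets (302)-(308):

```python
import random

# Hypothetical assets library keyed by spoof-medium category; each entry
# names a template of the kind held by the layout factory (310).
ASSET_LIBRARY = {
    "id_card":     ["employee_badge", "national_id"],
    "paper":       ["a4_portrait", "a4_landscape"],
    "screen":      ["phone", "pad", "tv", "lcd_monitor"],
    "print_media": ["news_cutting", "pamphlet"],
}

def randomize_layout(rng):
    """Pick a random medium and template, as a layout randomizer (312) might."""
    medium = rng.choice(sorted(ASSET_LIBRARY))
    template = rng.choice(ASSET_LIBRARY[medium])
    return medium, template

rng = random.Random(42)
medium, template = randomize_layout(rng)
```

Seeding the random generator makes the sampled layouts reproducible, which is convenient when regenerating a synthetic dataset.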
[0069] FIG. 4 illustrates an exemplary block diagram of a face spoof simulator (400) of a face spoof synthetics engine (210) associated with a proposed system (106), in accordance with an embodiment of the present disclosure.
[0070] As illustrated in FIG. 4, the face spoof synthetics engine (210 of FIG. 2) may include a face spoof simulator (400). The face spoof simulator (400) may be utilized for generating a set of face spoof images from the target physical object that is generated by the random face asset generator (300) as described in FIG. 3. The face spoof simulator (400) may identify a type of the target physical object, and apply texture and material properties specific to that type. For example, an ID card object may be applied with a plastic-like texture and material property. Once the material properties are applied, different variations of geometry, lighting, and background scene may be applied to the target physical object for generating multiple face spoof images.
[0071] In an embodiment, the face spoof simulator (400) may receive the target physical object generated by the random face asset generator (300) through a layout generator (402), as shown in FIG. 4. The face spoof simulator (400) may include a materials property simulator (404) for application of texture and material properties. Further, a view renderer module (406) may receive the generated target physical object applied with specific texture and material properties. The view renderer module (406) may further enable application of geometry, lighting, and background scene through a view geometry module (408), a lighting setup module (410), and a world background module (412), respectively, thereby generating multiple spoof images (414).
[0072] As examples, the screen spoofs and ID spoofs may include textures and glass shading properties. The glass shading properties may include, but not be limited to, colour (printer and light emitting diode (LED) screen colour gamuts), roughness, Index of Refraction (IOR), normals, etc.
[0073] As examples, the print spoofs may include, but not be limited to, textures (paper textures), colours (printer colour gamuts), deformation, bloom, shading (volume shading, smooth shading etc.), and camera lighting.
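The simulator pipeline above — apply a per-type material property, then sweep geometry and lighting variations to render multiple spoof images — can be sketched as below. The gloss factors, gain values, and the use of a horizontal flip as the geometry variation are illustrative assumptions; the disclosure does not specify concrete parameter values.

```python
import itertools

# Hypothetical per-medium material factor (materials property simulator 404),
# e.g. slight gloss amplification for plastic ID cards and LED screens.
MATERIAL_GLOSS = {"id_card": 1.1, "screen": 1.2, "paper_print": 0.95}

def clamp(v):
    """Keep pixel values in the valid 8-bit range."""
    return max(0, min(255, int(v)))

def apply_material(obj, medium):
    """Scale pixel intensities by the medium's assumed gloss factor."""
    g = MATERIAL_GLOSS[medium]
    return [[clamp(px * g) for px in row] for row in obj]

def render_views(obj):
    """View renderer (406): sweep a geometry variation (flip, standing in
    for view geometry 408) and lighting gains (lighting setup 410)."""
    images = []
    for flip, gain in itertools.product([False, True], [0.8, 1.0, 1.2]):
        view = [row[::-1] for row in obj] if flip else [row[:] for row in obj]
        images.append([[clamp(px * gain) for px in row] for row in view])
    return images

obj = [[100, 120], [140, 160]]                       # toy target object
spoofs = render_views(apply_material(obj, "id_card"))  # 2 x 3 = 6 variants
```

A production renderer would instead vary camera pose, scene lighting, and world background in a 3-D scene, but each rendered output corresponds to one (geometry, lighting, background) combination exactly as in this Cartesian-product loop.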
[0074] FIGs. 5A to 5D illustrate exemplary representations of a spoof detector (500) associated with a proposed system (106), in accordance with an embodiment of the present disclosure.
[0075] As illustrated in FIGs. 5A to 5D, the spoof detector (500) may be a lightweight and wide CNN-based neural network architecture designed to classify print face spoofs and digital face spoofs from a single image or multiple images. A CNN model may focus on learning low-level feature descriptors to identify face spoof attacks in the single or multiple images.
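The disclosure characterizes the spoof detector (500) only as a lightweight, wide CNN that learns low-level feature descriptors. One such low-level block — a high-pass convolution (which responds to the fine print and screen texture that distinguishes spoofs), ReLU, global average pooling, and a sigmoid classifier head — can be sketched in pure Python. The kernel and head weights here are illustrative placeholders, not trained values from the disclosure.

```python
import math

# High-pass 3x3 kernel: responds to fine texture (halftone dots, moire),
# which low-level spoof descriptors typically exploit. Weights are
# hand-picked for illustration only.
KERNEL = [[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]

def conv2d(img, k):
    """Valid 3x3 convolution over a 2-D grayscale image, with ReLU."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - 2):
        row = []
        for c in range(w - 2):
            s = sum(img[r + i][c + j] * k[i][j]
                    for i in range(3) for j in range(3))
            row.append(max(0.0, s))  # ReLU activation
        out.append(row)
    return out

def spoof_score(img, weight=0.05, bias=-1.0):
    """Global-average-pool the feature map, then a sigmoid head
    mapping the pooled texture response to a spoof probability."""
    fmap = conv2d(img, KERNEL)
    pooled = sum(map(sum, fmap)) / (len(fmap) * len(fmap[0]))
    return 1.0 / (1.0 + math.exp(-(weight * pooled + bias)))

flat = [[100] * 5 for _ in range(5)]  # smooth region: weak texture response
noisy = [[(r * c * 37) % 255 for c in range(5)] for r in range(5)]  # busy texture
```

In a trained network the kernel and head weights would be learned from the synthetic spoof images, and many such filters would run in parallel (the "wide" aspect); this sketch only shows why a high-pass response separates flat genuine regions from texture-rich spoof surfaces.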
[0076] FIG. 6 illustrates an exemplary computer system (600) in which or with which embodiments of the present disclosure may be utilized, in accordance with embodiments of the present disclosure.
[0077] As shown in FIG. 6, the computer system (600) may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), a communication port(s) (660), and a processor (670). A person skilled in the art will appreciate that the computer system (600) may include more than one processor (670) and communication port (660). The processor (670) may include various modules associated with embodiments of the present disclosure. The communication port(s) (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (660) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system (600) connects.
[0078] In an embodiment, the main memory (630) may be a Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (640) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (670). The mass storage device (650) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0079] In an embodiment, the bus (620) may communicatively couple the processor(s) (670) with the other memory, storage, and communication blocks. The bus (620) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), USB, or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the computer system (600).
[0080] In another embodiment, operator and administrative interfaces, e.g., a display, keyboard, and cursor control device may also be coupled to the bus (620) to support direct operator interaction with the computer system (600). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (660). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (600) limit the scope of the present disclosure.
[0081] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE INVENTION
[0082] The present disclosure provides a system and a method to generate synthetic facial spoof print and replay attack data exploiting material properties of print and digital surfaces.
[0083] The present disclosure provides a system and a method that utilizes a low-latency convolutional neural network (CNN) based architecture to identify paper-based, mobile-based print, and replay spoof attacks efficiently.
[0084] The present disclosure provides a system and a method that utilizes a face anti-spoofing technique to protect face recognition systems from presentation attacks (PAs).
[0085] The present disclosure provides a system and a method that limits unintended access to resources and prevents loss of business revenue.
CLAIMS:
1. A system (106) for identifying spoof attack in a plurality of images, the system (106) comprising:
one or more processors (202); and
a memory (204) operatively coupled to the one or more processors (202), wherein the memory (204) comprises processor-executable instructions, which on execution, cause the one or more processors (202) to:
generate a target physical object based on a plurality of parameters;
generate a set of synthetic face spoof images from the target physical object;
train a Convolutional Neural Network (CNN) model based on the set of synthetic face spoof images; and
identify, via the trained CNN model, spoof attack in a plurality of images.
2. The system (106) as claimed in claim 1, wherein the one or more processors (202) are to generate the target physical object by being configured to:
randomly input at least one real image of a plurality of real images into at least one layout generator; and
fuse the at least one real image with one of assets templates available in the at least one layout generator.
3. The system (106) as claimed in claim 1, wherein the plurality of parameters comprises at least one of: an Identity (ID) card, a paper print-out, a news article cutting, a pamphlet, and a print media.
4. The system (106) as claimed in claim 1, wherein the one or more processors (202) are to generate the set of synthetic face spoof images from the target physical object by being configured to:
identify a type of the target physical object;
apply texture and material properties to the target physical object based on the type; and
apply different variations of geometry, lighting, and background scene to the target physical object.
5. The system (106) as claimed in claim 4, wherein the one or more processors (202) are to generate the set of synthetic face spoof images by being configured to simulate the target physical object and a plurality of real images.
6. The system (106) as claimed in claim 1, wherein the one or more processors (202) are to identify, via the trained CNN model, the spoof attack in the plurality of images by being configured to classify print face spoofs and digital face spoofs from each of the plurality of images.
7. A method for identifying spoof attack in a plurality of images, the method comprising:
generating, by one or more processors (202) associated with a system (106), a target physical object based on a plurality of parameters;
generating, by the one or more processors (202), a set of synthetic face spoof images from the target physical object;
training, by the one or more processors (202), a Convolutional Neural Network (CNN) model based on the set of synthetic face spoof images; and
identifying, by the one or more processors (202), via the trained CNN model, spoof attack in a plurality of images.
8. The method as claimed in claim 7, wherein generating, by the one or more processors (202), the target physical object comprises:
randomly inputting, by the one or more processors (202), at least one real image of a plurality of real images into at least one layout generator; and
fusing, by the one or more processors (202), the at least one real image with one of assets templates available in the at least one layout generator.
9. The method as claimed in claim 7, wherein generating, by the one or more processors (202), the set of synthetic face spoof images from the target physical object comprises:
identifying, by the one or more processors (202), a type of the target physical object;
applying, by the one or more processors (202), texture and material properties to the target physical object based on the type; and
applying, by the one or more processors (202), different variations of geometry, lighting, and background scene to the target physical object.
10. The method as claimed in claim 7, wherein generating, by the one or more processors (202), the set of synthetic face spoof images comprises simulating, by the one or more processors (202), the target physical object and a plurality of real images.
11. The method as claimed in claim 7, wherein identifying, by the one or more processors (202), via the trained CNN model, the spoof attack in the plurality of images comprises classifying, by the one or more processors (202), print face spoofs and digital face spoofs from each image of the plurality of images.