Abstract: In biometric technology, face recognition is among the most significant research areas. It is widely used in security services, smart cards, surveillance, social media, and ID verification. Countermeasures are steadily growing in number, and many systems have been developed to distinguish genuine access from fake attacks. In this invention, we propose a Convolutional Neural Network (CNN) that can learn fine distinctions in a supervised manner. Deep convolutional neural networks have driven a series of breakthroughs in image classification. This invention introduces various CNN architectures for detecting face spoofing using multiple convolutional layers. The proposed system uses the VGG-16 Convolutional Neural Network (CNN) architecture for learning feature classification. Our proposed system achieved accuracies of 98% for the Convolutional Neural Network (CNN), 63% for VGG-16, and 50% for the Support Vector Machine (SVM). 6 Claims & 1 Figure
Description: Field of Invention
This invention relates to face spoof detection. It introduces various CNN architectures for detecting face spoofing using multiple convolutional layers. The proposed system uses the VGG-16 Convolutional Neural Network (CNN) architecture for learning feature classification.
Objective of the Invention
Over the last several years, detecting face spoofing has proven to be an extremely difficult task. Despite extensive research into the problem, spoofing attacks remain a security risk for face biometric systems, and in spite of the task's rising profile in recent years, it remains largely unsolved. We believe deep neural networks trained with residual connections have a far brighter future in facial recognition. To address this problem, we propose using a CNN for image identification in this invention.
Background of the Invention
In CN2017/107423690B, the preferred embodiment of the invention provides a method for recognizing faces and a face recognition device. The method includes the steps of extracting Haar features from a current face image to be recognized, detecting the face region of that image with an AdaBoost classifier to obtain a face-region image, performing multi-scale feature extraction on the face-region image with a convolutional neural network model to obtain a feature, and finally obtaining the face-region result. The extracted features' strong resilience and generalization ability increase the efficiency and precision of facial recognition, which in turn strengthens the security of identity validation. In addition, CN2022/115775409A describes a human face image tamper-proof fusion recognition technique consisting of the steps of capturing a human face image, feeding it into a pre-built and trained tamper-proof fusion recognition model, and determining whether or not the image has been tampered with. The trained model first computes the hash-value similarity between each face image and the images in the tamper-proof fusion identification framework.
In CN2022/111695406B, the invention, which relates to the field of face recognition, describes an infrared-based face recognition anti-spoofing technique, platform, and interface, whereby a near-infrared camera captures a near-infrared photo of the user's face; image enhancement is performed on the captured near-infrared face image; and a face reliability classifier based on a convolutional neural network is applied. If the classifier outputs a fake face, the system determines that the user is mounting a spoofing attack, blocks the user from the next step of the face recognition process, and raises an alarm.
Most existing facial recognition frameworks have been shown to be vulnerable to spoofing. If an attacker uses a mask or a photo of another person to fool a biometric facial system, they have committed a spoofing attack. To determine whether a face image was captured from a real person or from a printed photograph or replayed video, face-spoofing detection is often used as a first step in face recognition frameworks. Thus, recognizing a fake face is a binary classification problem; this process is also known as face liveness detection. When assets are protected by a biometric verification framework, a spoofing attack involves using fake biometric traits to gain unauthorized access. This is an immediate threat to the practical value of a biometric framework, and the attacker needs no background knowledge of the verification process. With so much visual information now available in the form of videos and images, there is an urgent need for automated understanding and assessment of data from smart frameworks. In light of this, an automated face detection framework plays a crucial role in areas such as face recognition, facial expression recognition, head-pose estimation, and human-computer interaction.
Summary of the Invention
Several methods for improving the data used by the CNN model have been explored. In addition, VGG-16 was employed in the proposed system for feature-classification learning. Stacking layers in this model has been essential to analyzing its performance correctly, since a CNN works by embedding layers that link neurons together. The data collection used to train this model includes both genuine faces and highly altered synthetic face photos. This approach is effective for facial verification, and the dataset used places a premium on detecting face spoofing.
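The VGG-16 feature-classification idea above can be sketched as follows in Keras. This is a minimal illustration, not the patented configuration: the frozen VGG-16 base, the 128-unit head, and the sigmoid real-vs-spoof output are assumed choices (in practice `weights="imagenet"` would typically be used; `weights=None` here avoids a download).

```python
# Hedged sketch: VGG-16 as a feature extractor with a small binary
# classification head (real face vs. spoofed face). Layer sizes are
# illustrative assumptions, not taken from the specification.
import tensorflow as tf

def build_spoof_classifier(input_shape=(224, 224, 3)):
    # VGG-16 convolutional base, used here as a fixed feature extractor.
    base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                       input_shape=input_shape)
    base.trainable = False
    # Small head that maps VGG-16 features to a single spoof probability.
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The frozen base keeps training limited to the head, which suits the small dataset described later in the specification.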
Detailed Description of the Invention
An individual's face is significant in social interactions because of the information it conveys about that person's identity and emotions. Compared with machines, humans excel at recognizing individual faces. Therefore, applications such as face identification, facial expression recognition, head-pose estimation, and human-computer interaction rely heavily on an accurate and reliable automated face detection system. Here, however, we explain how to protect against face impersonation using a deep convolutional neural network (CNN). In this case, CNN-learned features can be used instead of hand-crafted ones, since they are often more effective at identifying discriminative qualities in data-driven methods. First, we compiled a database of known fake and real faces for our Real and Fake Face Detection study. TensorFlow and Keras, both open-source libraries that provide interfaces for artificial neural networks (ANNs), are used. For this, mostly internet-based research was necessary. We gathered all available datasets, including both genuine and sham photographs, through independent web research, and we manually applied feature extraction and machine learning methods to extract characteristics relating to facial expressions. We use the Real and Fake Face Detection dataset for this development. When the dataset is unpacked, it contains two folders: one with genuine photos and another with fakes. This data is then used as the training set.
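Turning the two unpacked folders into a labelled training list can be sketched as below. The folder names `training_real` and `training_fake` follow the Kaggle "Real and Fake Face Detection" layout described later; the function name and the 1/0 label convention are assumptions for illustration.

```python
# Minimal sketch: walk the dataset's two subfolders and pair each image
# path with a label (1 = real face, 0 = fake face). Folder names follow
# the Kaggle dataset layout; adjust to the actual unpacked structure.
from pathlib import Path

def collect_labelled_paths(root):
    """Return a list of (image_path, label) pairs for training."""
    samples = []
    for folder, label in (("training_real", 1), ("training_fake", 0)):
        for img in sorted((Path(root) / folder).glob("*.jpg")):
            samples.append((img, label))
    return samples
```

A loader like this feeds directly into the resizing and grayscale steps described next.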
The first step is to scale all the pictures to the same dimensions: images are stretched or shrunk to fit a standard format, and all photos must be resized without losing quality. The colors are also stripped from these pictures. We combined CNN with other algorithms, most notably VGG-16, and then evaluated our findings against past studies and other researchers' results in an effort to improve our accuracy rate. Researchers have extensive experience in a wide range of CNN-related fields, and several publications review studies on face recognition, face spoofing, feature extraction, and related topics. Face anti-spoofing technology helps distinguish real faces from fake ones, a growing concern in the field of biometrics. In this description, we discuss some of the earlier research publications that we have reviewed. Using a novel deep CNN architecture, we provide a method for preventing face spoofing. To differentiate between real and fake video or picture sequences, it employs the LBP-TOP descriptor to extract relevant spatiotemporal information. Unlike most existing methods, it does not rely on common preprocessing steps such as face detection, face refinement, and rescaling. LBP-TOP is based on the analysis of textural features; fake faces generally have a different texture than real ones. In this work, we investigate whether a strong countermeasure can be built from Local Binary Pattern (LBP) image sequences in both time and space. The first layer of the CNN uses raw pixels to locate edges. By using edges to detect basic shapes, and then shapes to identify more complex features such as facial contours, the second layer detects higher-level traits.
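The claim that a CNN's first layer responds to edges can be illustrated with a fixed Sobel kernel, a hand-set analogue of a learned first-layer filter. The plain convolution loop below is an assumption for clarity (real frameworks use optimized kernels, and, like most deep-learning libraries, this computes cross-correlation).

```python
# Hedged illustration: convolving an image with a Sobel kernel highlights
# vertical edges, mimicking what a trained first CNN layer tends to learn.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Applied to an image with a vertical brightness step, the response is zero in flat regions and strong at the edge, which is exactly the behavior attributed to the first layer above.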
Given their greater information density compared to earlier layers, the final three convolutional layers are selected for this task. Two datasets are used to test the effectiveness of the proposed method: on the CASIA Face Anti-Spoofing Database and the REPLAY-ATTACK dataset, the method showed complete discrimination between genuine access and impostor attacks and very competitive results. The CASIA database covers 50 distinct subjects (20 for training and 30 for the test set). Twelve sequences are available for each subject: three for legitimate access and nine for impostor attacks. Each subject's recordings are captured at one of three quality settings, and warped-photo attacks, cut-photo attacks, and video-replay attacks are all covered by the CASIA database. The REPLAY-ATTACK dataset additionally supplies a validation set, and results are reported in terms of the EER calculated on the development set and the HTER calculated on the test set. The score for an attack or genuine video is calculated by averaging the scores of all frames of the video, so the error rate is assessed per video rather than per frame. The final video score reflects whether the presented face was genuine.
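The per-video scoring protocol above can be sketched as follows: frame scores are averaged into one video score, and HTER is the mean of the false-rejection and false-acceptance rates at a chosen threshold. The higher-score-means-genuine convention and the example numbers are assumptions for illustration.

```python
# Hedged sketch of per-video scoring and HTER, as described in the text.
import numpy as np

def video_score(frame_scores):
    """One score per video: the mean over all of its frame scores."""
    return float(np.mean(frame_scores))

def hter(genuine_scores, attack_scores, threshold):
    """Half Total Error Rate at a threshold (higher score = more genuine)."""
    frr = np.mean(np.asarray(genuine_scores) < threshold)   # genuine rejected
    far = np.mean(np.asarray(attack_scores) >= threshold)   # attack accepted
    return (frr + far) / 2.0
```

In the standard protocol the threshold itself is fixed at the EER point on the development set before HTER is read off the test set.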
Throughout this invention, we have described in detail the steps used to gather data, to label and refine that data, to validate the dataset, and the test data used in the validation process. We are not shocked by the exponential growth of cybercrime in the current age. Companies are spending large sums on machine learning experts to research biometric face recognition for security purposes. But even the most sophisticated methods of face recognition have limitations: recognition systems can be tricked by being shown photos of random persons. A proper dataset must be selected in order to properly categorize face spoofing methods. The "Real and Fake Face Detection" dataset used in this investigation was made available on Kaggle by Yonsei University's Department of Computer Science; the risks associated with using a false identity inspired its development. The dataset consists of human face images that have been digitally altered to a high standard. Images are composited from many distinct faces, with considerable changes made to each individual's appearance. Because the developers took into account the training of classifiers on these photographs, only trained professionals were allowed to edit them. The dataset has two subdirectories: (i) training_real (1081 files) and (ii) training_fake (960 files). Classifying and refining data: these pictures are split into two groups, authentic and fraudulent. Skilled professionals altered the fake photos by fusing together previously separate elements. Since our models rely on accurate data, the genuine photographs were captured in an unstaged manner. Training dataset: to fit the model, we used around 1634 photos. This accounts for over 80% of the whole dataset and includes both authentic and fake photos. The photographs were used as input to our models, which were then run on the data.
Twenty percent of these 1634 training photos were set aside for an independent evaluation of the training process, forming the validation dataset. As more model configurations are tuned against the validation dataset, its assessment becomes increasingly biased. Twenty percent of the whole dataset was used for our tests; this held-out validation data shows us how the models score. After the models finish processing the training dataset, they move on to the test dataset. CNN models vary in their accuracy. Note that the testing and training datasets are disjoint: neither was used in the other.
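The split described above can be sketched as below: shuffle once, carve the test set from the whole, then the validation set from the remaining training portion, so the three sets stay disjoint. The function name, fractions as defaults, and the fixed seed are illustrative assumptions.

```python
# Hedged sketch of the 80/20 splits described in the text: a disjoint
# test set, plus a validation set taken from the training portion.
import random

def split_dataset(samples, test_frac=0.2, val_frac=0.2, seed=42):
    items = list(samples)
    random.Random(seed).shuffle(items)          # shuffle once, reproducibly
    n_test = int(len(items) * test_frac)
    test, train_full = items[:n_test], items[n_test:]
    n_val = int(len(train_full) * val_frac)     # 20% of the training portion
    val, train = train_full[:n_val], train_full[n_val:]
    return train, val, test
```

Keeping the test fraction out before the validation split is what guarantees the disjointness noted at the end of the paragraph above.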
6 Claims & 1 Figure
Brief description of Drawing
The figure illustrates an exemplary embodiment of the invention.
Figure 1: The Process of the Proposed Invention
Claims: The scope of the invention is defined by the following claims:
Claim:
1. A system/method to detect face spoofing using Artificial Neural Network-based machine learning algorithms, said system/method comprising the steps of:
a) The system starts with dataset collection from various cameras (1); all the attributes are then used to compress the image datasets (2).
b) The proposed invention then divides the datasets into training and testing sets using TensorBoard (3), from which the proposed model is built with CNN algorithms (4).
c) The system calculates the loss function (5), applies the optimizer (6), and predicts synthetic face images (7).
2. As mentioned in claim 1, face-spoofing detection is often used as a pre-processing step in face recognition systems to determine whether the face image was captured from a real person or a printed photograph.
3. As per claim 1, the face is significant in social interactions due to the information it communicates about a person's identity and emotions; compared with machines, humans excel at recognizing individual faces.
4. As mentioned in claim 1, we first evaluate how well color texture features perform compared with their grayscale analogues; we then combine complementary facial color texture representations into the final face description used in the anti-spoofing approach and evaluate it against leading algorithms.
5. As mentioned in claim 1, these photos are split into two distinct sets: authentic and fabricated. Skilled professionals are responsible for altering the bogus photos by fusing together previously separate elements.
6. As per claim 1, one popular kind of ANN is the Convolutional Neural Network (CNN). At first, it was put to use in the identification of complex architectural patterns inside images.
| # | Name | Date |
|---|---|---|
| 1 | 202341077573-REQUEST FOR EARLY PUBLICATION(FORM-9) [15-11-2023(online)].pdf | 2023-11-15 |
| 2 | 202341077573-FORM-9 [15-11-2023(online)].pdf | 2023-11-15 |
| 3 | 202341077573-FORM FOR STARTUP [15-11-2023(online)].pdf | 2023-11-15 |
| 4 | 202341077573-FORM FOR SMALL ENTITY(FORM-28) [15-11-2023(online)].pdf | 2023-11-15 |
| 5 | 202341077573-FORM 1 [15-11-2023(online)].pdf | 2023-11-15 |
| 6 | 202341077573-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [15-11-2023(online)].pdf | 2023-11-15 |
| 7 | 202341077573-EVIDENCE FOR REGISTRATION UNDER SSI [15-11-2023(online)].pdf | 2023-11-15 |
| 8 | 202341077573-DRAWINGS [15-11-2023(online)].pdf | 2023-11-15 |
| 9 | 202341077573-COMPLETE SPECIFICATION [15-11-2023(online)].pdf | 2023-11-15 |