Abstract: A REAL-TIME EYE-BLINK LIVENESS VALIDATION SYSTEM FOR EDGE-IOT DEVICES
The invention discloses a system and method for real-time eye-blink liveness validation optimized for Edge-IoT devices. The system comprises a camera interface for capturing facial input, a preprocessing unit for illumination normalization and landmark detection, a lightweight convolutional neural network (CNN) for classifying eye states, and a temporal blink analysis module for validating dynamic blink patterns. An anti-spoofing subsystem enhances resistance against photographs, video replays, and 3D masks by analyzing temporal inconsistencies. An inference engine enables efficient on-device execution, with the decision module delivering liveness validation within 250 milliseconds. The invention operates using less than 40 megabytes of memory and under 1.8 watts of power, making it suitable for ARM-based processors in resource-constrained IoT environments. By ensuring that biometric data is processed entirely on-device without transmission, the invention preserves privacy while enabling secure, real-time, and scalable authentication for applications including smart locks, mobile devices, ATM systems, and surveillance.
Description: FIELD OF THE INVENTION
The present invention relates to the field of biometric authentication and edge computing. More specifically, it concerns a real-time eye-blink liveness validation system and method designed for Edge-IoT devices. The invention provides a lightweight, low-latency, and privacy-preserving biometric security framework that can detect spontaneous eye blinks to confirm user liveness without requiring cloud-based computation or specialized hardware.
BACKGROUND OF THE INVENTION
A lightweight, real-time eye-blink authentication system for Edge-IoT devices requires a resource-efficient algorithm that can reliably identify spontaneous blinks in order to authenticate user liveness. Such a system must operate dependably under low power budgets, intermittent network connectivity, and changing environmental or illumination conditions. It must employ methods such as illumination normalization, lightweight neural network architectures, and temporal blink analysis to provide high accuracy without depleting system resources. Security must be ensured through anti-spoofing measures, even in offline operation. Concurrently, the system must provide an unhindered user experience with negligible delay or user effort, suitable for real-world, privacy-constrained deployments.
Biometric authentication systems, including facial recognition, are popular due to their convenience and non-invasive nature. However, such systems are increasingly susceptible to presentation (spoofing) attacks using printed images, replayed videos, or 3D masks. These attacks can mislead facial recognition systems into granting unauthorized access and pose significant security and privacy risks.
Legacy liveness detection solutions intended to mitigate these weaknesses are usually based on sophisticated algorithms that consume large amounts of computational power, and are therefore unfit for resource-limited edge and IoT devices. Such approaches also tend to depend on cloud processing or dedicated hardware, adding latency, network dependence, and exposure to privacy threats.
Edge-IoT devices such as intelligent door locks, video surveillance nodes, and mobile terminals are becoming ubiquitous in today's connected world. They have minimal processing power and memory, and hence require lightweight, efficient solutions capable of carrying out real-time biometric verification. For successful user verification without impacting system responsiveness and autonomy, it is necessary to implement a liveness detection scheme that does not need to be hosted on central servers but can be executed directly on edge devices without degrading performance or security.
This invention outlines a new approach to real-time eye-blink liveness verification specifically suited to Edge-IoT deployments. By exploiting the involuntary and dynamic nature of human blinks, the system enables accurate discrimination between genuine subjects and spoofing artifacts. The invention centers on a low-latency, efficient algorithm that can run locally, minimizing dependency on external servers and improving user privacy and system robustness. This context provides the backdrop for a secure, scalable, real-time defense against facial biometric spoofing in edge-based use cases.
OBJECTS OF INVENTION:
• To achieve a lightweight and efficient liveness detection system: design an algorithm that effectively identifies spontaneous eye-blinks with limited computational resources for real-time execution on edge and IoT devices.
• To make facial recognition systems more secure against spoofing attacks: identify and thwart common spoofing methods, including photo, video, and 3D mask attacks, by exploiting natural, involuntary human eye-blink behavior.
• To make privacy-preserving authentication possible: perform all biometric processing locally on the device, thus preventing the transmission of sensitive data over networks and minimizing privacy risks.
• To enable real-time operation in low-resource environments: tune the blink detection algorithm for low-latency execution on embedded devices without depending on cloud computing or high-end hardware.
• To be compatible with standard hardware.
US20230206700A1: A method, system, and computer readable medium are described to capture, detect, and recognize faces using machine learning and a single-stage face detector. A method to determine live faces from spoof 2D and spoof 3D images using a liveness score as well as a method to classify faces using machine learning deep convolutional neural networks is also described.
US11789699B2: A set of measurable encrypted feature vectors can be derived from any biometric data and/or physical or logical user behavioral data, and then, using an associated deep neural network (“DNN”) on the output (i.e., biometric feature vectors and/or behavioral feature vectors), an authentication system can determine matches or execute searches on encrypted data. Behavioral or biometric encrypted feature vectors can be stored and/or used in conjunction with respective classifications, or in subsequent comparisons, without fear of compromising the original data. In various embodiments, the original behavioral and/or biometric data is discarded responsive to generating the encrypted vectors. In other embodiments, helper networks can be used to filter identification inputs to improve the accuracy of the models that use encrypted inputs for classification.
Conventional biometric liveness detection systems are computationally intensive and often require specialized hardware such as infrared sensors or 3D cameras. Many rely on cloud-based processing, resulting in high latency, increased bandwidth usage, and exposure of sensitive biometric data to privacy risks. These systems are not suited for deployment in resource-constrained Edge-IoT devices, which must operate with limited power, memory, and network connectivity. Furthermore, traditional methods are vulnerable to spoofing attacks using high-quality photographs, videos, or 3D masks.
The present invention addresses these shortcomings by introducing a lightweight, real-time liveness validation system that works locally on Edge-IoT devices with standard RGB cameras. The system employs illumination normalization, temporal blink pattern recognition, and lightweight convolutional neural network (CNN) architectures optimized for edge hardware. This enables fast, energy-efficient, and privacy-preserving liveness detection that resists spoofing attacks while ensuring seamless user experience with minimal delay.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
The invention provides a system and method for real-time eye-blink liveness validation specifically optimized for Edge-IoT devices. The system comprises a camera interface for capturing live facial input, a preprocessing unit for illumination normalization and facial landmark detection, a lightweight CNN for eye-state classification, a temporal blink analysis module for detecting dynamic blink patterns, and a decision module for confirming liveness.
The system operates entirely on-device using an inference engine such as TensorFlow Lite, minimizing memory and power consumption while delivering decisions in under 250 milliseconds. A privacy-preserving framework ensures that biometric data is processed locally without transmission to external servers, reducing the risk of data breaches.
A spoof-resistant subsystem enhances security by analyzing temporal consistency of eye movements and detecting anomalies indicative of photographs, videos, or 3D mask attacks. The framework is energy-efficient, scalable, and adaptable to various applications such as smart locks, mobile authentication, ATM security, and surveillance.
By combining lightweight computer vision, optimized neural networks, and temporal blink analysis, the invention enables secure, efficient, and user-friendly biometric authentication on low-resource edge devices.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
This invention provides a real-time liveness detection mechanism that uses natural eye-blink patterns to authenticate users on Edge-IoT devices. In contrast to conventional biometric systems that depend heavily on cloud computation, custom hardware, or static facial analysis, the proposed method is specifically tailored for deployment on low-resource devices with common RGB cameras. The system identifies spontaneous, involuntary eye-blinks using lightweight computer vision with optimized temporal pattern recognition algorithms. This method allows real-time authentication, high resistance to spoofing attacks (e.g., images, videos, or masks), and privacy through fully local computation, with no need to send biometric data to external servers.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
FIGURE 2: OPERATIONAL WORKFLOW
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of “first,” “second,” “third,” and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as “first” and “second” may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention discloses a lightweight system for real-time eye-blink liveness validation optimized for Edge-IoT environments. The system employs a standard RGB camera to capture facial video streams at frame rates compatible with low-power embedded devices. Input frames undergo preprocessing, including illumination normalization and facial landmark detection, to ensure consistent accuracy under varying environmental conditions.
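The preprocessing stage above can be illustrated with a minimal sketch of illumination normalization. The specification does not name a particular algorithm; histogram equalization is one common, lightweight choice assumed here, shown on a tiny grayscale frame represented as nested lists.

```python
# Illustrative sketch of illumination normalization via histogram
# equalization on an 8-bit grayscale frame. The patent does not name a
# specific algorithm; this is one common, lightweight assumption.

def equalize_histogram(frame):
    """Map pixel intensities so they spread over the full 0-255 range.

    `frame` is a list of rows, each a list of 0-255 integers.
    """
    flat = [p for row in frame for p in row]
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function, scaled to the 0-255 range.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(flat)
    lut = [round(255 * c / n) for c in cdf]
    return [[lut[p] for p in row] for row in frame]


# A dim frame (values clustered near 0) gets stretched toward 255.
dim = [[10, 20], [30, 40]]
bright = equalize_histogram(dim)
```

In a deployed pipeline this step would typically run on the cropped face region before landmark detection, so that downstream eye-state classification sees consistent contrast regardless of ambient lighting.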
A lightweight CNN is deployed for eye-state classification, distinguishing between open and closed eye states. This CNN model is designed with minimal convolutional layers and occupies less than one megabyte of memory, making it suitable for execution on embedded ARM-based processors. The use of TensorFlow Lite or equivalent inference engines ensures efficient real-time performance under constrained computing resources.
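The sub-one-megabyte claim can be checked with back-of-envelope arithmetic. The layer sizes below (24x24 grayscale eye crop, three small convolutional layers, two dense layers) are illustrative assumptions, not taken from the patent; the point is that a three-conv-layer classifier of this shape fits the stated budget with large margin.

```python
# Back-of-envelope parameter count for a hypothetical three-conv-layer
# eye-state classifier. All layer sizes are illustrative assumptions.

def conv2d_params(in_ch, out_ch, k):
    """Weights (k*k*in_ch*out_ch) plus one bias per output channel."""
    return k * k * in_ch * out_ch + out_ch

def dense_params(in_units, out_units):
    return in_units * out_units + out_units

params = 0
params += conv2d_params(1, 8, 3)        # 24x24 grayscale crop -> 8 maps
params += conv2d_params(8, 16, 3)       # after 2x2 pooling: 12x12
params += conv2d_params(16, 32, 3)      # after 2x2 pooling: 6x6
params += dense_params(3 * 3 * 32, 16)  # flatten after final 2x2 pool
params += dense_params(16, 2)           # open / closed logits

size_bytes_fp32 = params * 4  # 4 bytes per weight, unquantized float32
size_bytes_int8 = params      # 1 byte per weight after int8 quantization
```

Even without quantization, roughly ten thousand parameters occupy well under 50 KB, so the one-megabyte ceiling of claim 2 leaves generous headroom for framework overhead.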
Temporal analysis plays a crucial role in validating liveness. The blink detection module examines dynamic patterns of eye closure and reopening over consecutive frames. By modeling involuntary and spontaneous blink behaviors, the system can reliably differentiate genuine human activity from spoofing attempts such as static images or replayed videos.
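The temporal stage described above can be sketched as a simple state machine over the classifier's per-frame outputs: a blink is a brief open-to-closed-to-open transition. The frame-count threshold is an illustrative assumption.

```python
# Minimal sketch of temporal blink analysis: count complete
# open -> closed -> open transitions whose closed phase is short enough
# to be an involuntary blink. The threshold is an illustrative assumption.

def count_blinks(states, max_closed_frames=10):
    """`states` is a sequence of 'open'/'closed' labels, one per frame."""
    blinks = 0
    closed_run = 0
    for state in states:
        if state == "closed":
            closed_run += 1
        else:
            # A closed run that just ended counts as a blink only if it
            # was brief; a long closure (eyes held shut, or a replay
            # paused on a closed-eye frame) does not.
            if 0 < closed_run <= max_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks


frames = ["open"] * 5 + ["closed"] * 3 + ["open"] * 5
assert count_blinks(frames) == 1          # one genuine blink
assert count_blinks(["open"] * 20) == 0   # static photo: no blink
```

A static photograph produces an unbroken run of one state and therefore zero blinks, which is exactly the property the module exploits to reject spoofed input.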
An anti-spoofing subsystem enhances robustness by detecting inconsistencies in temporal eye movements. Static artifacts from printed photographs, video replays, or 3D mask simulations are identified through micro-motion analysis. This subsystem ensures high spoof resistance, blocking over 94% of known presentation attacks while maintaining a smooth user experience.
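The micro-motion analysis mentioned above can be approximated by mean absolute difference between consecutive frames: a printed photograph held before the camera yields near-zero inter-frame variation, while a live face always shows small motion. The threshold value is an illustrative assumption.

```python
# Sketch of the anti-spoofing micro-motion check: mean absolute
# difference between consecutive grayscale frames. A static artifact
# (printed photo) produces near-zero motion; a live face does not.
# The motion threshold is an illustrative assumption.

def mean_abs_diff(frame_a, frame_b):
    pixels_a = [p for row in frame_a for p in row]
    pixels_b = [p for row in frame_b for p in row]
    return sum(abs(a - b) for a, b in zip(pixels_a, pixels_b)) / len(pixels_a)

def looks_static(frames, motion_threshold=0.5):
    """Flag a clip whose average inter-frame motion is suspiciously low."""
    diffs = [mean_abs_diff(frames[i], frames[i + 1])
             for i in range(len(frames) - 1)]
    return sum(diffs) / len(diffs) < motion_threshold


static = [[[100, 100], [100, 100]]] * 5                       # photo-like clip
moving = [[[100 + 3 * i, 100], [100, 100 - 3 * i]] for i in range(5)]
```

A production subsystem would restrict this analysis to the eye region identified by the landmark detector and combine it with other cues (e.g., replay-frame periodicity), but the underlying signal is the same.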
The decision module integrates outputs from the CNN and temporal blink analysis to provide liveness validation within 250 milliseconds. This ensures negligible user inconvenience and seamless interaction in authentication workflows. The system is designed to comply with privacy regulations such as GDPR by processing all biometric data locally on the device, without persistent storage or external transmission.
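The fusion performed by the decision module can be sketched as a simple conjunctive rule over the three upstream signals. The fusion logic and thresholds below are illustrative assumptions, not taken from the patent.

```python
# Sketch of the decision module: fuse the CNN's eye-state confidence,
# the blink count from temporal analysis, and the anti-spoofing flag
# into one accept/reject decision. Thresholds are illustrative.

def validate_liveness(mean_eye_confidence, blink_count, spoof_flag,
                      min_confidence=0.8, min_blinks=1):
    """Return True only when every signal agrees the subject is live."""
    if spoof_flag:                        # anti-spoofing subsystem veto
        return False
    if mean_eye_confidence < min_confidence:
        return False                      # classifier too uncertain
    return blink_count >= min_blinks      # at least one spontaneous blink


assert validate_liveness(0.93, blink_count=2, spoof_flag=False) is True
assert validate_liveness(0.93, blink_count=0, spoof_flag=False) is False
assert validate_liveness(0.93, blink_count=2, spoof_flag=True) is False
```

Giving the anti-spoofing subsystem an unconditional veto reflects the design priority stated above: a confident classifier and a plausible blink count must never override detected presentation-attack evidence.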
Energy efficiency is achieved through model compression, quantization, and optimized use of device resources. The system consumes less than 1.8 watts of power and operates with under 40 megabytes of memory, making it ideal for deployment on resource-constrained IoT and edge platforms.
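The quantization step underpinning these budgets can be illustrated with the arithmetic of symmetric int8 quantization. Real deployments would rely on an inference framework's converter (e.g., TensorFlow Lite, as named earlier); this sketch only shows the underlying mapping and its 4x size reduction versus float32.

```python
# Sketch of symmetric int8 post-training quantization: map float
# weights into [-127, 127] with a shared scale factor. Shown with a
# tiny illustrative weight vector, not a real trained model.

def quantize_int8(weights):
    """Return int8-range values plus the scale needed to recover floats."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]


w = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(w)
restored = dequantize(q, scale)
# Each restored weight lies within one quantization step of the original,
# while storage drops from 4 bytes (float32) to 1 byte per weight.
assert all(abs(a - b) <= scale for a, b in zip(w, restored))
```

The four-fold storage reduction, combined with integer-only arithmetic on ARM cores, is what makes the stated 40 MB memory and 1.8 W power envelopes attainable.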
A hybrid training strategy is employed to improve accuracy and robustness. Custom datasets captured from edge device cameras are combined with open-source blink datasets and synthetically generated blink sequences under varied lighting and occlusion conditions. This diverse training data ensures strong generalization across environments and user demographics.
Deployment is supported in multi-tier frameworks, including fully autonomous edge-only operation and optional cloud synchronization for periodic model updates. Dynamic workload distribution allows adaptive balancing between edge and cloud resources depending on latency requirements and network conditions.
The invention is versatile and applicable across domains such as secure access control, smart homes, mobile authentication, ATM security, and embedded surveillance. By delivering reliable real-time liveness detection on constrained hardware, the invention advances biometric security for modern connected infrastructure.
Best Method of Working
The best method of working involves implementing the system on an ARM-based processor integrated within an IoT device such as a smart lock or mobile terminal. A standard RGB camera captures live facial input, and the preprocessing module performs illumination normalization and facial landmark detection. The lightweight CNN classifies eye states, while the temporal blink analysis module validates dynamic blink patterns. Decisions are rendered locally within 250 milliseconds, ensuring real-time usability. The anti-spoofing subsystem analyzes temporal consistency to resist attacks using photos, videos, or masks. The model is trained on a combination of custom edge-device datasets, open-source blink datasets, and synthetically generated sequences to ensure robustness. This configuration offers maximum effectiveness, energy efficiency, and privacy preservation for real-world biometric authentication.
The current invention pertains to a new system and method for real-time eye-blink-based liveness detection that is especially tailored for use on Edge-IoT (Internet of Things) devices. The main aim of this system is to effectively and accurately ascertain the liveness of a human subject by utilizing natural, spontaneous eye blinks, hence offering robust protection against spoofing attacks as well as impersonation attacks in biometric authentication systems.
Edge-IoT devices such as smart locks, wearable devices, embedded access terminals, surveillance nodes, and mobile security modules have, in general, limited computational and energy budgets, no constant internet connectivity, and are often subject to time-varying ambient lighting and environmental conditions. They are not well-suited for edge deployment by conventional methods for liveness detection, which depend on high-power cloud computation or active user collaboration.
• Low-Latency Interaction: Maintains processing speeds under 250 ms for seamless, real-time usability.
• Effective Spoof Attack Prevention: Blocks over 94% of visual spoof attempts, including high-resolution photos, videos, and 3D masks.
• Optimized for Edge Hardware: Built for minimal power and memory overhead, ideal for portable and embedded devices.
• Privacy-Centric Design: Ensures sensitive biometric information is never transmitted, staying entirely on the local device.
This invention presents a light-weight, real-time eye-blink liveness detection system tailored for Edge-IoT devices. It employs a minimal vision processing pipeline and temporal analysis of blink patterns to identify authentic human presence effectively. The system works purely offline and learns to adapt to different environmental conditions and is also spoof-resistant against attacks with static images or recorded videos. Focusing on user privacy and low energy use, it allows secure biometric authentication in resource-limited IoT devices, without requiring cloud connectivity or costly hardware like depth sensors.
Claims: 1. A system for real-time eye-blink liveness validation on Edge-IoT devices, comprising:
a camera interface configured to capture facial video input;
a preprocessing unit adapted to perform illumination normalization and facial landmark detection;
a lightweight convolutional neural network (CNN) configured to classify eye states as open or closed;
a temporal blink analysis module adapted to evaluate dynamic blink patterns over consecutive frames;
an anti-spoofing subsystem configured to detect presentation attacks including photographs, videos, and 3D masks;
an inference engine configured for on-device execution with minimal computational overhead;
a decision module configured to validate user liveness in real time; and
an output interface configured to provide authentication signals to an application or device.
2. The system as claimed in claim 1, wherein the lightweight CNN model occupies less than one megabyte of memory and comprises no more than three convolutional layers.
3. The system as claimed in claim 1, wherein the preprocessing unit normalizes illumination and extracts facial landmarks to improve robustness under varying environmental conditions.
4. The system as claimed in claim 1, wherein the temporal blink analysis module identifies involuntary human blink patterns to differentiate genuine users from spoofing attempts.
5. The system as claimed in claim 1, wherein the anti-spoofing subsystem detects static artifacts or micro-motion inconsistencies indicative of spoofing.
6. The system as claimed in claim 1, wherein the inference engine is implemented using TensorFlow Lite or an equivalent lightweight framework.
7. The system as claimed in claim 1, wherein all biometric data is processed locally without persistent storage or external transmission.
8. The system as claimed in claim 1, wherein the system operates using less than 40 megabytes of memory and consumes less than 1.8 watts of power.
9. The system as claimed in claim 1, wherein the output interface is configured for integration with IoT applications including smart locks, mobile devices, and ATM terminals.
10. A method for real-time eye-blink liveness validation on Edge-IoT devices, comprising the steps of:
capturing facial video input through a camera interface;
preprocessing the input using illumination normalization and facial landmark detection;
classifying eye states as open or closed using a lightweight CNN model;
analyzing temporal blink patterns to detect spontaneous human blinks;
detecting spoofing attacks using an anti-spoofing subsystem;
validating liveness through a decision module within 250 milliseconds; and
providing authentication signals to an application or device through an output interface.
| # | Name | Date |
|---|---|---|
| 1 | 202541090170-STATEMENT OF UNDERTAKING (FORM 3) [22-09-2025(online)].pdf | 2025-09-22 |
| 2 | 202541090170-REQUEST FOR EARLY PUBLICATION(FORM-9) [22-09-2025(online)].pdf | 2025-09-22 |
| 3 | 202541090170-POWER OF AUTHORITY [22-09-2025(online)].pdf | 2025-09-22 |
| 4 | 202541090170-FORM-9 [22-09-2025(online)].pdf | 2025-09-22 |
| 5 | 202541090170-FORM FOR SMALL ENTITY(FORM-28) [22-09-2025(online)].pdf | 2025-09-22 |
| 6 | 202541090170-FORM 1 [22-09-2025(online)].pdf | 2025-09-22 |
| 7 | 202541090170-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-09-2025(online)].pdf | 2025-09-22 |
| 8 | 202541090170-EVIDENCE FOR REGISTRATION UNDER SSI [22-09-2025(online)].pdf | 2025-09-22 |
| 9 | 202541090170-EDUCATIONAL INSTITUTION(S) [22-09-2025(online)].pdf | 2025-09-22 |
| 10 | 202541090170-DRAWINGS [22-09-2025(online)].pdf | 2025-09-22 |
| 11 | 202541090170-DECLARATION OF INVENTORSHIP (FORM 5) [22-09-2025(online)].pdf | 2025-09-22 |
| 12 | 202541090170-COMPLETE SPECIFICATION [22-09-2025(online)].pdf | 2025-09-22 |