Abstract: The present invention relates to a smart wearable stick-on camera system with integrated Wi-Fi connectivity, onboard facial recognition, and an autonomous emergency alert mechanism. The system comprises an adhesive-backed housing configured for unobtrusive attachment to skin, garments, or structural substrates, the housing incorporating a miniature imaging sensor, a microcontroller with hardware-accelerated facial recognition capability, a wireless communication module supporting secure data transmission, and an emergency alert subsystem operable via tactile or voice triggers. The system further integrates a power management subsystem comprising a thin-film rechargeable battery and energy harvesting layers for extended operation. In use, the system continuously captures video frames, executes real-time facial recognition algorithms, and generates recognition metadata while preserving biometric privacy by storing only encrypted feature vectors. Upon detection of an emergency event, the device compresses and transmits live video, recognition metadata, and geolocation coordinates via Wi-Fi to pre-registered endpoints.
Description:
TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to wearable electronics and imaging systems, and more particularly to a smart wearable stick-on camera device that integrates wireless connectivity, onboard face recognition, and an emergency alert system. The invention also relates to a method for implementing such a device in a machine or structural form factor, enabling discreet, non-intrusive, and autonomous monitoring and alert generation for personal safety, surveillance, and situational awareness applications.
BACKGROUND OF THE INVENTION
Wearable cameras have become increasingly popular as consumer electronics for personal documentation, health monitoring, and security applications. Existing wearable cameras are commonly implemented as head-mounted devices, eyeglasses, or clip-on modules that require continuous manual operation or visible placement on the user’s body. While such devices can capture images and videos, they suffer from limitations in terms of bulkiness, conspicuousness, and inadequate real-time intelligence.
A critical drawback of conventional wearable cameras is their dependence on manual activation. In emergency situations such as harassment, assault, or medical distress, users may be physically unable to press a button or retrieve their mobile phone. Furthermore, while existing devices may provide video recording, they lack autonomous facial recognition, context-based alerts, and intelligent integration with wireless networks to notify caregivers, security personnel, or emergency services.
Some existing smart cameras provide connectivity features such as Bluetooth or Wi-Fi streaming. However, these systems are either power-inefficient or fail to incorporate adaptive protocols for secure, low-latency emergency data transfer. Similarly, available personal safety devices such as panic buttons or dedicated SOS alarms do not provide integrated video evidence capture with real-time identification of nearby individuals.
Therefore, there exists a strong need for a smart wearable camera system that is lightweight, unobtrusive, and stick-on in design, capable of autonomous face recognition, emergency detection, and real-time wireless communication to designated responders. The invention described herein addresses these needs by presenting a comprehensive hardware and software architecture for a self-contained wearable safety device.
The development of wearable imaging systems has emerged as a growing field over the past decade, largely driven by the convergence of miniaturized sensors, low-power processors, and advanced wireless communication technologies. At the heart of this evolution is the desire to provide individuals with unobtrusive devices that can capture real-time visual data, serve as safety companions, and extend human situational awareness. Early wearable cameras were predominantly designed for recreational activities such as sports recording, action documentation, or lifelogging, with popular models taking the form of clip-on devices, chest mounts, or head-strapped modules. While these devices provided portability and hands-free usage, they were largely dependent on manual activation and did not incorporate any level of autonomous intelligence. They functioned primarily as passive recorders and could not address scenarios where the user was incapacitated or under threat, thereby limiting their applicability for personal safety or emergency response.
The consumer market has also witnessed the proliferation of smart glasses and head-mounted displays incorporating front-facing cameras. While these systems represented an advance over purely passive cameras, they came with their own set of drawbacks. Smart glasses, for instance, are bulky, highly conspicuous, and not universally acceptable in social or professional environments. Their visible form factor often deters adoption due to privacy concerns raised by bystanders who are aware of potential recording. Furthermore, the hardware complexity of such systems contributes to high cost and limited accessibility. The user must also wear them continuously for effective coverage, which is impractical for individuals who require lightweight and discreet safety solutions. The reliance on large batteries further compromises comfort, making such devices unsuitable for prolonged usage, especially in daily life contexts where discretion and convenience are paramount.
Another line of existing solutions lies in body-worn cameras, which have gained widespread deployment among law enforcement personnel. These devices are typically attached to uniforms and designed for continuous recording during patrols or interactions with the public. While they provide a valuable record of events, their drawbacks become apparent when evaluated for civilian use. Firstly, body-worn cameras are relatively bulky and designed to be ruggedized for institutional deployment, making them unsuitable for everyday wear by ordinary individuals. Secondly, they are primarily oriented toward continuous recording rather than selective, intelligent capture, resulting in enormous data storage and processing requirements. Moreover, most law enforcement body cameras lack integrated real-time facial recognition and are generally used for archival purposes rather than live emergency detection. They also require manual activation or centralized management, which limits their effectiveness in fast-evolving emergencies faced by civilians who may not have the time or presence of mind to activate such a device.
On the consumer safety side, the market offers several personal security devices such as panic buttons, SOS pendants, and wristband alarms. These devices are primarily designed to emit loud audio signals or send distress notifications to preconfigured contacts when activated. While effective in drawing attention in some scenarios, they lack visual context. The absence of an imaging system means that no evidence of the situation is captured or transmitted, reducing the ability of responders to assess the nature and severity of the emergency. Moreover, panic buttons require conscious activation, which is not always possible if the user is physically restrained or incapacitated. In addition, many of these devices rely on cellular networks for alert transmission, and connectivity can be inconsistent in indoor or remote environments. Their design as standalone devices without integrated imaging and recognition capability makes them insufficient for modern safety needs where real-time intelligence and evidence capture are critical.
Recent advances in miniaturized cameras integrated into mobile phones have also been leveraged as an ad hoc solution for emergency imaging. Mobile devices allow users to record video and send it through messaging applications or cloud services. However, the effectiveness of smartphones in emergency situations is limited by several factors. First, they are handheld devices requiring the user’s active participation, meaning the user must unlock, open an application, and start recording. In high-stress or physically dangerous situations, these steps are impractical. Second, smartphones are easily visible, and an aggressor may prevent recording or seize the device. Third, while smartphones can transmit data over Wi-Fi or cellular networks, they are not designed for autonomous operation and lack real-time facial recognition as an embedded, always-active feature. Battery drain is also a significant concern, as continuous camera operation and wireless transmission rapidly deplete mobile power reserves, leaving users vulnerable in extended incidents.
Other wearable health and fitness devices, such as smartwatches, offer limited safety features like fall detection or heart-rate-based distress alerts. These are valuable for medical emergencies but insufficient for threats involving human aggression, intrusion, or abduction, where visual evidence and identification of potential perpetrators are critical. Smartwatches with cameras have been prototyped, but these designs are constrained by small form factors that severely limit camera field of view and quality. Their battery capacities are also insufficient to support continuous image capture and wireless streaming. Thus, while health-oriented wearables offer a model for passive monitoring and wireless communication, they do not provide comprehensive coverage for personal security applications.
Efforts have been made to combine imaging with communication in compact wearable modules, particularly in the form of clip-on cameras with Bluetooth or Wi-Fi streaming. Such devices can send captured video to smartphones or cloud servers for live viewing. While closer in functionality to a safety monitoring device, they still suffer from fundamental drawbacks. Most lack integrated intelligence, requiring the user to manually initiate streaming, and provide no automated face recognition to contextualize recorded individuals. They also consume considerable power when streaming continuously, necessitating frequent recharging. Moreover, because they are clip-on devices, they are vulnerable to being dislodged or removed in hostile encounters, reducing reliability in actual emergencies. The absence of discreet adhesive-based attachment further compromises their concealability, making them less effective in scenarios where the user wishes to remain unobtrusive while documenting an unfolding threat.
From an architectural perspective, existing systems generally adopt one of two models: continuous recording or manual activation. Continuous recording provides comprehensive data but imposes high energy and storage burdens, as well as significant privacy concerns. Manual activation minimizes unnecessary data capture but fails in emergencies where the user cannot physically engage the device. Neither approach fully satisfies the requirement for an intelligent wearable that can autonomously sense, decide, and act in real-time without depending entirely on user intervention. Moreover, many existing solutions lack seamless integration between imaging, recognition, and emergency communication. Systems that offer face recognition typically do so through post-processing on remote servers, which introduces latency, requires stable connectivity, and raises privacy concerns. Conversely, devices that provide instant alerts often lack video or identity verification, leading to false alarms or limited contextual utility.
In addition to functional shortcomings, cost and accessibility are critical drawbacks of current solutions. Advanced wearable cameras and smart glasses with integrated features are prohibitively expensive for average consumers, limiting their deployment in the populations that would benefit most from enhanced personal safety. Devices requiring subscriptions to cloud services for data storage or recognition processing add recurring costs. Many systems are also platform-dependent, requiring proprietary applications or specific smartphone models, thereby restricting their compatibility and usability. Furthermore, their size, bulk, and weight often discourage users from incorporating them into daily routines, which undermines the concept of always-available safety monitoring.
The cumulative effect of these drawbacks is a clear technological gap between available wearable imaging systems and the actual requirements of a robust, consumer-friendly personal safety device. Individuals require a solution that is lightweight, discreet, and adhesive-based for unobtrusive attachment to clothing or skin, eliminating the conspicuousness of glasses or bulky clips. The device must integrate real-time face recognition without relying solely on external servers, thus ensuring low-latency decision-making and privacy preservation. At the same time, it must provide emergency alerts with contextual video evidence and geolocation, transmitted wirelessly over secure channels, without draining power excessively. Current devices address some but not all of these requirements, leaving users with fragmented solutions that do not fully protect them in critical situations.
It is within this context that the concept of a smart wearable stick-on camera with Wi-Fi connectivity, face recognition, and an emergency alert system finds its technical relevance. The shortcomings of existing systems underscore the importance of a holistic design that combines discreet form factor, onboard intelligence, and efficient wireless communication into a single, unified device. By overcoming the limitations of bulkiness, conspicuousness, manual dependency, and lack of contextual evidence in emergencies, such an invention fills a pressing unmet need in the wearable safety technology domain.
SUMMARY OF THE INVENTION
The invention discloses a smart wearable stick-on camera device comprising an adhesive-mounted housing incorporating a miniature wide-angle camera module, a microcontroller unit (MCU) or system-on-chip (SoC) with dedicated face recognition capability, a wireless connectivity module configured for Wi-Fi communication, a local storage element, and an emergency alert subsystem comprising a tactile or voice-triggered interface. The device further integrates a power supply, preferably a thin-film rechargeable battery with energy harvesting support.
The system operates by continuously monitoring the wearer’s surroundings using the camera module. The onboard processor executes a lightweight facial recognition algorithm to detect and identify faces within the camera’s field of view. When unknown, suspicious, or predefined “watch-list” faces are encountered, the system autonomously records and stores video snippets while simultaneously preparing an emergency alert packet. In response to user-triggered or algorithm-detected emergencies, the alert subsystem transmits data including live video, facial recognition metadata, and geolocation coordinates via Wi-Fi to preconfigured endpoints such as smartphones, security control centers, or cloud servers.
The invention also discloses a method of operating such a device, wherein steps include adhesive mounting, camera activation, facial recognition processing, alert triggering, and wireless emergency communication. In another embodiment, the device may be embedded into helmets, exoskeletons, or structural safety machines to serve as an intelligent monitoring subsystem.
The principal object of the present invention is to provide a smart wearable stick-on camera that offers a discreet, lightweight, and adhesive-based form factor for seamless integration with the user’s clothing, skin, or wearable gear, thereby overcoming the bulkiness and conspicuousness associated with existing wearable cameras. Another object of the invention is to incorporate intelligent face recognition capabilities directly into the device’s processing unit so that identification of known, unknown, or suspicious individuals can occur locally and in real time without dependency on external servers, ensuring both low latency and preservation of user privacy. A further object of the invention is to furnish an integrated emergency alert subsystem that can be activated either manually through tactile or voice commands or autonomously through detection of predefined emergency conditions, such that visual evidence, facial recognition metadata, and geolocation information can be transmitted instantly to designated responders. It is also an object of the invention to deliver secure wireless connectivity through a Wi-Fi communication module that supports encrypted data transmission, thereby ensuring reliability and confidentiality of alerts under emergency scenarios.
Another object of the invention is to create a power-efficient wearable platform that leverages thin-film rechargeable batteries supplemented by energy harvesting mechanisms such as photovoltaic or piezoelectric modules, enabling continuous monitoring and extended operation without frequent recharging. It is further an object of the invention to allow seamless integration of the stick-on camera into different environments and machines, including industrial helmets, exoskeletons, and structural safety frameworks, so that the device can serve not only personal users but also institutional and occupational safety applications. An additional object of the invention is to provide adaptive system intelligence, wherein the device autonomously adjusts recognition thresholds, communication protocols, and energy allocation in response to real-time environmental and user conditions, thereby ensuring robust and context-aware performance. The invention also seeks to provide a cost-effective solution that can be widely adopted across diverse user groups, eliminating the need for expensive proprietary equipment or subscription-based services. Collectively, these objects ensure that the invention addresses the critical gaps in current wearable imaging and safety devices, offering a unified solution that is unobtrusive, intelligent, secure, and practical for real-world emergency response.
BRIEF DESCRIPTION OF FIGURES
These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read in conjunction with the accompanying drawings, in which like characters represent like parts throughout the drawings, wherein:
Figure 1 displays a block diagram of a smart wearable stick-on camera system; and
Figure 2 displays a flow chart of a method for operating a smart wearable stick-on camera system.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not necessarily have been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help improve understanding of aspects of the present disclosure. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
Detailed Description of the Invention
For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended; such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as illustrated therein, are contemplated as would normally occur to one skilled in the art to which the invention relates.
It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
Reference throughout this specification to “an aspect”, “another aspect” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrase “in an embodiment”, “in another embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises...a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings.
Referring to Figure 1, a block diagram of a smart wearable stick-on camera system is illustrated. The system 100 comprises: an adhesive housing (102) configured for conformal attachment to a surface of a human body, garment, or structural substrate, the housing fabricated from a breathable polymeric laminate with integrated microcavity channels for skin ventilation; a miniature imaging sensor (104) disposed within said housing, the sensor comprising a CMOS array with a wide-angle lens configured to capture video frames at a resolution of at least 720p, the imaging sensor operatively coupled to a signal conditioning circuit for noise suppression and exposure compensation; a microcontroller unit (MCU) or system-on-chip (SoC) (106) disposed in said housing and operatively connected to the imaging sensor, the MCU including an embedded hardware accelerator for executing a facial recognition algorithm based on extracted feature vectors and similarity scoring against a locally stored encrypted identity database; a wireless communication module (108) disposed within said housing and operatively linked to said MCU, the wireless communication module configured for Wi-Fi connectivity supporting both access-point and peer-to-peer modes, and further configured to transmit compressed video packets and facial recognition metadata via an encrypted communication protocol; an emergency alert subsystem (110) operatively linked to the MCU, the subsystem comprising at least one tactile activation zone integrated within the adhesive housing and at least one voice-trigger recognition circuit, the subsystem configured to autonomously transmit an emergency alert signal in response to detected trigger events; and a power management subsystem (112) disposed within said housing, the subsystem comprising a thin-film rechargeable lithium-polymer battery laminated within the adhesive patch and at least one energy harvesting layer selected from a photovoltaic layer and a piezoelectric film, the power management subsystem configured to dynamically allocate energy to the imaging sensor, MCU, and wireless communication module based on operational states.
In an embodiment, the adhesive housing (102) further comprises a micro-perforated breathable polymer layer and a medical-grade pressure-sensitive adhesive, such that prolonged attachment to the skin of the user minimizes irritation and ensures stable mounting without slippage during physical activity, the housing further including a biocompatible hydrophobic coating to resist perspiration ingress into the internal electronics.
In an embodiment, the imaging sensor (104) is configured with a low-light enhancement circuit including an active infrared illumination source disposed within the housing, the infrared source being automatically activated under ambient illumination levels below a predetermined threshold, thereby enabling the system to maintain facial recognition accuracy in dimly lit environments without requiring external light sources.
In an embodiment, the MCU (106) executes a facial recognition module that employs cascaded feature extraction using local binary patterns and convolutional feature maps optimized for execution on a low-power DSP core, the MCU further being configured to store only hashed feature vectors in local memory rather than raw image data, thereby ensuring user privacy and reducing the risk of unauthorized biometric leakage.
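By way of a non-limiting illustration, the privacy-preserving storage of hashed feature vectors described above may be sketched as follows. All names, the quantization scale, and the salt are hypothetical; a practical system would perform similarity scoring on the feature vectors before hashing, since a cryptographic digest matches only identical quantized inputs.

```python
import hashlib

def quantize_features(vector, scale=100):
    """Quantize a float feature vector to fixed-point so that
    hashing is stable across runs (illustrative scheme)."""
    return tuple(int(round(v * scale)) for v in vector)

def hash_feature_vector(vector, salt=b"device-secret"):
    """Store only a salted SHA-256 digest of the quantized
    feature vector, never the raw biometric data."""
    quantized = quantize_features(vector)
    payload = salt + repr(quantized).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Enrollment and later matching compare digests, not raw features.
enrolled = hash_feature_vector([0.12, 0.87, 0.45])
probe = hash_feature_vector([0.12, 0.87, 0.45])
assert enrolled == probe      # identical features give identical digests
assert len(enrolled) == 64    # hex-encoded SHA-256
```

Because only digests are retained, a compromise of local memory would not expose reconstructable biometric templates, consistent with the biometric-privacy objective stated above.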
In an embodiment, the wireless communication module (108) is further configured to establish a secure TLS-based handshake with pre-registered mobile devices or remote servers, the module employing a lightweight data compression protocol that fragments video streams into segmented packets with adaptive bitrates, such that the emergency alert transmission is sustained under conditions of variable bandwidth without loss of critical visual evidence.
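The fragmentation of a compressed video segment into sequence-numbered packets, as described above, may be sketched as follows under illustrative assumptions (the packet size and tuple layout are hypothetical, not a normative wire format):

```python
def fragment_stream(data: bytes, packet_size: int = 1200):
    """Split a compressed video segment into sequence-numbered
    packets small enough for variable-bandwidth links."""
    return [(seq, data[i:i + packet_size])
            for seq, i in enumerate(range(0, len(data), packet_size))]

packets = fragment_stream(b"\x00" * 3000, packet_size=1200)
assert [len(p) for _, p in packets] == [1200, 1200, 600]
assert [s for s, _ in packets] == [0, 1, 2]
# The receiver reorders by sequence number and reassembles losslessly.
assert b"".join(p for _, p in packets) == b"\x00" * 3000
```

Sequence numbering allows the receiving endpoint to detect gaps and reassemble the stream even when packets arrive out of order over the encrypted channel.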
In an embodiment, the emergency alert subsystem (110) comprises a capacitive tactile sensor embedded within the adhesive housing configured to distinguish intentional double-tap gestures from random environmental pressure, and a voice-trigger circuit implemented using a low-power wake-word engine trained on a predefined emergency keyword, the subsystem being capable of dual-modality activation to ensure redundancy in triggering emergency alerts.
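The temporal filtering that distinguishes an intentional double tap from incidental contact may be sketched as follows; the gap thresholds are illustrative values, not claimed parameters:

```python
def is_double_tap(tap_times_ms, min_gap=80, max_gap=400):
    """Treat two taps as a deliberate double tap only when their
    separation falls inside a plausible window; shorter gaps are
    sensor bounce, longer gaps are unrelated presses."""
    if len(tap_times_ms) < 2:
        return False
    gap = tap_times_ms[-1] - tap_times_ms[-2]
    return min_gap <= gap <= max_gap

assert is_double_tap([1000, 1250]) is True    # deliberate double tap
assert is_double_tap([1000, 1030]) is False   # bounce or brush
assert is_double_tap([1000, 2000]) is False   # two unrelated presses
```

Pairing this tactile path with the wake-word circuit gives the dual-modality redundancy described above: either trigger alone suffices to raise an alert.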
In an embodiment, the power management subsystem (112) further includes a power management integrated circuit (PMIC) configured to continuously monitor the state of charge of the thin-film battery, dynamically switch between direct sensor operation and low-power standby modes, and redirect harvested energy from the photovoltaic and piezoelectric layers to maintain a minimum operational reserve for emergency alert transmission even under depleted battery conditions.
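A toy version of the PMIC policy described above, which guards a minimum energy reserve for emergency transmission, may be sketched as follows (the reserve figure and the subsystem names are hypothetical):

```python
RESERVE_MAH = 20  # hypothetical minimum reserve for one emergency alert

def allocate_power(charge_mah, harvested_mah, emergency=False):
    """Below the reserve, only an emergency alert may draw power;
    harvested energy from the photovoltaic and piezoelectric
    layers tops up the available budget."""
    budget = charge_mah + harvested_mah
    if emergency:
        return {"camera": True, "wifi": True}    # alerts are always served
    if budget <= RESERVE_MAH:
        return {"camera": False, "wifi": False}  # guard the reserve
    return {"camera": True, "wifi": budget > 2 * RESERVE_MAH}

assert allocate_power(15, 0) == {"camera": False, "wifi": False}
assert allocate_power(15, 0, emergency=True) == {"camera": True, "wifi": True}
assert allocate_power(100, 5) == {"camera": True, "wifi": True}
```

The key property is that an emergency override bypasses the standby gating, so a depleted battery never silently disables the alert path.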
In an embodiment, the adhesive housing (102) further integrates a flexible printed circuit board (FPCB) substrate on which the imaging sensor, MCU, wireless communication module, and energy harvesting elements are mounted, the FPCB being encapsulated with a conformal silicone elastomer layer to provide shock absorption, mechanical flexibility, and sweat-proofing for the integrated electronics.
In an embodiment, the system further comprises a companion mobile application installed on a paired device, the application configured to receive real-time video and recognition metadata, to store encrypted logs of emergency alerts, and to permit remote configuration of recognition thresholds, contact lists, and operational parameters, wherein said configuration updates are transmitted back to the MCU over Wi-Fi in the form of encrypted control packets.
In an embodiment, the emergency alert signal transmitted by the wireless communication module (108) further comprises geolocation coordinates obtained via a GNSS interface linked to the companion mobile device, the system being configured to combine said geolocation data with captured facial recognition metadata into a structured alert payload, thereby enabling responders to simultaneously localize the user and identify potential aggressors in real-time.
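The structured alert payload combining geolocation and recognition metadata may be sketched as follows; the field names and values are illustrative assumptions, not a defined schema:

```python
import json
import time

def build_alert_payload(lat, lon, matches, video_ref):
    """Assemble a structured emergency alert payload (field names
    are illustrative, not a normative wire format)."""
    return json.dumps({
        "type": "EMERGENCY_ALERT",
        "timestamp": int(time.time()),
        "geo": {"lat": lat, "lon": lon, "source": "companion-GNSS"},
        "recognition": matches,  # e.g. hashed IDs with similarity scores
        "video": video_ref,      # reference to a compressed video segment
    })

payload = build_alert_payload(12.97, 77.59,
                              [{"id": "a1b2", "score": 0.91}],
                              "seg-0042.h264")
decoded = json.loads(payload)
assert decoded["geo"]["lat"] == 12.97
assert decoded["recognition"][0]["score"] == 0.91
```

Packaging location, identity metadata, and a video reference in one payload is what lets responders localize the user and assess the threat from a single message.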
Referring to Figure 2, a flow chart of a method of operating a smart wearable stick-on camera system comprising an adhesive housing, an imaging sensor, a microcontroller unit, a wireless communication module, an emergency alert subsystem, and a power management subsystem is illustrated. The method 200 comprises:
At step 202, the method 200 includes mounting the adhesive housing on a surface selected from a skin surface, an article of clothing, or a wearable structure, the adhesive housing being fabricated from a breathable polymer laminate configured to permit prolonged attachment without irritation;
At step 204, the method 200 includes activating the imaging sensor to continuously capture environmental video frames at a resolution of at least 720p, and passing said frames through a signal conditioning circuit for noise suppression and exposure normalization;
At step 206, the method 200 includes processing said video frames in real time on the microcontroller unit, wherein the MCU executes an embedded facial recognition algorithm comprising extraction of facial feature vectors, comparison of said feature vectors against an encrypted local database of stored identities, and generation of recognition metadata corresponding to matched or unmatched individuals;
At step 208, the method 200 includes determining whether an emergency trigger event has occurred, wherein said trigger event is detected either through (i) a tactile gesture on a capacitive sensing zone embedded in the adhesive housing, (ii) a voice-trigger detection circuit identifying a preconfigured emergency keyword, or (iii) a recognition module detecting the presence of an unauthorized or suspicious individual;
At step 210, the method 200 includes, upon detection of said emergency trigger event, activating the wireless communication module to establish a secure Wi-Fi connection via TLS encryption with a pre-registered endpoint selected from a mobile device, a remote server, or a cloud-based monitoring system;
At step 212, the method 200 includes compressing and packetizing the captured video frames and associated recognition metadata into a structured alert payload, further integrating geolocation coordinates obtained from a GNSS module of a companion device;
At step 214, the method 200 includes transmitting said structured alert payload over Wi-Fi to the pre-registered endpoint, thereby enabling remote responders to receive live video, identity recognition data, and user location in real-time; and
At step 216, the method 200 includes managing system power through the power management subsystem by dynamically allocating energy between the imaging sensor, MCU, and wireless module, and diverting harvested energy from photovoltaic and piezoelectric elements to sustain minimum operational reserve required for emergency alert transmission under depleted battery conditions.
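The control flow of steps 202 through 216 may be sketched as a single monitoring cycle as follows. The function and parameter names are hypothetical, and the recognition and transmission stages are injected as callables purely so the flow itself can be exercised:

```python
def run_cycle(frame, triggers, battery_ok, recognize, transmit):
    """One pass of the Figure 2 loop: recognize the frame,
    evaluate trigger conditions, then packetize and transmit
    only when an emergency is detected and power permits."""
    metadata = recognize(frame)                        # step 206
    emergency = (triggers.get("tap") or triggers.get("voice")
                 or metadata.get("suspicious"))        # step 208
    if emergency and battery_ok:
        payload = {"frame": frame, "meta": metadata}   # steps 210-212
        return transmit(payload)                       # step 214
    return None                                        # keep monitoring

sent = []
ack = run_cycle("frame-1", {"tap": True}, True,
                lambda f: {"suspicious": False},
                lambda p: sent.append(p) or "ACK")
assert ack == "ACK" and len(sent) == 1       # trigger fires, payload sent
assert run_cycle("frame-2", {}, True,
                 lambda f: {"suspicious": False},
                 sent.append) is None        # no trigger, nothing sent
```

Note that any one of the three trigger sources (tactile, voice, or recognition-based) is sufficient to enter the alert branch, mirroring the alternatives recited in step 208.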
In an embodiment, the facial recognition algorithm executed on the MCU comprises a cascaded feature extraction process utilizing local binary pattern descriptors followed by convolutional feature mapping, the method further including a step of discarding raw image frames post-processing and storing only hashed feature vectors in encrypted memory to ensure biometric privacy.
In an embodiment, the tactile trigger detection step further includes distinguishing between intentional double-tap gestures and incidental contact by applying a temporal threshold filter to the capacitive sensor output, thereby minimizing false-positive emergency activations caused by environmental pressure or accidental touch.
In an embodiment, the video compression and packetization step further includes implementing an adaptive bitrate streaming protocol, the method dynamically adjusting frame resolution and transmission packet size in response to real-time variations in Wi-Fi bandwidth, thereby ensuring continuity of emergency alert transmission without loss of essential facial detail.
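The adaptive bitrate selection described above may be sketched as a simple profile ladder; the bandwidth thresholds, resolutions, and packet sizes are illustrative assumptions:

```python
# Illustrative ladder: (min_bandwidth_kbps, resolution, packet_bytes)
LADDER = [(2000, "720p", 1400), (800, "480p", 1200), (0, "240p", 900)]

def select_profile(bandwidth_kbps):
    """Pick the highest profile the measured bandwidth supports,
    degrading resolution before dropping the stream entirely."""
    for min_bw, resolution, packet_bytes in LADDER:
        if bandwidth_kbps >= min_bw:
            return resolution, packet_bytes
    return LADDER[-1][1:]

assert select_profile(5000) == ("720p", 1400)  # ample bandwidth
assert select_profile(1000) == ("480p", 1200)  # constrained link
assert select_profile(100) == ("240p", 900)    # degraded but alive
```

Because the lowest rung has a zero threshold, the alert stream degrades gracefully rather than terminating, preserving at least coarse facial detail under poor Wi-Fi conditions.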
In an embodiment, the power management step further includes monitoring the state of charge of the thin-film battery, placing the imaging sensor into low-power standby mode when no facial motion is detected for a predetermined interval, and instantly reactivating the imaging sensor upon detection of movement through an embedded motion-sensing circuit, thereby optimizing energy usage while maintaining emergency readiness.
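The motion-gated standby behaviour of this embodiment reduces to a small state machine. The 30-second idle timeout is an illustrative value, not one drawn from the specification.

```python
class SensorPowerGate:
    """Motion-gated standby policy: the imaging sensor enters low-power
    standby after `idle_timeout` seconds without motion and reactivates
    on the next motion interrupt."""
    def __init__(self, idle_timeout=30.0):
        self.idle_timeout = idle_timeout
        self.last_motion = 0.0
        self.sensor_on = True

    def on_motion(self, now):
        self.last_motion = now
        self.sensor_on = True          # instant reactivation

    def tick(self, now):
        if now - self.last_motion > self.idle_timeout:
            self.sensor_on = False     # standby; trigger circuits stay live
        return self.sensor_on
```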
The smart wearable stick-on camera system described herein integrates compact hardware and intelligent software into a unified platform designed for unobtrusive personal safety and emergency response. The device is configured within an adhesive-backed housing fabricated from a breathable polymer laminate that enables comfortable attachment to human skin, garments, or structural supports. The housing is designed with microcavity ventilation channels and a hydrophobic outer coating to ensure that extended wear does not cause irritation and that internal electronics remain protected from perspiration or environmental moisture. Within this compact enclosure resides a flexible printed circuit board that supports the imaging sensor, microcontroller, wireless transceiver, emergency trigger circuits, and power management components, all encapsulated within a conformal elastomer layer to maintain structural integrity during daily wear and physical activity.
The imaging subsystem employs a miniature CMOS sensor equipped with a wide-angle lens capable of capturing high-definition video frames. The sensor is coupled to a signal conditioning module that performs real-time exposure normalization, dynamic range optimization, and noise suppression, ensuring that facial features remain discernible under varying lighting conditions. To enhance low-light operation, the system integrates an infrared illumination source that activates automatically when ambient light drops below a preset threshold. This allows the camera to maintain recognition accuracy in dimly lit environments without requiring external lighting, an important feature for nighttime safety applications.
Captured video frames are streamed directly to the microcontroller unit or system-on-chip located within the housing. This MCU is selected with embedded digital signal processing cores and a dedicated hardware accelerator to efficiently execute facial recognition algorithms under strict power constraints. The recognition algorithm employed in the device follows a cascaded feature extraction architecture. Initially, lightweight descriptors such as local binary patterns (LBP) are extracted from the incoming frames to rapidly isolate potential facial regions. These candidate regions are then passed through convolutional feature mapping layers optimized for embedded execution. The convolutional process generates multi-scale feature vectors representing spatial patterns of facial landmarks such as eye contours, nose bridge geometry, and lip boundaries. To ensure low memory usage, the algorithm discards raw video frames after processing and retains only hashed feature vectors within an encrypted local database, thereby preserving biometric privacy.
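The first stage of the cascade, the local binary pattern descriptor, can be shown on a single 3x3 pixel patch. This is the standard LBP formulation rather than any device-specific variant; the neighbour ordering chosen here is one common convention.

```python
def lbp_code(patch):
    """Classic 3x3 local binary pattern: threshold the eight neighbours
    against the centre pixel and pack the comparison bits, clockwise
    from the top-left neighbour, into a single byte."""
    centre = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code
```

Histograms of these byte codes over candidate regions are what make LBP cheap enough to run as a pre-filter before the heavier convolutional stage.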
Once the feature vectors are extracted, the MCU computes similarity scores against a set of reference vectors stored in secure memory. These reference vectors correspond to authorized individuals pre-enrolled through a companion mobile application. A threshold-based decision process determines whether the detected face belongs to a recognized individual, an unknown person, or a flagged suspicious category. If the individual is unknown or flagged, the MCU prepares metadata indicating the recognition result, which includes timestamped confidence scores and facial bounding box coordinates. This metadata can be stored locally in encrypted form or transmitted during an emergency alert.
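The threshold-based decision over reference vectors might be sketched as follows. Cosine similarity and the 0.85 threshold are assumptions of this sketch; the specification does not fix a particular similarity metric or threshold value.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify(probe, authorized, flagged, t_match=0.85):
    """Return ('recognized' | 'flagged' | 'unknown', best_score):
    'recognized' if the best match is an authorized reference clearing
    t_match, 'flagged' if the best match is on the flagged list,
    otherwise 'unknown'."""
    best_id, best = None, -1.0
    for ident, ref in {**authorized, **flagged}.items():
        score = cosine(probe, ref)
        if score > best:
            best_id, best = ident, score
    if best < t_match:
        return "unknown", best
    return ("flagged" if best_id in flagged else "recognized"), best
```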
The system continuously monitors for emergency triggers through two redundant modalities. A tactile activation zone is embedded in the adhesive housing as a capacitive sensor that can differentiate deliberate double-tap gestures from incidental contact by applying temporal and amplitude filters. In parallel, a voice trigger circuit operates in an ultra-low-power state, running a wake-word detection engine trained on a preconfigured emergency keyword. This ensures that a wearer can activate the system hands-free in situations where physical access to the device is restricted. In addition, the algorithm can autonomously classify an emergency trigger if the facial recognition module detects the presence of individuals not in the authorized database under conditions deemed threatening by user-defined rules.
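The fusion of the trigger modalities described above can be sketched as a single classification function. The `threat_rule` predicate and the example night-time rule are hypothetical illustrations; the specification leaves such rules user-configurable.

```python
def detect_trigger(double_tap, wake_word, recognition, threat_rule):
    """Fuse the three trigger modalities: tactile double-tap, voice
    wake word, and autonomous classification when an unknown face
    appears under user-defined rule conditions."""
    if double_tap:
        return "tactile"
    if wake_word:
        return "voice"
    if recognition.get("label") == "unknown" and threat_rule(recognition):
        return "autonomous"
    return None

# Illustrative user rule: an unknown face at night counts as threatening.
def night_rule(meta):
    return meta.get("hour", 12) >= 22 or meta.get("hour", 12) < 6
```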
Upon trigger detection, the MCU activates the wireless communication subsystem, which is designed around a compact Wi-Fi transceiver supporting both infrastructure and peer-to-peer modes. A secure handshake using Transport Layer Security (TLS) protocols is established with pre-registered mobile devices, security servers, or cloud services. To minimize latency and data loss, the system employs adaptive video compression and packetization. This involves encoding frames using a lightweight codec optimized for embedded use, then fragmenting them into packets with dynamically adjusted sizes according to real-time bandwidth conditions. The adaptive bitrate streaming protocol ensures that the emergency transmission maintains continuity, even under poor connectivity, while preserving sufficient facial detail for recognition by remote responders.
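The secure handshake described above corresponds, on the client side, to constructing a TLS context before opening the socket. A minimal sketch using Python's standard `ssl` module: passing `ca_file` pins a specific certificate authority, while `None` falls back to the platform trust store (a deployed device would pin the endpoint certificate rather than trust the full store).

```python
import ssl

def make_client_context(ca_file=None):
    """TLS client context for the secure handshake with a
    pre-registered endpoint."""
    ctx = ssl.create_default_context(cafile=ca_file)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols
    return ctx
```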
The structured emergency payload transmitted by the system consists of three main elements: compressed video frames showing the ongoing situation, recognition metadata containing feature vector match results and confidence levels, and geolocation coordinates. The geolocation data is obtained either from an integrated GNSS module or through tethering with a paired mobile device. The payload is assembled into a time-synchronized data stream and transmitted in real time to the designated endpoint, enabling responders to receive both visual context and analytical insights into the identities of nearby individuals.
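Assembly of the three-element payload can be sketched with standard-library compression and serialization. The JSON field names are illustrative, not mandated by the claims, and a real device would use the lightweight video codec described above rather than `zlib`.

```python
import base64
import json
import zlib

def assemble_payload(video_chunks, recognition_meta, geo, timestamp_s):
    """Time-synchronized alert payload carrying the three elements
    named above: compressed video, recognition metadata, and
    geolocation coordinates."""
    video = base64.b64encode(zlib.compress(b"".join(video_chunks))).decode()
    return json.dumps({
        "ts": timestamp_s,
        "video": video,
        "recognition": recognition_meta,
        "geo": {"lat": geo[0], "lon": geo[1]},
    })
```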
The power management architecture is designed to sustain continuous readiness without frequent recharging. A thin-film lithium-polymer battery is laminated into the adhesive structure, supplying baseline power. To extend operational life, the system integrates energy harvesting layers, including a flexible photovoltaic film to capture ambient light and a piezoelectric layer that generates charge during user motion. A power management integrated circuit (PMIC) supervises energy allocation by prioritizing emergency-critical modules. During idle periods, the PMIC transitions the imaging sensor and Wi-Fi transceiver into low-power standby modes while keeping the motion sensor and voice trigger circuit active. Upon detecting environmental motion or receiving a wake word, the PMIC instantly reallocates energy to reactivate the relevant subsystems. Furthermore, the PMIC maintains a protected energy reserve dedicated solely to emergency alert transmission, ensuring that the system can always issue at least one complete alert payload even under depleted battery conditions.
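The protected-reserve policy reduces to a budget check in which routine loads may only draw energy above the reserve floor, while an emergency transmission may drain the reserve itself. Millijoule figures in this sketch are illustrative units, not values from the specification.

```python
class ReserveManager:
    """Sketch of the PMIC reserve policy: ordinary draws are denied if
    they would dip below `reserve_mj`; emergency draws may use the
    reserve down to zero."""
    def __init__(self, capacity_mj, reserve_mj):
        self.capacity = capacity_mj
        self.reserve = reserve_mj
        self.level = capacity_mj

    def harvest(self, mj):
        # Photovoltaic / piezoelectric input tops the battery back up.
        self.level = min(self.level + mj, self.capacity)

    def draw(self, mj, emergency=False):
        floor = 0 if emergency else self.reserve
        if self.level - mj < floor:
            return False               # denied: reserve stays protected
        self.level -= mj
        return True
```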
The companion mobile application further extends system functionality by serving as both a configuration and monitoring platform. Through the application, users may update the facial recognition database, define emergency contact lists, adjust sensitivity thresholds, and review encrypted logs of past events. Configuration updates are transmitted as encrypted control packets to the MCU, which adjusts recognition thresholds or network parameters in real time. The application also enables responders to remotely access live video streams during active emergencies, providing situational awareness beyond the immediate environment of the user.
Through the integration of these hardware and software elements, the invention provides a wearable system capable of discreet operation, real-time facial recognition, and autonomous emergency alert transmission. The adhesive form factor ensures unobtrusive deployment, while the embedded algorithmic framework provides context-aware decision-making without reliance on continuous user input. Unlike conventional wearable cameras that are either passive recorders or manually activated devices, the present system creates a self-sustaining, intelligent safety node that bridges the gap between personal monitoring and immediate emergency communication. The detailed algorithm design, particularly the cascaded feature extraction and adaptive recognition thresholds, ensures that the device operates within the constraints of low-power embedded hardware while delivering accurate, privacy-preserving identification and timely alerts.
In one embodiment, the smart wearable stick-on camera comprises a polymeric adhesive patch housing with embedded electronic components. The adhesive patch is designed to adhere to skin, clothing, or helmets while being lightweight and breathable. Within the patch housing resides a miniature CMOS camera sensor configured with a wide-angle lens system for capturing high-resolution video frames. The sensor is interfaced with a microcontroller unit or application-specific SoC integrating a digital signal processor (DSP) and AI accelerator cores for real-time image analysis.
The MCU is configured with an embedded face recognition module trained on deep convolutional feature extraction methods optimized for low-resource execution. The module is capable of distinguishing known individuals stored in a local encrypted facial database from unknown or suspicious individuals. Detection thresholds and recognition accuracy are dynamically adjustable via a companion mobile application, allowing the wearer to customize privacy and sensitivity levels.
The wireless connectivity module comprises an integrated Wi-Fi transceiver capable of both peer-to-peer (Wi-Fi Direct) and access-point-based communication. In normal operation, the module remains in low-power standby mode, activating only upon face recognition events or emergency trigger activation to conserve energy. The communication stack supports secure TLS-based transmission protocols ensuring encrypted transfer of sensitive video and metadata.
The emergency alert subsystem is configured with multiple activation mechanisms. A tactile pressure-sensitive zone on the adhesive patch enables manual activation through a double-tap gesture. Additionally, the subsystem incorporates a voice keyword detection circuit that allows the wearer to trigger alerts through preconfigured emergency words even in hands-free conditions. Upon activation, the subsystem immediately establishes a wireless connection with pre-registered receivers and transmits live video stream, recognized facial identities, and GPS coordinates obtained through a companion smartphone or integrated GNSS module.
The power supply subsystem consists of an ultra-thin lithium-polymer battery laminated within the patch structure. The subsystem is supplemented by an energy harvesting unit comprising a flexible photovoltaic film and piezoelectric layer that generates charge from ambient light and wearer motion, thereby extending operational life. A power management integrated circuit (PMIC) governs dynamic allocation of energy between sensing, processing, and wireless transmission modules.
In a machine or structural embodiment, the stick-on camera unit may be integrated into wearable helmets, exoskeletal braces, or vehicular safety frames. In such implementations, the device not only monitors the wearer’s environment but also functions as a structural safety module. For example, when mounted on an industrial helmet, the device can autonomously identify unauthorized individuals entering a restricted site and simultaneously alert security personnel.
The method of the invention comprises steps of mounting the device on a desired surface, activating the imaging module for continuous environmental capture, executing onboard face recognition, detecting emergency triggers, and transmitting live data packets via Wi-Fi to designated receivers. The method further includes steps of adaptive power management, encrypted storage of sensitive video frames, and dynamic adjustment of recognition thresholds.
The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
Claims:
1. A smart wearable stick-on camera system comprising:
an adhesive housing configured for conformal attachment to a surface of a human body, garment, or structural substrate, the housing fabricated from a breathable polymeric laminate with integrated microcavity channels for skin ventilation;
a miniature imaging sensor disposed within said housing, the sensor comprising a CMOS array with a wide-angle lens configured to capture video frames at a resolution of at least 720p, the imaging sensor operatively coupled to a signal conditioning circuit for noise suppression and exposure compensation;
a microcontroller unit (MCU) or system-on-chip (SoC) disposed in said housing and operatively connected to the imaging sensor, the MCU including an embedded hardware accelerator for executing a facial recognition algorithm based on extracted feature vectors and similarity scoring against a locally stored encrypted identity database;
a wireless communication module disposed within said housing and operatively linked to said MCU, the wireless communication module configured for Wi-Fi connectivity supporting both access-point and peer-to-peer modes, and further configured to transmit compressed video packets and facial recognition metadata via an encrypted communication protocol;
an emergency alert subsystem operatively linked to the MCU, the subsystem comprising at least one tactile activation zone integrated within the adhesive housing and at least one voice-trigger recognition circuit, the subsystem configured to autonomously transmit an emergency alert signal in response to detected trigger events; and
a power management subsystem disposed within said housing, the subsystem comprising a thin-film rechargeable lithium-polymer battery laminated within the adhesive patch and at least one energy harvesting layer selected from a photovoltaic layer and a piezoelectric film, the power management subsystem configured to dynamically allocate energy to the imaging sensor, MCU, and wireless communication module based on operational states.
2. The system according to claim 1, wherein the adhesive housing further comprises a micro-perforated breathable polymer layer and a medical-grade pressure-sensitive adhesive, such that prolonged attachment to the skin of the user minimizes irritation and ensures stable mounting without slippage during physical activity, the housing further including a biocompatible hydrophobic coating to resist perspiration ingress into the internal electronics.
3. The system according to claim 1, wherein the imaging sensor is configured with a low-light enhancement circuit including an active infrared illumination source disposed within the housing, the infrared source being automatically activated under ambient illumination levels below a predetermined threshold, thereby enabling the system to maintain facial recognition accuracy in dimly lit environments without requiring external light sources; and wherein the MCU executes a facial recognition module that employs cascaded feature extraction using local binary patterns and convolutional feature maps optimized for execution on a low-power DSP core, the MCU further being configured to store only hashed feature vectors in local memory rather than raw image data, thereby ensuring user privacy and reducing the risk of unauthorized biometric leakage.
4. The system according to claim 1, wherein the wireless communication module is further configured to establish a secure TLS-based handshake with pre-registered mobile devices or remote servers, the module employing a lightweight data compression protocol that fragments video streams into segmented packets with adaptive bitrates, such that the emergency alert transmission is sustained under conditions of variable bandwidth without loss of critical visual evidence.
5. The system according to claim 1, wherein the emergency alert subsystem comprises a capacitive tactile sensor embedded within the adhesive housing configured to distinguish intentional double-tap gestures from random environmental pressure, and a voice-trigger circuit implemented using a low-power wake-word engine trained on a predefined emergency keyword, the subsystem being capable of dual-modality activation to ensure redundancy in triggering emergency alerts.
6. The system according to claim 1, wherein the power management subsystem further includes a power management integrated circuit (PMIC) configured to continuously monitor the state of charge of the thin-film battery, dynamically switch between direct sensor operation and low-power standby modes, and redirect harvested energy from the photovoltaic and piezoelectric layers to maintain a minimum operational reserve for emergency alert transmission even under depleted battery conditions.
7. The system according to claim 1, wherein the adhesive housing further integrates a flexible printed circuit board (FPCB) substrate on which the imaging sensor, MCU, wireless communication module, and energy harvesting elements are mounted, the FPCB being encapsulated with a conformal silicone elastomer layer to provide shock absorption, mechanical flexibility, and sweat-proofing for the integrated electronics.
8. The system according to claim 1, wherein the system further comprises a companion mobile application installed on a paired device, the application configured to receive real-time video and recognition metadata, to store encrypted logs of emergency alerts, and to permit remote configuration of recognition thresholds, contact lists, and operational parameters, wherein said configuration updates are transmitted back to the MCU over Wi-Fi in the form of encrypted control packets.
9. The system according to claim 1, wherein the emergency alert signal transmitted by the wireless communication module further comprises geolocation coordinates obtained via a GNSS interface linked to the companion mobile device, the system being configured to combine said geolocation data with captured facial recognition metadata into a structured alert payload, thereby enabling responders to simultaneously localize the user and identify potential aggressors in real time.
10. A method of operating a smart wearable stick-on camera system comprising an adhesive housing, an imaging sensor, a microcontroller unit, a wireless communication module, an emergency alert subsystem, and a power management subsystem, the method comprising the steps of:
mounting the adhesive housing on a surface selected from a skin surface, an article of clothing, or a wearable structure, the adhesive housing being fabricated from a breathable polymer laminate configured to permit prolonged attachment without irritation;
activating the imaging sensor to continuously capture environmental video frames at a resolution of at least 720p, and passing said frames through a signal conditioning circuit for noise suppression and exposure normalization;
processing said video frames in real time on the microcontroller unit, wherein the MCU executes an embedded facial recognition algorithm comprising extraction of facial feature vectors, comparison of said feature vectors against an encrypted local database of stored identities, and generation of recognition metadata corresponding to matched or unmatched individuals;
determining whether an emergency trigger event has occurred, wherein said trigger event is detected either through (i) a tactile gesture on a capacitive sensing zone embedded in the adhesive housing, (ii) a voice-trigger detection circuit identifying a preconfigured emergency keyword, or (iii) a recognition module detecting the presence of an unauthorized or suspicious individual;
upon detection of said emergency trigger event, activating the wireless communication module to establish a secure Wi-Fi connection via TLS encryption with a pre-registered endpoint selected from a mobile device, a remote server, or a cloud-based monitoring system;
compressing and packetizing the captured video frames and associated recognition metadata into a structured alert payload, further integrating geolocation coordinates obtained from a GNSS module of a companion device;
transmitting said structured alert payload over Wi-Fi to the pre-registered endpoint, thereby enabling remote responders to receive live video, identity recognition data, and user location in real time; and
managing system power through the power management subsystem by dynamically allocating energy between the imaging sensor, MCU, and wireless module, and diverting harvested energy from photovoltaic and piezoelectric elements to sustain the minimum operational reserve required for emergency alert transmission under depleted battery conditions.
| # | Name | Date |
|---|---|---|
| 1 | 202541091214-STATEMENT OF UNDERTAKING (FORM 3) [23-09-2025(online)].pdf | 2025-09-23 |
| 2 | 202541091214-REQUEST FOR EARLY PUBLICATION(FORM-9) [23-09-2025(online)].pdf | 2025-09-23 |
| 3 | 202541091214-POWER OF AUTHORITY [23-09-2025(online)].pdf | 2025-09-23 |
| 4 | 202541091214-FORM-9 [23-09-2025(online)].pdf | 2025-09-23 |
| 5 | 202541091214-FORM 1 [23-09-2025(online)].pdf | 2025-09-23 |
| 6 | 202541091214-FIGURE OF ABSTRACT [23-09-2025(online)].pdf | 2025-09-23 |
| 7 | 202541091214-DRAWINGS [23-09-2025(online)].pdf | 2025-09-23 |
| 8 | 202541091214-DECLARATION OF INVENTORSHIP (FORM 5) [23-09-2025(online)].pdf | 2025-09-23 |
| 9 | 202541091214-COMPLETE SPECIFICATION [23-09-2025(online)].pdf | 2025-09-23 |