Abstract: The present disclosure provides a multi-robot networked system (100) and a method (400) for handwashing monitoring and for promoting hygiene habits through social robotic interaction. The system includes at least one social robot (104) having a camera (202), a speaker (204), a microphone (206), an LCD touch screen (208), and two rotational degrees of freedom (210), coordinated by processing systems (110) executing behavioural analysis. The apparatus enables personalized hygiene education through a feedback generation system (122) delivering adaptive nudges based on real-time handwashing detection, managed by a behavioural pattern processing unit (120) that analyzes behavioural data and determines optimal intervention timing. Unlike conventional hygiene education methods that use static materials, this interactive approach achieves sustained behaviour modification through a user identification fusion system (116) and reinforcement learning modules. The multi-robot architecture enables collaborative hygiene monitoring across educational environments with improved compliance rates compared to traditional approaches, while maintaining engaging, child-friendly interactions.
Description:
TECHNICAL FIELD
[0001] The present disclosure relates to the field of robotic systems and IoT-based monitoring technologies. More particularly, the present disclosure relates to a multi-robot networked system for handwashing station monitoring.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Maintaining proper hand hygiene remains a critical challenge in educational environments where children interact closely throughout the day. Educational institutions rely on traditional methods including static posters, verbal reminders from teachers, and periodic hygiene demonstrations that lack real-time engagement and fail to establish consistent handwashing behaviors among young students.
[0004] Current hygiene monitoring approaches in schools primarily depend on passive observation and manual compliance tracking, which cannot provide immediate feedback or personalized guidance to individual children. These conventional systems lack the capability to assess handwashing technique accuracy, monitor adherence to recommended handwashing steps, or leverage peer influence and social dynamics that significantly impact behavior formation in young learners.
[0005] Therefore, there exists a requirement for an enhanced hygiene monitoring approach that provides real-time interaction, enables collaborative learning experiences among children, and delivers sustained behaviour modification through engaging technological interfaces while addressing the limitations inherent in traditional hygiene education methods.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0007] An object of the present disclosure is to provide a multi-robot networked system that enables real-time monitoring of handwashing activities and delivers interactive feedback through social robotic interfaces positioned at handwashing stations.
[0008] Another object of the present disclosure is to provide a collaborative hygiene education system that leverages peer interaction dynamics and group-based learning mechanisms to reinforce proper handwashing behaviors among children in educational environments.
[0009] Yet another object of the present disclosure is to provide an adaptive behavior monitoring mechanism that tracks individual hygiene patterns and generates personalized interventions based on historical interaction data to promote sustained habit formation.
SUMMARY
[0010] This section is provided to introduce certain objects and aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0011] In an aspect, the present disclosure provides a multi-robot networked system for handwashing monitoring including at least one social robot positioned near a handwashing station with at least one camera, speaker, microphone, LCD touch screen, and two rotational degrees of freedom. The system includes at least one RFID-enabled badge operatively coupled with at least one beacon, a user identification fusion system, a behavioural analysis engine, IoT sensors at the handwashing station, multiple social robots connected via mesh network, and a dashboard interface. The system processes behavioural data through a behavioural pattern processing unit and transmits nudge signals through a feedback generation system to the at least one social robot.
[0012] In another aspect, the present disclosure provides a method for handwashing monitoring including detecting user presence by motion sensors, activating a beacon to detect RFID badges, identifying users through facial recognition, fusing identification data, detecting handwashing steps through cameras, sensing behavioural data by IoT sensors, generating behavioural patterns, determining nudge timing, generating personalized nudges, communicating data through multi-robot network, and displaying hygiene performance metrics on a dashboard interface.
[0013] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF DRAWINGS
[0014] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. The diagrams are for illustration only and thus do not limit the present disclosure.
[0015] FIG. 1 illustrates an exemplary representation of a multi-robot networked system for handwashing monitoring, in accordance with an embodiment of the present disclosure.
[0016] FIG. 2 illustrates an exemplary schematic representation of social robot components with processing units, in accordance with an embodiment of the present disclosure.
[0017] FIG. 3 illustrates an exemplary representation of data flow architecture for handwashing monitoring system, in accordance with an embodiment of the present disclosure.
[0018] FIG. 4 illustrates an exemplary flow diagram depicting a method for handwashing monitoring using multi-robot networked system, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0019] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
Definitions:
Social Robot: An autonomous robotic system that can engage with users through interactive capabilities including visual, audio, and display interfaces to promote hygiene habits among children using AI-driven behavioural analysis.
Nudge Timing Control Signals: Digital control signals that can specify timing parameters encoded as offset values from current time, content identifiers mapped to feedback templates, intensity levels represented as scalar values within predetermined ranges, and delivery modalities indicated through enumerated types, all packaged in structured message formats suitable for network transmission.
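By way of non-limiting illustration, one possible machine-readable encoding of such a control signal is sketched below in Python; the field names, enumeration values, and JSON wire format are hypothetical choices rather than requirements of the present disclosure.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json

class Modality(Enum):          # delivery modalities indicated through enumerated types
    SPEECH = 1
    SCREEN = 2
    SPEECH_AND_SCREEN = 3

@dataclass
class NudgeTimingControlSignal:
    offset_seconds: float      # timing parameter encoded as offset from current time
    content_id: str            # content identifier mapped to a feedback template
    intensity: float           # scalar intensity within a predetermined range, e.g. 0.0-1.0
    modality: Modality         # output modality selection

    def to_message(self) -> bytes:
        """Package the signal in a structured format suitable for network transmission."""
        payload = asdict(self)
        payload["modality"] = self.modality.name
        return json.dumps(payload).encode("utf-8")

# Example: deliver template "step3_reminder" in 5 s at moderate intensity on both channels.
signal = NudgeTimingControlSignal(5.0, "step3_reminder", 0.6, Modality.SPEECH_AND_SCREEN)
wire_bytes = signal.to_message()
```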
[0020] An embodiment of the present disclosure relates to a multi-robot networked system for handwashing monitoring, the system including at least one social robot positioned near a handwashing station, where the at least one social robot includes at least one camera, at least one speaker, at least one microphone, at least one LCD touch screen, and two rotational degrees of freedom, where the at least one camera is operatively coupled with an image processing unit, and where the at least one camera is operatively coupled with a facial recognition module. The system can include at least one RFID-enabled badge operatively coupled with at least one beacon positioned at the handwashing station, where the at least one beacon is communicatively coupled with the at least one social robot. The system can include a user identification fusion system operatively coupled with the facial recognition module and the at least one beacon, where the user identification fusion system is operatively coupled with a behavioural analysis engine.
[0021] Various embodiments of the present disclosure are described using FIGs. 1 to 4.
[0022] FIG. 1 illustrates an exemplary representation of a multi-robot networked system for handwashing monitoring, in accordance with an embodiment of the present disclosure.
[0023] Referring to FIG. 1, a multi-robot networked system (100) for handwashing monitoring is disclosed; the system (100) can include a handwashing station (102), at least one social robot (104), IoT sensor integration (106), at least one beacon (108), processing systems (110), and a multi-robot communication network (124). The system (100) can establish a handwashing monitoring architecture by implementing hierarchical data flow from distributed sensors and robots through a mesh network (126) to a data processing unit (128), which can enable seamless integration of behaviour detection and feedback generation functionalities across multiple handwashing stations. The data processing unit (128) can process behavioural data received from the IoT sensor integration (106) and the at least one camera (202) through the behavioural pattern processing unit (120) and can transmit processed nudge signals through the feedback generation system (122) to the at least one social robot (104).
[0024] In an embodiment, handwashing station (102) can function as a monitored hygiene zone by providing physical infrastructure where children perform handwashing activities under observation of social robot (104) and IoT sensor integration (106). Handwashing station can operate by accommodating soap dispensers with integrated sensors, water faucets with flow monitoring capabilities, and mounting positions for sensors that may capture comprehensive behavioural data. The handwashing station can establish operative coupling between components by providing electrical power distribution through voltage lines ranging from low voltage DC suitable for sensors to standard AC for robotic systems, implementing communication buses utilizing protocols such as I2C, SPI, or UART for sensor data transmission at frequencies appropriate for real-time monitoring, and maintaining mechanical mounting points with vibration dampening materials reducing mechanical noise by factors suitable for stable sensor operation.
[0025] In an embodiment, social robot (104) positioned near handwashing station (102) can operate as an interactive hygiene companion by engaging children through multimodal interfaces including visual displays, audio feedback, and gesture recognition capabilities. Social robot can function by monitoring handwashing activities through the at least one camera (202) that is operatively coupled to the image processing unit (112) through high-speed interfaces carrying video data at frame rates suitable for real-time processing, where the operative coupling is established through dedicated hardware connections implementing video transmission protocols that maintain synchronization between frame capture and processing. Robot can enhance hygiene education by delivering personalized interactions based on behavioural patterns processed in machine-readable form as structured data objects containing user identifiers, temporal data, and compliance metrics.
[0026] In an embodiment, IoT sensor integration (106) including at least one motion sensor (106-2), at least one soap dispenser sensor (106-4), and at least one water flow sensor (106-6) positioned at the handwashing station (102) can operate as distributed monitoring components. The IoT sensor integration (106) can be communicatively coupled with the behavioural analysis engine (118) through wireless protocols operating at frequency bands allocated for IoT applications. Motion sensor (106-2) can detect user presence by measuring thermal energy variations. Soap dispenser sensor (106-4) can operate by implementing proximity sensing that detects hand presence within predetermined ranges. Water flow sensor (106-6) can function by utilizing flow measurement techniques converting physical flow into electrical signals calibrated to volume measurements.
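By way of non-limiting illustration, the conversion of water flow sensor (106-6) pulse output into volume measurements may be performed as in the following sketch, where the K-factor is a hypothetical calibration constant for a pulse-output flow sensor:

```python
def pulses_to_litres(pulse_count: int, k_factor: float = 450.0) -> float:
    """Convert accumulated flow-sensor pulses to dispensed volume.
    The K-factor (pulses per litre) is a hypothetical calibration value;
    the physical flow is thereby converted into a volume measurement."""
    return pulse_count / k_factor

# Example: 900 pulses during a wash correspond to about 2.0 litres.
volume_l = pulses_to_litres(900)
```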
[0027] In an embodiment, beacon (108) positioned at handwashing station (102) can operate as a proximity detection device that is communicatively coupled with the at least one social robot (104) through wireless protocols suitable for short-range communication. The communicative coupling can be established by beacon (108) broadcasting advertisement packets at intervals balancing power consumption with detection responsiveness, containing payload data structured with identifier fields, signal calibration parameters, and optional sensor data. When RFID-enabled badge (214) enters detection zones defined by signal propagation patterns, beacon (108) can detect badge presence by measuring received signal strength indicators and comparing against adaptive threshold levels that account for environmental variations. The beacon can transmit RFID detection signals to social robot (104) through established communication channels utilizing packet structures containing badge identification encoded in formats preventing collision, signal quality metrics represented as normalized values, and temporal data with resolution suitable for movement tracking.
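A minimal sketch of the adaptive-threshold presence test described above is shown below; the detection margin, window length, and initial noise-floor value are hypothetical parameters.

```python
from collections import deque

class BadgeDetector:
    """Illustrative sketch: detect badge presence by comparing received
    signal strength (RSSI) against a threshold adapted to the ambient
    noise floor, accounting for environmental variations."""

    def __init__(self, margin_db: float = 12.0, window: int = 50):
        self.margin_db = margin_db
        self.background = deque([-90.0], maxlen=window)  # recent background RSSI samples

    def record_background(self, rssi_dbm: float) -> None:
        """Log readings taken when no badge is known to be present."""
        self.background.append(rssi_dbm)

    def is_present(self, rssi_dbm: float) -> bool:
        """Badge is in the detection zone when RSSI exceeds the adaptive threshold."""
        noise_floor = sum(self.background) / len(self.background)
        return rssi_dbm > noise_floor + self.margin_db
```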
[0028] FIG. 2 illustrates an exemplary schematic representation of social robot components with processing units, in accordance with an embodiment of the present disclosure.
[0029] Referring to FIG. 2, an exemplary schematic representation (200) is disclosed, where social robot (104) components can operate collectively as an integrated interaction platform where the at least one camera (202) is operatively coupled with both the image processing unit (112) and facial recognition module (114) through dedicated hardware interfaces. The operative coupling of camera (202) with image processing unit (112) can be implemented through camera interface protocols including but not limited to MIPI CSI, USB Video Class, or parallel camera interfaces, transmitting video data over physical connections optimized for high-bandwidth transfer while maintaining signal integrity through differential signaling or appropriate shielding. Camera (202) can simultaneously provide video streams to facial recognition module (114) through memory sharing mechanisms implementing zero-copy transfers where frame buffers are mapped to multiple address spaces, enabling concurrent access without data duplication overhead. The two rotational degrees of freedom (210) can be mechanically coupled with the at least one camera (202) through motor assemblies implementing gear reduction ratios providing torque multiplication suitable for smooth camera movement, with pan rotation covering angular ranges sufficient for room-wide tracking and tilt rotation enabling vertical coverage accommodating height variations of different users.
[0030] In an embodiment, camera (202) integrated within social robot (104) can operate as a dual-function visual sensor by implementing image sensor technology with resolution suitable for facial recognition and activity monitoring. The camera can be operatively coupled with the image processing unit (112) by transmitting image data through interfaces utilizing synchronization signals and pixel clock coordination. The image processing unit (112) can process video frames by implementing convolutional operations through neural network layers detecting progressively complex features. For facial recognition module (114) coupling, camera (202) can provide preprocessed face regions extracted through detection modules operating within latency budgets enabling responsive interaction.
[0031] In an embodiment, speaker (204) operatively coupled with the at least one social robot (104) can operate by receiving digital audio signals from feedback generation system (122) through audio interfaces maintaining signal integrity. Speaker can function by implementing amplification suitable for speech reproduction within frequency ranges optimized for intelligibility. The operative coupling can process audio feedback by converting nudge timing control signals into synthesized speech through text-to-speech engines, with output modulated based on ambient noise levels detected by microphone (206).
[0032] In an embodiment, microphone (206) can operate by capturing audio input through acoustic transduction suitable for speech capture. Microphone arrays can implement beamforming modules creating directional pickup patterns focused on user zones. The coupling can enable voice activity detection by analyzing signal energy in speech-characteristic frequency bands, utilizing decision thresholds adapted to ambient conditions.
[0033] In an embodiment, LCD touch screen (208) operatively coupled with the at least one social robot (104) can operate by receiving display data from feedback generation system (122) through display interfaces. Touch screen can function by implementing capacitive sensing detecting finger contacts with resolution suitable for child interaction. Visual feedback can be rendered through graphics processing achieving frame rates providing smooth animation perception.
[0034] In an embodiment, two rotational degrees of freedom (210) mechanically coupled with the at least one camera (202) can operate through motor control systems where position feedback from encoders is processed by control modules maintaining accurate positioning despite mechanical variations. The mechanical coupling can be achieved through kinematic linkages designed to minimize backlash while providing ranges of motion suitable for tracking users of varying heights, implementing bearing systems reducing friction to levels enabling smooth motion at low velocities. Pan axis motor can provide horizontal rotation through drive mechanisms where motor torque is transmitted through gear trains or belt drives with reduction ratios balancing speed and torque requirements, implementing position sensing through encoders providing resolution suitable for precise pointing. Tilt axis motor can enable vertical rotation through similar mechanisms adapted for gravitational loading variations throughout the range of motion, implementing counterbalancing techniques reducing motor load requirements. The coupling can maintain camera stability by implementing control modules where position errors are processed through compensators designed using control theory principles, with gains tuned to achieve response characteristics balancing speed with stability, preventing oscillations while maintaining tracking accuracy suitable for video capture.
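By way of non-limiting illustration, the position-error compensation loop for one axis may take the form of the following proportional-integral-derivative (PID) sketch, with hypothetical gains and velocity limits standing in for values tuned to balance response speed with stability:

```python
class AxisController:
    """Illustrative PID position loop for one rotational axis (pan or tilt).
    Gains and limits are hypothetical and would be tuned per the description
    above to prevent oscillation while maintaining tracking accuracy."""

    def __init__(self, kp: float = 4.0, ki: float = 0.5, kd: float = 0.2,
                 max_vel_deg_s: float = 60.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.max_vel = max_vel_deg_s
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_deg: float, encoder_deg: float, dt: float) -> float:
        """Return a velocity command computed from the encoder position error."""
        error = target_deg - encoder_deg
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        command = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Saturate the command to keep motion smooth and within motor limits.
        return max(-self.max_vel, min(self.max_vel, command))
```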
[0035] In an embodiment, processing systems (110) can operate as computational infrastructure where the image processing unit (112) is operatively coupled with the at least one camera (202) through hardware acceleration interfaces enabling efficient neural network execution. The coupling implements data paths where video frames are transferred to neural processing units through direct memory access reducing CPU overhead, with memory bandwidth allocation ensuring sustained data rates suitable for real-time inference. The user identification fusion system (116) can be operatively coupled with the facial recognition module (114) through shared memory architectures where facial feature vectors are written to memory regions accessible by fusion modules, implementing synchronization mechanisms preventing data races while maintaining low latency access. The fusion system can also be operatively coupled with the at least one beacon (108) through protocol stacks implementing wireless communication layers from physical radio control through application messaging, with quality of service parameters ensuring reliable delivery of identification data. The behavioural analysis engine (118) can be operatively coupled with the user identification fusion system (116) through event-driven architectures where identification confirmations trigger behavioural tracking processes, utilizing message passing frameworks providing decoupling between components while maintaining temporal relationships necessary for accurate behavioural attribution.
[0036] In an embodiment, image processing unit (112) operatively coupled with the at least one camera (202) can operate by implementing neural network architectures specialized for temporal action recognition where three-dimensional convolutions process spatial and temporal features simultaneously. The image processing unit can detect WHO-prescribed handwashing steps by analyzing video sequences through hierarchical processing where low-level features detect motion patterns, mid-level features recognize hand configurations, and high-level features identify complete actions. The unit can identify palm-to-palm rubbing by extracting optical flow fields between consecutive frames using gradient-based methods or neural flow estimation, analyzing flow vector magnitudes and directions to detect oscillatory patterns characteristic of rubbing motions, with frequency analysis performed through short-time Fourier transforms identifying periodic components within ranges typical of manual rubbing actions. Interlacing fingers can be detected by first localizing hand regions through detection networks, then applying pose estimation models trained to identify joint locations despite occlusions common in interlaced configurations, analyzing relative positions of detected joints to determine finger intersection patterns. Back-of-hands cleaning can be recognized by tracking hand orientation changes through rotation matrices estimated from landmark correspondences, detecting characteristic flipping motions combined with rubbing patterns. The image processing unit can generate handwashing step data as structured arrays where each element contains step identifiers mapped to WHO guidelines, confidence scores computed from neural network softmax outputs, duration measurements accumulated from frame counts, and spatial bounding boxes in normalized coordinates, all formatted in schema-compliant structures enabling downstream processing.
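As a non-limiting illustration of the frequency analysis used to detect palm-to-palm rubbing, the following sketch applies a single-window Fourier transform (a simplification of the short-time analysis described above) to per-frame mean optical-flow magnitudes; the frequency band and power-ratio threshold are hypothetical parameters:

```python
import numpy as np

def detect_rubbing(flow_magnitudes: np.ndarray, fps: float,
                   band_hz: tuple = (1.0, 5.0), power_ratio: float = 0.4) -> bool:
    """Decide whether a sequence of per-frame mean optical-flow magnitudes
    contains a periodic component within a band typical of manual rubbing."""
    signal = flow_magnitudes - flow_magnitudes.mean()    # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    total = spectrum[1:].sum()                           # ignore the DC bin
    # Oscillatory rubbing: a dominant share of power in the rubbing band.
    return total > 0 and spectrum[in_band].sum() / total > power_ratio
```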
[0037] In an embodiment, facial recognition module (114) operatively coupled with the at least one camera (202) can operate by implementing deep learning architectures where convolutional layers extract hierarchical facial features invariant to common variations in pose, lighting, and expression. Module can process facial images through pipelines where face detection identifies regions containing faces by analyzing image pyramids with learned detectors, face alignment normalizes detected faces to canonical poses through affine transformations guided by facial landmark detection, and feature extraction processes aligned faces through deep networks generating compact representations. The operative coupling with user identification fusion system (116) can transmit facial embeddings through inter-process communication mechanisms where serialized feature vectors are transmitted with metadata including confidence scores and quality metrics, implementing protocols ensuring reliable delivery while maintaining temporal constraints for real-time interaction. Module can maintain privacy by implementing privacy-preserving techniques where facial embeddings are generated through one-way transformations preventing reconstruction of original images, with storage implementing encryption appropriate for biometric data protection and access controls limiting data availability to authorized processes.
[0038] In an embodiment, user identification fusion system (116) operatively coupled with the facial recognition module (114) and the at least one beacon (108) can operate by implementing probabilistic fusion frameworks where multiple evidence sources are combined using Bayesian inference principles. The system can process identification inputs by maintaining probability distributions over possible user identities, updating beliefs as new evidence arrives through recursive Bayesian filtering. The fusion process can resolve conflicts between identification sources by implementing weighted voting schemes where weights are determined through maximum likelihood estimation on historical accuracy data, with adaptive mechanisms adjusting weights based on environmental conditions affecting sensor reliability. When facial recognition indicates user A with confidence C_face and RFID indicates user B with confidence C_rfid, the system can compute joint probabilities considering temporal correlations, spatial constraints, and prior probabilities of user presence. The fusion module can handle missing data by maintaining multiple hypotheses with probabilities updated as evidence accumulates, implementing pruning strategies removing low-probability hypotheses to maintain computational efficiency. The system can generate user confirmation signals encoding fused identity estimates with uncertainty quantification, enabling downstream systems to make decisions considering identification confidence.
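By way of non-limiting illustration, one simple form of the weighted fusion described above is log-linear pooling of the two identity distributions, sketched below with hypothetical reliability weights standing in for values learned from historical accuracy data:

```python
def fuse_identities(face_posterior: dict, rfid_posterior: dict,
                    w_face: float = 0.6, w_rfid: float = 0.4) -> dict:
    """Illustrative weighted fusion of two identity distributions
    (user -> probability) via log-linear pooling."""
    users = set(face_posterior) | set(rfid_posterior)
    eps = 1e-6  # floor so missing evidence does not zero out a hypothesis
    joint = {u: (face_posterior.get(u, eps) ** w_face) *
                (rfid_posterior.get(u, eps) ** w_rfid) for u in users}
    z = sum(joint.values())
    return {u: p / z for u, p in joint.items()}  # normalized fused estimate

# Example: facial recognition favours user A while RFID favours user B;
# the fused distribution quantifies the remaining identification uncertainty.
fused = fuse_identities({"A": 0.8, "B": 0.2}, {"A": 0.1, "B": 0.9})
```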
[0039] In an embodiment, behavioural analysis engine (118) operatively coupled with the user identification fusion system (116) can operate by implementing stream processing architectures where behavioural events from multiple sources are processed in real-time to extract patterns and anomalies. The engine can correlate multi-source data streams by implementing temporal alignment modules where events are buffered in time-ordered queues, with interpolation methods estimating values between discrete samples to create synchronized representations. The correlation process can identify relationships between sensor events by computing cross-correlations, mutual information, or learned association models discovering statistical dependencies. Behavioural patterns can be extracted through sliding window analysis where fixed-duration windows advance through event streams, with feature extraction modules computing statistics including event frequencies, durations, sequences, and intervals. The engine can identify specific behavioural patterns by implementing pattern matching modules where observed sequences are compared against pattern templates using similarity metrics accounting for temporal variations. Complex patterns such as "incomplete handwashing" can be detected through finite state machines or temporal logic evaluators processing event sequences against formal specifications. The generated behavioural patterns in machine-readable form can include structured representations where patterns are encoded as feature vectors with defined semantics, temporal annotations indicating pattern occurrence times, and metadata describing pattern detection confidence and contributing evidence.
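A minimal sketch of the finite state machine approach to detecting an "incomplete handwashing" pattern is shown below; the step labels and the strict-order semantics are illustrative assumptions:

```python
# Expected WHO-style step sequence, in prescribed order (simplified labels).
EXPECTED_STEPS = ["wet", "soap", "palm_to_palm", "interlaced", "back_of_hands", "rinse"]

def classify_session(observed_events: list) -> dict:
    """Walk observed events against the expected order; any step not reached
    in sequence is reported as missing. Under this strict FSM, steps performed
    out of the prescribed order are also treated as missed."""
    state = 0  # index of the next expected step
    for event in observed_events:
        if state < len(EXPECTED_STEPS) and event == EXPECTED_STEPS[state]:
            state += 1
    missing = EXPECTED_STEPS[state:]
    return {"pattern": "incomplete_handwashing" if missing else "complete",
            "missing_steps": missing}

# Example: skipping interlaced-finger and back-of-hands washing also causes
# the out-of-order rinse to be flagged under the strict-order semantics.
result = classify_session(["wet", "soap", "palm_to_palm", "rinse"])
```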
[0040] In an embodiment, behavioural pattern processing unit (120) implemented in data processing unit (128) and operatively coupled with the behavioural analysis engine (118) can operate by implementing reinforcement learning modules that learn optimal intervention policies through interaction with user responses. The unit can process behavioural patterns by maintaining state representations encoding relevant behavioural history, with state construction modules selecting and transforming raw features into compact representations suitable for policy learning. The reinforcement learning process can determine optimal nudge timing by modeling the problem as a Markov Decision Process where states represent behavioural contexts, actions represent nudging decisions, and rewards encode desired outcomes. The learning module can estimate action values through temporal difference methods where value functions are updated based on observed rewards and estimated future values, with function approximation using neural networks enabling generalization across similar states. The unit can balance exploration of new strategies with exploitation of learned knowledge through modules implementing adaptive exploration rates or upper confidence bounds. Nudge timing control signals can be generated by evaluating learned policies on current behavioural states, with decision processes considering multiple factors including time since last nudge, behavioural trend directions, and estimated user receptivity. The generated control signals can encode timing specifications as relative or absolute timestamps, nudge content selections from available template libraries, intensity parameters controlling nudge prominence, and delivery channel specifications determining output modalities.
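By way of non-limiting illustration, the temporal difference learning described above may be realized as tabular Q-learning with epsilon-greedy exploration, as sketched below with hypothetical state encodings, actions, and hyperparameters:

```python
import random

class NudgePolicy:
    """Illustrative tabular Q-learning sketch of the nudge-timing decision:
    states are discretised behavioural contexts, actions are {wait, nudge},
    and rewards encode desired compliance outcomes."""

    ACTIONS = ("wait", "nudge")

    def __init__(self, alpha: float = 0.1, gamma: float = 0.9, epsilon: float = 0.1):
        self.q = {}  # (state, action) -> estimated action value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        """Epsilon-greedy: balance exploration of new strategies with
        exploitation of learned knowledge."""
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        """Temporal-difference update toward observed reward plus the
        discounted estimate of future value."""
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
```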
[0041] In an embodiment, feedback generation system (122) operatively coupled with the behavioural pattern processing unit (120) and the at least one social robot (104) can operate by implementing content generation pipelines where abstract nudge specifications are transformed into concrete multimedia content. The system can process nudge timing control signals by parsing structured messages to extract nudge parameters, validating parameters against system capabilities, and scheduling nudge delivery according to timing specifications. Content generation can select appropriate templates by evaluating nudge type identifiers against template metadata, considering factors including user demographics, historical response patterns, and current context. Template instantiation can populate variable fields with user-specific data retrieved from user profiles, current performance metrics calculated from behavioural analysis, and contextual information including time of day and recent activity patterns. The operative coupling with speaker (204) can involve audio generation pipelines where text content is processed by prosody models adding appropriate intonation patterns, speech synthesis generates audio waveforms with voice characteristics appealing to children, and audio post-processing applies effects enhancing clarity. The coupling with LCD touch screen (208) can implement rendering pipelines where vector graphics are rasterized at appropriate resolutions, animations are generated through interpolation between keyframes, and composite scenes are constructed layering multiple visual elements. Personalized nudges (216) can be packaged as synchronized multimedia presentations where audio and visual elements are temporally aligned using synchronization markers, with metadata enabling coordinated playback across output devices.
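A minimal sketch of the template instantiation stage is shown below; the template library, field names, and example content are hypothetical:

```python
from string import Template

# Hypothetical template library keyed by nudge content identifier.
TEMPLATES = {
    "step3_reminder": Template("Great job, $name! Don't forget to rub "
                               "between your fingers for $seconds more seconds."),
    "praise": Template("Well done, $name! You completed all $steps steps!"),
}

def instantiate_nudge(content_id: str, profile: dict, metrics: dict) -> str:
    """Populate variable fields with user-specific profile data and
    current performance metrics, per the content generation pipeline."""
    return TEMPLATES[content_id].safe_substitute(name=profile["name"], **metrics)

# Example: personalize a reminder for a user who skipped interlaced washing.
text = instantiate_nudge("step3_reminder", {"name": "Asha"}, {"seconds": 10})
```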
[0042] In an embodiment, multi-robot communication network (124) including multiple social robots connected via mesh network (126) can operate by implementing self-organizing network protocols where robots dynamically establish and maintain communication paths adapting to changing network conditions. The mesh network formation can begin with robots broadcasting presence announcements containing capability descriptions and network metrics, with neighboring robots establishing peer connections through handshaking protocols negotiating communication parameters. Routing tables can be constructed through distributed modules where robots exchange topology information, with path costs calculated considering factors including hop counts, link quality metrics, and node capabilities. The mesh network (126) can be operatively coupled with the data processing unit (128) through gateway nodes implementing protocol translation between mesh protocols and infrastructure networks, with traffic management ensuring quality of service for different data types. Data aggregation can be implemented through hierarchical clustering where robots are organized into groups with elected cluster heads, implementing modules balancing load distribution with energy efficiency. The aggregation process can combine behavioural data from multiple robots by computing summary statistics at cluster heads, with compression modules reducing redundant information while preserving relevant behavioural patterns. The network can transmit behavioural data (218) using reliable transmission protocols implementing acknowledgments and retransmissions, with adaptive packet sizing optimizing for network conditions. Message serialization can utilize efficient encoding schemes where data structures are converted to byte streams using techniques minimizing overhead while maintaining schema compatibility.
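By way of non-limiting illustration, a cluster head may combine per-robot reports into summary statistics before uplink, as in the following sketch, with hypothetical field names and JSON serialization standing in for the efficient encoding schemes described above:

```python
import json
import statistics

def aggregate_at_cluster_head(robot_reports: list) -> bytes:
    """Illustrative cluster-head aggregation: combine per-robot behavioural
    reports into summary statistics, reducing redundant information while
    preserving relevant behavioural patterns for uplink."""
    durations = [r["wash_duration_s"] for r in robot_reports]  # per-robot mean durations
    sessions = sum(r["session_count"] for r in robot_reports)
    summary = {
        "station_count": len(robot_reports),
        "sessions": sessions,
        "mean_duration_s": round(statistics.mean(durations), 1),
        "compliance_rate": round(
            sum(r["compliant_sessions"] for r in robot_reports) / max(1, sessions), 3),
    }
    return json.dumps(summary).encode("utf-8")  # serialized for mesh transmission

# Example uplink packet combining two robots' reports.
packet = aggregate_at_cluster_head([
    {"session_count": 24, "compliant_sessions": 19, "wash_duration_s": 21.5},
    {"session_count": 30, "compliant_sessions": 26, "wash_duration_s": 24.0},
])
```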
[0043] In an embodiment, data processing unit (128) operatively coupled with the multi-robot communication network (124) can operate as centralized computational infrastructure implementing scalable processing architectures. The unit can implement stream processing frameworks performing transformations including filtering, aggregation, and pattern detection. The behavioural pattern processing unit (120) within data processing unit (128) can execute distributed learning modules updating model parameters based on multi-robot data. Dashboard data (220) can be generated through analytical pipelines aggregating behavioural data across temporal and user-specific dimensions.
[0044] In an embodiment, dashboard interface (130) operatively coupled with the data processing unit (128) can operate by implementing responsive web architectures where client-side applications communicate with backend services through well-defined APIs. The interface can establish operative coupling through authentication mechanisms where users are verified using protocols suitable for educational environments, with session management maintaining secure connections. Data retrieval can implement query optimization where client requests are translated to efficient database queries, with caching layers reducing latency for frequently accessed data. The operative coupling with Learning Management Systems (132) can be established through integration APIs where authentication tokens are exchanged enabling single sign-on, with data synchronization protocols ensuring consistency between systems. Dashboard visualizations can be rendered using frameworks where data is bound to visual elements through declarative specifications, with interactive features implemented through event handlers responding to user inputs. Real-time updates can be delivered through push mechanisms where server-side changes are propagated to connected clients, implementing protocols managing connection state and handling reconnections. The interface can generate reports by processing templates where data is merged with formatting specifications, producing documents in formats suitable for distribution and archival.
[0045] FIG. 3 illustrates an exemplary representation of data flow architecture for handwashing monitoring system, in accordance with an embodiment of the present disclosure.
[0046] Referring to FIG. 3, a data flow architecture (300) is disclosed that can operate through systematic data transformation where each module implements specific processing stages with well-defined interfaces enabling modular system design. The architecture can implement flow control mechanisms where data rates are managed through buffering and backpressure, preventing overwhelming downstream components while maintaining real-time performance requirements. Inter-module communication can utilize asynchronous messaging patterns where modules operate independently, with message queues decoupling producers from consumers enabling scalable processing. Data schemas can define structured formats for information exchange between modules, with versioning mechanisms enabling system evolution while maintaining compatibility.
[0047] In an embodiment, input module (302) can operate by implementing multi-channel data acquisition where heterogeneous sensor inputs are collected through specialized interfaces. Camera data acquisition can implement techniques enabling direct memory access. RFID badge data collection can handle simultaneous transmissions with collision resolution. IoT sensor interfaces can accommodate varying data rates with adaptive sampling. Timestamp synchronization can align distributed data sources achieving temporal alignment suitable for behavioural correlation.
[0048] In an embodiment, processing module (304) including image processing unit (112), activity recognition (306), and user identification (116) can operate through coordinated processing pipelines where outputs from one component feed into others. Image processing can implement optimization techniques including model quantization reducing numerical precision while maintaining accuracy, kernel fusion combining multiple operations, and hardware-specific optimizations leveraging available acceleration. The CNN processing for activity recognition can utilize architectures where spatial and temporal features are extracted through separate pathways then combined, enabling efficient processing of video data. Activity recognition (306) can implement hierarchical models where low-level actions are combined into complex activities, using grammatical models or probabilistic frameworks capturing temporal relationships. The recognition process can handle partial observations where activities are identified from incomplete sequences, maintaining multiple hypotheses until sufficient evidence accumulates. User identification can coordinate outputs from facial recognition and RFID systems, implementing tracking modules maintaining identity associations across time despite temporary occlusions or sensor failures. The module can generate intermediate representations where processed features are stored in formats enabling efficient access by subsequent stages, with indexing structures supporting rapid retrieval based on temporal or identity queries.
[0049] In an embodiment, learning module (308) incorporating behavioural analysis engine (118), reinforcement learning model (120), and feedback generation system (122) can operate by implementing online learning paradigms adapting to observed behaviors. The behavioural analysis engine can detect patterns using temporal analysis techniques. The reinforcement learning model can optimize nudging strategies through policy learning methods. The feedback generation system can create varied feedback messages ensuring linguistic diversity and appropriate emotional tone.
[0050] In an embodiment, output module (310) delivering multimodal feedback can operate through synchronized content delivery ensuring coherent user experience. Visual feedback (312) can adapt graphical complexity to available processing power. Audio output (314) can implement spatial processing and dynamic range adjustment. Gamification (316) elements can track progress through point systems optimizing engagement. Data logging (318) can ensure event durability through sequential storage. Network sync (320) can maintain distributed state consistency. Dashboard (130) updates can implement throttling preventing information overload.
[0051] FIG. 4 illustrates an exemplary flow diagram depicting a method (400) for handwashing monitoring using multi-robot networked system, in accordance with an embodiment of the present disclosure.
[0052] Referring to FIG. 4, a method (400) for monitoring handwashing using multi-robot networked system is disclosed. At block (402), detecting presence of user by at least one motion sensor (106-2) can operate through passive infrared sensors analyzing thermal signatures via Fresnel lens arrays creating multiple detection zones. The detection module distinguishes human heat patterns from environmental sources by analyzing temperature differentials and object dimensions, generating user detection signals containing confidence scores and movement data transmitted through interrupt-driven mechanisms for low-latency response.
[0053] In an embodiment, at block (404) activating at least one beacon (108) by the at least one motion sensor (106-2) can operate through power management protocols implementing staged wake-up where detection signals trigger transition from low-power to active broadcasting states. Beacon configuration adapts transmission parameters based on user proximity, with multiple beacons coordinated through time-division multiplexing preventing interference.
[0054] In an embodiment, at block (406) detecting at least one RFID-enabled badge (214) by the at least one beacon (108) can operate using anti-collision modules such as ALOHA variants resolving simultaneous badge responses. The detection implements adaptive power control adjusting interrogation signals based on environmental conditions, extracting badge identification and signal quality data while maintaining rapid response times.
[0055] In an embodiment, at block (408) identifying user by at least one camera (202) triggered by RFID detection signal can operate through coordinated activation where RFID proximity data enables camera pre-positioning toward expected user locations. Face detection implements cascaded approaches eliminating non-face regions before applying recognition models, handling occlusions and pose variations through robust feature extraction comparing against enrolled templates.
[0056] In an embodiment, at block (410) fusing facial recognition data with RFID badge data by user identification fusion system (116) can operate through probabilistic frameworks implementing Bayesian updating combining evidence from both sources. The fusion handles conflicting evidence through weighted voting based on sensor reliability, generating user confirmation signals with identity estimates and uncertainty quantification.
[0057] In an embodiment, at block (412) detecting handwashing steps by the at least one camera (202) through image processing unit (112) can operate by decomposing sequences into atomic actions using specialized recognition models. Palm-to-palm rubbing detection analyzes optical flow for oscillatory patterns within expected frequency ranges. Interlaced finger washing recognition uses keypoint analysis determining relative finger positions. The system maintains action histories identifying missed steps with confidence scores based on observation quality.
[0058] In an embodiment, at block (414) sensing handwashing behavioural data by IoT sensors (106) can operate through multi-sensor correlation where soap dispenser sensors detect quantities through pressure measurements, water flow sensors measure rates through high-resolution sampling, and motion sensors track spatial coverage patterns. Edge analytics processes initial data reducing volumes while preserving behavioural information.
[0059] In an embodiment, at block (416) generating behavioural patterns by behavioural analysis engine (118) can operate through multi-resolution temporal analysis using sliding windows for short-term patterns within sessions, aggregation for medium-term trends across sessions, and long-term habit discovery. Pattern extraction utilizes unsupervised learning with anomaly detection identifying unusual behaviors. Generated patterns in machine-readable form include structured feature vectors with temporal annotations and confidence measures.
[0060] In an embodiment, at block (418) determining nudge timing control signals by behavioural pattern processing unit (120) can operate through reinforcement learning modeling nudging as sequential decisions affecting future behavioural states. State representation encodes behavioural history with multi-objective optimization balancing immediate compliance and sustained habit formation. Control signals specify timing parameters, nudge categories, and contingency plans for adaptive interaction.
[0061] In an embodiment, at block (420) generating personalized nudges (216) by feedback generation system (122) can operate through template-based generation where contextual selection determines appropriate message types. Natural language generation creates variations through paraphrasing maintaining clarity. Multimodal content synchronizes verbal messages with visual animations, implementing affect modeling for emotional appropriateness and cultural adaptation based on user backgrounds.
[0062] In an embodiment, at block (422) communicating behavioural data (218) by multi-robot communication network (124) to data processing unit (128) through mesh network (126) can operate using resilient protocols with distributed aggregation reducing overhead. Hierarchical aggregation preserves recent detail while summarizing historical data. The mesh network self-heals discovering alternative paths when links fail, with load balancing distributing traffic efficiently.
[0063] In an embodiment, at block (424) displaying processed behavioural data and hygiene performance metrics by dashboard interface (130) can operate through visualization systems implementing information hierarchy with critical insights prominent. Real-time updates use differential rendering for smooth interaction. The interface generates natural language summaries explaining patterns, with anomaly detection triggering intelligent alert routing to appropriate personnel.
[0064] Thus, the proposed disclosure provides the system (100) and the method (400) for multi-robot networked handwashing monitoring that combines social robots and IoT sensors to deliver personalized hygiene interventions. The system achieves real-time behavioural tracking through user identification fusion and generates adaptive feedback based on individual handwashing patterns, enabling sustained habit formation in educational environments.
[0065] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0066] The present disclosure provides a multi-robot networked system for handwashing monitoring that eliminates conventional static hygiene education limitations through real-time behavioural analysis and personalized feedback, enabling adaptive interventions based on individual handwashing patterns while promoting sustained habit formation through engaging social robot interactions in educational environments.
[0067] The present disclosure provides a system that implements multi-modal user identification through fusion of facial recognition and RFID detection, ensuring accurate behavioural tracking while enabling collaborative hygiene education through mesh-networked robots that coordinate group activities and peer-based learning mechanisms across distributed handwashing stations.
[0068] The present disclosure provides a system that enables comprehensive behavioural monitoring through integration of IoT sensors with robotic platforms, supporting scalable deployment across educational facilities while optimizing intervention effectiveness through reinforcement learning modules that adapt nudging strategies based on observed user responses and behavioural improvements.
Claims:
1. A multi-robot networked system (100) for handwashing monitoring, the system (100) comprising:
at least one social robot (104) positioned near a handwashing station (102), wherein the at least one social robot (104) comprises at least one camera (202), at least one speaker (204), at least one microphone (206), at least one LCD touch screen (208), and two rotational degrees of freedom (210), wherein the at least one camera (202) is operatively coupled with an image processing unit (112), and wherein the at least one camera (202) is operatively coupled with a facial recognition module (114);
at least one RFID-enabled badge (214) operatively coupled with at least one beacon (108) positioned at the handwashing station (102), wherein the at least one beacon (108) is communicatively coupled with the at least one social robot (104);
a user identification fusion system (116) operatively coupled with the facial recognition module (114) and the at least one beacon (108), wherein the user identification fusion system (116) is operatively coupled with a behavioural analysis engine (118);
a behavioural pattern processing unit (120) implemented in a data processing unit (128) and operatively coupled with the behavioural analysis engine (118), wherein the behavioural pattern processing unit (120) is operatively coupled with a feedback generation system (122), and wherein the feedback generation system (122) is operatively coupled with the at least one social robot (104) through the at least one speaker (204) and the at least one LCD touch screen (208);
IoT sensor integration (106) comprising at least one motion sensor (106-2), at least one soap dispenser sensor (106-4), and at least one water flow sensor (106-6) positioned at the handwashing station (102), wherein the IoT sensor integration (106) is communicatively coupled with the behavioural analysis engine (118);
a multi-robot communication network (124) comprising multiple social robots connected via mesh network (126), wherein the multi-robot communication network (124) is operatively coupled with the data processing unit (128), and wherein each social robot of the multiple social robots is communicatively coupled with the data processing unit (128) through the mesh network (126); and
a dashboard interface (130) operatively coupled with the data processing unit (128), wherein the system processes behavioural data received from the IoT sensor integration (106) and the at least one camera (202) through the behavioural pattern processing unit (120) and transmits processed nudge signals through the feedback generation system (122) to the at least one social robot (104).
2. The system (100) as claimed in claim 1, wherein the at least one camera (202) comprises dual cameras comprising a first camera operatively coupled with the image processing unit (112) and a second camera operatively coupled with the facial recognition module (114).
3. The system (100) as claimed in claim 1, wherein the image processing unit (112) comprises an attention mechanism module for real-time video frame analysis.
4. The system (100) as claimed in claim 1, wherein the at least one social robot (104) further comprises two rotational degrees of freedom (210) mechanically coupled with the at least one camera (202).
5. The system (100) as claimed in claim 1, wherein the multi-robot communication network (124) utilizes Wi-Fi or LoRa protocols for mesh network connectivity (126) between the multiple social robots.
6. The system (100) as claimed in claim 1, wherein the at least one beacon (108) utilizes Bluetooth Low Energy or RF signals and is operatively coupled with the at least one RFID-enabled badge (214) when positioned within proximity of the handwashing station (102).
7. The system (100) as claimed in claim 1, wherein the IoT sensor integration (106) further comprises temperature sensors and proximity sensors operatively coupled with the data processing unit (128).
8. The system (100) as claimed in claim 1, wherein the dashboard interface (130) comprises a teacher-facing interface operatively coupled with Learning Management Systems (132) and timetable APIs.
9. The system (100) as claimed in claim 1, wherein the behavioural analysis engine (118) generates anonymized interaction patterns stored in machine-readable form and transmitted to the behavioural pattern processing unit (120).
10. A method (400) for monitoring handwashing using a multi-robot networked system, the method comprising:
detecting (402), by at least one motion sensor (106-2) positioned at a handwashing station (102), presence of a user approaching the handwashing station (102) and generating a user detection signal;
activating (404), by the at least one motion sensor (106-2) upon detecting the user presence, at least one beacon (108) positioned at the handwashing station (102) by transmitting the user detection signal;
detecting (406), by the at least one beacon (108) upon activation, at least one RFID-enabled badge (214) when the user approaches within proximity and generating an RFID detection signal;
identifying (408), by at least one camera (202) triggered by the RFID detection signal and operatively coupled with a facial recognition module (114), the user and generating facial recognition data;
fusing (410), by a user identification fusion system (116), the facial recognition data from the facial recognition module (114) with RFID badge data from the at least one beacon (108) and generating a user confirmation signal;
detecting (412), by the at least one camera (202) upon receiving the user confirmation signal and through an image processing unit (112), handwashing steps and generating handwashing step data;
sensing (414), by IoT sensors (106) comprising a soap dispenser sensor (106-4) and a water flow sensor (106-6), handwashing behavioural data during the handwashing process;
generating (416), by a behavioural analysis engine (118) upon receiving the handwashing step data from the image processing unit (112) and sensed data from the IoT sensors (106), behavioural patterns and transmitting the behavioural patterns to a behavioural pattern processing unit (120) implemented in a data processing unit (128);
determining (418), by the behavioural pattern processing unit (120) upon receiving the behavioural patterns in machine-readable form, nudge timing control signals and transmitting the nudge timing control signals to a feedback generation system (122);
generating (420), by the feedback generation system (122) upon receiving the nudge timing control signals, personalized nudges (216) and transmitting the personalized nudges to the at least one social robot (104) through at least one speaker (204) and at least one LCD touch screen (208);
communicating (422), by a multi-robot communication network (124), the behavioural data (218) to the data processing unit (128) through a mesh network (126) connecting multiple social robots; and
displaying (424), by a dashboard interface (130) operatively coupled with the data processing unit (128), processed behavioural data and hygiene performance metrics.