Abstract: A system and method for recommending corrective action to an individual for personalized well-being based on analyzed breathing patterns is disclosed. The system comprises a head mounted device to identify and record facial features, a wrist wearable embodied with an array of non-invasive sensors positioned to read and measure subtle pulse sensations, and a wearable neck band to measure breathing signals. The facial features, together with the measured pulse signals and analyzed breathing signals, are fed as input to a machine learning algorithm to predict behavioural traits of the individual from the correlated breathing patterns, facial features and measured pulse signals.
DESC:FIELD OF THE INVENTION
Embodiments of the present invention relate to a system and method for providing healthy living and well-being recommendations to a user based on measuring and monitoring physiological parameters in real time, and more particularly to a system and method capable of measuring the user's facial features, pulse signals, breathing patterns and other physiological parameters for recommending personalized healing measures for the user.
BACKGROUND OF THE INVENTION
Measuring and monitoring a user's physiological parameters is a well-established practice in medical science, where invasive tools and devices are commonly used for analysing body vitals. Even for a general health check-up, an individual is required to book an expert for a medical test and wait until the medical report is received. Collecting samples, conducting test procedures, analysing the results and publishing the report usually takes no less than 12-18 hours, even for basic tests.
Additionally, the inability to measure body vitals and physiological parameters in real time is a major limitation. Knowing certain combinations of physiological traits is significant, as these may provide vital signs of an individual's physical, mental and emotional health at a particular instance. As commonly understood, all external events that happen around an individual are triggers in the form of stimuli to which the individual is expected to react or respond. Choosing a reaction or response is a kind of action that forms a behavior and eventually governs the individual's personality.
Choosing whether to react or respond is thus a significant link in one's behavioural loop that needs to be carefully watched. If ignored, it may even threaten the integrity of the individual. Breathing is one of the most fundamental physiological functions of the human body, reflecting the internal state within the context of the external environment and the events happening around the individual. It is an important physiological marker which can help identify the persisting physical, mental and emotional state of the individual. Breathing, moreover, is a deterministic biomarker for understanding an individual's phenomenological, physiological and emotional state at any given instance. These experiences are tightly bound to bodily sensations such as breathing pace, muscle tension, heartbeat, pulse sensations, facial expressions and the like. Abnormalities in the pattern of breathing can be persistent in the case of chronic diseases, but can also develop extremely quickly in medical emergencies, so being able to diagnose such patterns quickly and reliably is often critical.
Recent research in the field of facial expression recognition has enabled the gathering of valuable data for inferring subtle facial features based on eye tracking, face tracking and the like. However, non-invasive monitoring of breathing patterns in real time, understood as an index of human physical and emotional state, remains largely unexplored.
Agreeably, most real-world events are a motley of mixed and distinct emotions and expressions that are often difficult to elicit in laboratory settings, thereby making the identification of the exact physiological state and the patterns of specific emotions difficult. Breathing is a peripheral rhythm with a special relationship to the mind, which may provide much useful insight into one's mind and emotional state. Mind-body response is a term for the psychophysiological change that occurs due to the interaction between the body and the brain, particularly focusing on the effects body rhythms (like breathing) can have on one's behavior and overall state of mind.
The real-time measurement of breathing patterns may reveal many embodiments of cognitive processes and one's ability to process emotions before their outward expression. Besides being of much relevance for social behavior, the monitoring and moderation of breathing patterns is important for improved performance of one's mental and physical functions. Currently, there are no readily available devices that can non-invasively monitor breathing movements in real time, let alone provide recommendations for modulating such breathing patterns.
The present disclosure sets forth a system and method for measuring breathing patterns via a non-invasive device which, when combined with other physiological parameters recorded from other devices, is able to assist the individual in preparing a response to external events, besides enabling him to achieve overall functional and emotional well-being. This disclosure embodies advantageous alternatives and improvements to existing body humor measuring systems and methods that may address one or more of the challenges or needs mentioned herein, as well as provide other benefits and advantages.
OBJECT OF THE INVENTION
An object of the present invention is to provide a system and method that enables an individual to directly measure his subtle breathing movements and signals for modulating overall responsive behaviour.
Another object of the present invention is to provide a system and method that is designed to monitor a comprehensive set of physiological and behavioral parameters of the user in real time for eventually analysing breathing patterns.
Another object of the present invention is to provide a system and method that provides an individual with overall well-being and self-healing recommendations taking into consideration his hold and control over breathing pattern.
Yet another object of the present invention is to provide a systematic, quick, reliable, and non-invasive system for embedding the idea of well-being in individuals based on understanding and moderating the body elements.
In another object of the present invention, a recommendation for conditioning emotional and behavioural responses by way of monitoring and modulating breathing patterns is provided.
Yet another object of the present invention is to provide a system and method that enables individuals to have conscious control and awareness of breathing pattern to achieve overall well-being in terms of mental activity, wakefulness, respiratory responses or other breathing pattern disorder.
Yet another object of the present invention is to provide an automated device that can determine individual breathing pattern at any time without requiring assistance of any medical practitioner.
Yet another object of the present invention is to provide a system and method that provides instant, clear and unambiguous feedback to user regarding his real time breathing condition along with the corrective action he should be performing in a given situation.
Yet another object of the present invention is to provide a cost-effective device for monitoring breathing pattern and early detection and prevention of breathing pattern associated disorders.
SUMMARY OF THE INVENTION
This section is for the purpose of summarizing some aspects of the present invention and to briefly introduce some preferred embodiments. Simplifications or omissions may be made to avoid obscuring the purpose of the section. Such simplifications or omissions are not intended to limit the scope of the present invention.
In a first aspect of the disclosure, a system for non-invasive monitoring of breathing patterns and the associated emotional state of a user is disclosed. The system comprises a head mounted device configured with a combination of optical and infrared sensors to capture signals pertaining to facial expressions and micro-expressions of the user. The system further comprises a wrist wearable configured with an array of sensors to measure signals pertaining to the user's pulse fluctuations and related parameters. Furthermore, the system comprises a neck wearable that includes a plurality of layers configured with a plurality of biosensors, fluidic microchannels and deformable sensing elements positioned at optimal positions to record signals pertaining to breathing patterns from the user's neck. The system finally includes a computing device configured to receive the plurality of signals, assess the quality of the received signals and process the received signals to obtain a single fused feature vector that is fed as an input to a machine learning model for predicting the user's state.
In a second aspect of the disclosure, a method for non-invasive monitoring of breathing patterns and the associated emotional state of a user is disclosed. Here, the method comprises capturing signals pertaining to facial expressions and micro-expressions of the user via a head mounted device; measuring signals pertaining to the user's pulse fluctuations and related parameters via a wrist wearable; recording signals pertaining to breathing patterns from the user's neck via a neck wearable, wherein the neck wearable comprises a plurality of layers configured with a plurality of biosensors, fluidic microchannels and deformable sensing elements; and receiving the plurality of signals, assessing the quality of the received signals and processing the received signals to obtain a single fused feature vector that is fed as an input to a machine learning model for predicting the user's state.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
These and other features, benefits and advantages of the present invention will become apparent by reference to the following text and figures, with like reference numbers referring to like structures across the views, wherein:
Fig. 1 illustrates a block diagram of the present system, in accordance with an embodiment of the present invention.
Fig. 2 illustrates a detailed diagram of the head mounted device, in accordance with an embodiment of the present invention.
Fig. 3 illustrates a detailed diagram of the neck wearable, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments of the drawing or drawings described, and the drawings are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed; on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles, and the like is included in the specification solely for the purpose of providing a context for the present invention.
It is not suggested or represented that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
In this disclosure, whenever a composition or an element or a group of elements is preceded with the transitional phrase "comprising", it is understood that we also contemplate the same composition, element or group of elements with the transitional phrases "consisting of", "consisting", "selected from the group consisting of", "including", or "is" preceding the recitation of the composition, element or group of elements, and vice versa.
The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
Breathing is a combination of mechanical, physical, and chemical processes, which when closely monitored can be useful in enabling self-healing of an individual or may prove critical when analyzed by an observant expert. It is dynamically modulated by metabolic needs as well as by emotional states. The breathing dynamics provide a vital sign of different behaviors and associated internal state within the individuals. Collectively, breathing patterns are likely to vary drastically across behaviors and potentially serve as a physiological signal that helps to distinguish gross behavior and other associated physiological state.
One exemplary embodiment of present disclosure provides for a non-invasive monitoring of physiological parameters and mental state of target individual. In accordance with one general embodiment of present disclosure, the present system and method provides a device for directly and objectively measuring breathing patterns and other physiological parameters within an individual such that a customized and personalized self-healing recommendation may be provided to the individual based on his innate body humor or prakriti and temperament or gunas prevailing in a given situation.
Accordingly, as shown in Fig. 1, the system 1000 comprises a multi-modal, body-distributed sensing network integrating a head mounted device 100, a wrist wearable/wearable band 300, and a neck wearable 500 worn around the neck region, all wearables comprising one or more non-invasive sensors adapted to sense one or more vital parameters, pulse signals, breathing patterns and other physiological parameters of the individual.
In one primary embodiment, the individual is equipped with a head mounted device (HMD) 100, as shown in Fig. 2, which serves both as a visual interface and as a sensor-rich platform for capturing the upper facial and ocular activity of the individual. In accordance with one example embodiment, the HMD 100 comprises optical sensors 50 and infrared sensors 70, including one or more cameras 52 and an eye tracking unit 54 that provides vital input to determine eye gaze (e.g., direction or orientation of one or both eyes), besides capturing images (still and video) of otherwise unobservable areas of the face, e.g. the peri-eye region, top of head, forehead, eyebrows, eye sockets, temples, upper and lower cheeks, nose, philtrum, lips, mouth, chin boss and jaw line, along with observing muscle contractions, changes in eye shape, or trait changes that produce movement.
The facial information so captured is processed by a computing device 700 of the system 1000 to deduce eye movements along with micro-facial expressions that are determinative in understanding the user's cognitive load, stress levels, attention and emotional state, and even indicators like fatigue, anxiety, or psychophysiological imbalance. In one alternate embodiment, facial EMG (electromyography) signals may optionally be captured via surface electrodes integrated into the frame of the HMD 100 to enhance facial expression resolution.
Further, the system 1000 comprises a wrist wearable 300 that is configured to measure and record dynamic pulse fluctuations that are fed as an input to the computing device 700 of the system 1000. Such a wrist wearable device 300 may comprise a wrist band, patch, strap, bracelet, textile material or a combination thereof. The device 300 may be worn on the individual's wrist or ankle or any other part of the body to measure human body vitals.
The pulse signal and associated vital information measured by the array of sensors 350 is collected and sent to the computing device 700. In one example embodiment, the wrist wearable comprises multi-wavelength photoplethysmography (PPG) sensors 355, electrodermal activity (EDA) sensors 360, and inertial measurement units (IMUs) 365 that monitor pulse waveforms, including pulse rate, pulse rate variability, or pulse transit time and the like, which may be indicative of physiological dysregulation, illness symptoms or even imbalances in the nervous system.
The pulse condition is captured as a time series, and an advanced machine learning algorithm is applied to identify the pulse patterns and extract dynamic features of the underlying system. The different and unique combinations of pulse signals help one understand the status of imbalances, illness or the fluctuating emotional state of the wearer.
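By way of an illustrative, non-limiting sketch, two such dynamic features, mean pulse rate and pulse rate variability, may be derived from the timestamps of detected pulse peaks. The peak times below are hypothetical values chosen only for demonstration:

```python
import numpy as np

def pulse_features(peak_times_s):
    """Extract pulse rate and variability from pulse-peak timestamps (seconds).

    peak_times_s: sorted array of times at which pulse peaks were detected.
    Returns mean pulse rate (beats/min) and pulse rate variability
    (standard deviation of the inter-beat intervals, in ms).
    """
    ibi = np.diff(peak_times_s)          # inter-beat intervals, seconds
    rate_bpm = 60.0 / ibi.mean()         # mean pulse rate
    prv_ms = ibi.std(ddof=1) * 1000.0    # variability as SD of the intervals
    return rate_bpm, prv_ms

# Hypothetical peaks roughly 0.8 s apart (~75 beats/min) with slight jitter
peaks = np.array([0.00, 0.79, 1.61, 2.40, 3.21, 4.00])
rate, prv = pulse_features(peaks)
```

A real pipeline would first detect the peaks from the raw PPG waveform; this sketch starts from already-detected peak times for clarity.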
Next, the complex breathing patterns of the individual are monitored by way of a band worn around the neck region, also referred to as the neck mounted wearable 500. The wearer's motion and the activations of the individual muscle groups forming the breathing patterns are recorded and analysed. Understandably, a single breath consists of as many as four phases: an inhale, an inhale pause, an exhale, and an exhale pause prior to the initiation of the next inhale. The dynamics of these individual breath phases not only impact overall frequency but also influence measurements of the duty cycle and variability of breathing patterns.
The complex variability in breathing patterns is caused in part by the pattern generator in the brain stem, by chemical and mechanical feedback control loops, by network reorganization and network sharing with non-respiratory motor acts, as well as by inputs from cortical and subcortical systems, all of which may contain hidden signals that are to be carefully measured and interpreted. The correct interpretation of these subtle motions provides valuable insight into the emotional and physiological state of the individual. Based on these interpretations, measures for modulating breathing patterns may be recommended for enhanced cognitive performance, and thus a more measured response to dynamically varying ongoing activities.
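The four-phase breath structure described above can be summarised numerically. The following minimal sketch, with illustrative (not measured) phase durations, computes the resulting breathing frequency and inspiratory duty cycle:

```python
def breath_metrics(inhale_s, inhale_pause_s, exhale_s, exhale_pause_s):
    """Compute breathing frequency and inspiratory duty cycle from the
    durations of the four breath phases (all in seconds)."""
    total = inhale_s + inhale_pause_s + exhale_s + exhale_pause_s
    freq_bpm = 60.0 / total        # breaths per minute
    duty_cycle = inhale_s / total  # fraction of the cycle spent inhaling
    return freq_bpm, duty_cycle

# Illustrative 4 s breath cycle: 1.5 s inhale, 0.3 s pause, 1.8 s exhale, 0.4 s pause
f, dc = breath_metrics(1.5, 0.3, 1.8, 0.4)  # -> 15 breaths/min, duty cycle 0.375
```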
Accordingly, as shown in Fig. 3, the wearer wears the flexible, soft-composite neck mounted band 500 designed to encircle the anterior and lateral regions of the cervical neck. The band 500 comprises integrated biosensors 520, haptic feedback modules 540, and deformable sensing elements 560 that provide multimodal tactile feedback to the user in the form of pressure, lateral stretch or vibrations to simulate a physiological muscle. In one example embodiment, the wearable band 500 is made of a soft fabric composite that monitors different muscle groups and enables continuous, non-invasive monitoring of respiratory and vascular activity as the wearer breathes in and out. Following from the above, a movement-based interaction design comprising a deformational sensor 560 is disclosed.
In one alternate embodiment, the outer covering of the neck wearable 500 is a breathable and soft polyester-spandex composite that protects the internal components and ensures flexibility and wearability. Exemplarily, the soft fabric composite has a low-profile cross-section (~2-5 mm), designed to conform to neck curvature.
The neck mounted wearable 500 is configured with deformable sensing elements 560 and biosensors 520, such as pulse sensors, that monitor jugular venous pulse, carotid arterial pulse, intrapleural pressure swings related to respiration, contraction of neck muscles, and swallowing deflection to record breathing patterns. Thus, the thin, contoured, and flexible band intimately interfaces with the skin and establishes a close correspondence therewith for recording breathing movements and patterns.
In one working embodiment, the neck mounted band 500 has a plurality of layers 505, namely an outer layer 505(a), a middle layer 505(b) and an inner layer 505(c). Here, as mentioned above, the outer layer 505(a) can be a soft, breathable textile (e.g. nylon-lycra blend, polyester-spandex composite, neoprene or silicone-infused fabric) that provides comfort, protection and flexibility. Next, the middle layer 505(b) includes an array of sensors comprising deformable sensing elements 560, such as stretchable electrodes 560(a), strain sensors 560(b), and fluidic microchannels 562 that detect strain as a change in electrical resistance. Finally, the innermost layer, which is a skin-contact layer, is provided to enhance adhesion, signal quality and skin coupling.
The middle layer 505(b), comprising stretchable strain sensors, includes soft, carbon nanotube (CNT) based stretchable resistive or capacitive sensors embedded into the fabric using conductive ink or serpentine-pattern metal traces. These detect mechanical strain, deformation, and stretching due to neck muscle expansion, tracheal movement during breathing or swallowing, or any external mechanical pressure due to tilt or tension. In accordance with one preferable embodiment, the deformable sensing elements 560, particularly the strain sensors 560(b), may be positioned at the anterior midline of the neck, precisely at the trachea, thyroid cartilage and cricothyroid area, to detect breathing-related movement (inhalation/exhalation) and subtle neck muscle contractions.
In one advantageous embodiment, the strain sensors 560(b) include vascular and respiratory pulse sensors that are strategically placed over the carotid artery and jugular vein to measure pressure variations. Precisely, optomechanical (e.g., PPG) or piezoelectric sensors are used to monitor the carotid arterial pulse waveform, jugular venous pulse (JVP) morphology and timing, along with respiratory-induced pressure modulations. Specifically, a capacitive or micro-piezoelectric sensor may be positioned at the left lateral neck to measure the low-pressure vascular waveform, while a PPG/piezoelectric or optomechanical transducer may be introduced at the right lateral neck (i.e. the carotid artery path) to measure the arterial pulse waveform.
Furthermore, the deformable sensing elements 560, comprising stretchable electrodes 560(a) and/or other pressure sensors, may be positioned at both lateral sides of the neck, around the jugular vein and carotid artery (near the suprasternal notch), to capture the jugular venous pulse (JVP) and carotid arterial pulse along with pressure wave changes and blood flow dynamics. In one example embodiment, Ecoflex, a highly stretchable silicone, may be used as the substrate for a CNT-based stretchable resistor to create a strain sensor for monitoring breathing. Likewise, capacitive sensors consisting of conductive electrodes separated by a dielectric material will have altered capacitance with changing neck circumference as the user inhales and exhales.
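As a simple illustration of the resistive sensing principle, the common linear gauge-factor model ΔR/R0 = GF·ε may be assumed; the nominal resistance and gauge factor below are hypothetical values for demonstration, not measured properties of any particular CNT sensor:

```python
def strain_to_resistance(r0_ohm, gauge_factor, strain):
    """Resistance of a resistive strain sensor under strain, using the
    linear model dR/R0 = GF * strain (an assumed first-order model)."""
    return r0_ohm * (1.0 + gauge_factor * strain)

# Hypothetical CNT-based sensor: nominal 1 kOhm, gauge factor 2.5,
# 2 % circumferential strain during inhalation
r = strain_to_resistance(1000.0, 2.5, 0.02)  # -> 1050.0 Ohm
```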
Finally, the middle layer 505(b) is configured with a network of microfluidic channels 562 filled with an ionic or conductive fluid (e.g. a NaCl-glycerol mix) in order to detect strain by measuring fluid displacement or resistance change across the deformed channels. This also enables high-resolution detection of intrapleural pressure shifts during inhalation/exhalation along with vascular pulse wave propagation. In one example, the microfluidic channels 562 are formed of PDMS (polydimethylsiloxane) or Ecoflex with a glycerol-based fluid. In one preferable embodiment, the microfluidic channels 562 are coupled with resistive sensors 562(b) positioned at the central lower neck and tracheal line to measure pressure swings due to respiratory effort. The fluidic channels 562, along with the deformable sensors 560(b) under closed-loop control, ensure a high response speed to provide multimodal haptic feedback while detecting user input.
Additionally, a piezo strip or an accelerometer may be provided at the front centre of the neck band to detect laryngeal movement during swallowing. Next, a motion sensor (such as an IMU) may be provided at the back of the neck, specifically around the occipital base and clasp region, to monitor neck movements and tremors.
The thin form factor of the wearable neck band 500 accommodates the strain experienced by the user's skin, where a mechanical strain causes a change in electrical resistance to detect deformation inputs like stretch and pressure for closed-loop control. The multiple fibres within the fabric composite are disposed in modular and reconfigurable compositions for detecting respiratory breathing movements in the user's body.
The above breathing patterns captured from the jugular vein, carotid pulse, strain, vibration and pressure sensors may be afflicted with signal artefacts that affect accuracy and signal fidelity. For example, head or neck movement may shift sensor alignment, and the signal measured from the jugular vein has a very low pressure amplitude, which can easily be drowned out by skin shifts or pressure noise. Likewise, non-breathing-related pressures associated with swallowing, speaking or coughing may look like respiratory deflections, as they produce similar waveforms.
Importantly, the close proximity of sensors around the neck band 500 may induce cross-talk between sensors or electromagnetic interference, leading to spurious signal distortions. Further, even perspiration and humidity may contribute to changes in skin impedance or dielectric properties, besides issues of thermal drift, which may cause resistance changes in strain sensors or PPG baseline shifts. While there can be various approaches to mitigate sensor noise, such as adaptive filtering tailored to the breathing frequency range, mechanical isolation of sensors, closed-loop calibration and even the use of machine classifiers to distinguish between normal breath, cough, swallow, and speech artefacts, the present disclosure makes use of sensor fusion algorithms to combine and compare modalities for cross validation.
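As one illustration of filtering tailored to the breathing frequency range, a crude spectral mask (a simplified stand-in for adaptive filtering, offered only as a sketch) can isolate a typical adult breathing band from a synthetic neck-strain trace contaminated by a pulse-rate artefact:

```python
import numpy as np

def breathing_band(signal, fs, lo=0.1, hi=0.5):
    """Keep only spectral components in an assumed breathing band
    (lo..hi Hz) via an FFT mask; a crude stand-in for the adaptive
    filtering mentioned in the text, not a production filter."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0        # zero out-of-band bins
    return np.fft.irfft(spec, n=len(signal))

# Synthetic trace: 0.25 Hz breathing plus a 1.2 Hz pulse artefact, 50 Hz sampling
fs = 50.0
t = np.arange(0, 20, 1.0 / fs)
raw = np.sin(2 * np.pi * 0.25 * t) + 0.4 * np.sin(2 * np.pi * 1.2 * t)
clean = breathing_band(raw, fs)                    # artefact removed
```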
Thence, coupling the readings from the neck mounted wearable 500 with the HMD 100 and the wrist wearable 300 enhances the precision and robustness of breathing pattern monitoring, besides enabling multimodal physiological interpretation, which is far more valuable than any single-source monitoring. The attempt is to combine dynamic weight assignment with multimodal sensor fusion to monitor breathing patterns, emotional states, and physiological imbalances. To begin with, real-time multimodal estimation of respiratory and emotional states is achieved by combining signals from the neck wearable 500, the HMD 100 and the wrist wearable 300. Table 1 below lists example sensor sources, signal types, and the sampling rates at which each signal is fetched.
Sensor Source  | Signal Types                                              | Sampling Rate
Neck Band      | Strain (respiration), Pulse (JVP/CAP), Pressure (IP), IMU | 50-200 Hz
HMD            | Facial EMG, Blink rate, Micro-expressions                 | 30-100 Hz
Wrist Wearable | PPG, GSR, Skin Temp                                       | 25-100 Hz
Table 1
These sensor signals and movements are received by a computing device 700 that is operable to generate a breathing pattern and breathing depth to provide therapy or guidance for the individual's functional improvement. The present system 1000 comprises a combination of wearable devices, such as the head mounted device 100, the wrist wearable 300 and the wearable neck band 500, that provide user-friendly and easily understandable interfaces to assist individuals in identifying, regulating, and modifying their emotional state.
As mentioned previously, breathing patterns vary from regular (periodic or stationary) to very irregular patterns, which makes their analysis and prediction difficult. In one significant embodiment, the machine learning based computing device 700 is operable for objectively identifying nuanced differences in breathing patterns across emotionally salient behavioural contexts. The bio-signal (of breathing) produced by electrical, chemical, or mechanical activity in individual’s body is captured. The collected signals are filtered using multiple filters, and features are extracted. In one working embodiment, machine learning based computing device is used for automatic pattern detection during breathing, comprising steps of a) feature extraction; b) classification selection; c) application of machine learning algorithm; d) interpretation and comprehension of output (as will be discussed later).
Thence, each sensor signal xi is paired with a Signal Quality Indicator (SQI) in real time, where xi represents the data being collected from the ith sensor. The SQI is a value that indicates how reliable or trustworthy the current sensor signal is. Poor signal quality could be due to noise, sensor malfunction, movement artefacts, etc.
SQIi = QualityFunction(xi)
The Quality Function is an algorithm or mathematical function programmed to analyse the sensor signal and produce a number (often between 0 and 1, or a percentage) that describes the signal's quality. For instance, it may check if the signal is very noisy, if the expected patterns are present, or if the signal drops out. Thus, it helps to understand which signals are trustworthy at the moment, so that only high-quality data is used for decision-making, analytics, or storage.
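A toy Quality Function of this kind might score a signal by the fraction of its spectral power that falls inside the physiologically expected band; this is only one assumed formulation (a real SQI would also check dropouts, saturation and artefacts), with hypothetical band limits:

```python
import numpy as np

def quality_function(x, fs, band=(0.1, 0.5)):
    """Toy Signal Quality Indicator in [0, 1]: fraction of the signal's
    spectral power inside an assumed physiologically expected band."""
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2     # power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    in_band = spec[(freqs >= band[0]) & (freqs <= band[1])].sum()
    total = spec.sum()
    return float(in_band / total) if total > 0 else 0.0

fs = 50.0
t = np.arange(0, 20, 1.0 / fs)
clean = np.sin(2 * np.pi * 0.25 * t)   # well-behaved in-band breathing signal
noisy = clean + 3.0 * np.random.default_rng(0).standard_normal(t.size)
sqi_clean = quality_function(clean, fs)   # near 1.0
sqi_noisy = quality_function(noisy, fs)   # lower, as noise spreads out of band
```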
Each signal then receives a dynamic weight wi based on both its signal quality and its task relevance.
Let:
• SQIi: current signal quality
• bi: fixed prior weight based on physiological relevance
• n: total number of sensor signals used
Final weight:
wi = (SQIi × bi) / Σ j=1…n (SQIj × bj)
For each signal xi, instead of using the raw signal, one or more features are extracted and at each time point t, all signals’ feature vectors are calculated to obtain a time-synchronized feature vector from each sensor.
xi=[fi1,fi2,…,fik]
At each time t, a weight is assigned to each signal's feature vector. This weight can be dynamic, changing over time. The weight typically reflects things like:
• Signal quality (if a sensor is noisy now, lower wi)
• Physiological importance
Thus, as shown above, each feature is scaled and weighted by wi at time t.
Now, all the weighted feature vectors from each sensor at each timestamp are merged into one single "fused" feature vector, as shown below:
F(t) = [w1(t)·x1(t), w2(t)·x2(t), …, wn(t)·xn(t)]
Since wi(t) can change (depending on the reliability of each sensor and its physiological importance at the current time), the fusion automatically adapts: (a) If a sensor signal is bad (e.g., due to motion artifact), its weight drops; or (b) If a signal is critical now, its weight increases. This results in a fused feature vector, dynamically adapting to reliability and physiological priority of each signal stream.
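The weighted concatenation described above can be sketched in a few lines; the scaling-then-concatenation scheme is one straightforward reading of the fusion step, assumed here for illustration:

```python
def fuse_features(feature_vectors, weights):
    """Concatenate per-sensor feature vectors, each scaled by its
    time-varying weight wi(t), into one fused feature vector.
    """
    fused = []
    for x, w in zip(feature_vectors, weights):
        # Scale every feature of sensor i by that sensor's current weight
        fused.extend(w * f for f in x)
    return fused
```

The fused vector's length is the sum of the per-sensor feature dimensions, so a downweighted (unreliable) sensor still occupies its slots but contributes near-zero values.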
This fused feature vector is used as an input to a machine learning model (e.g., LSTM, Gated Recurrent Unit, Random Forest or Support Vector Machine). In one example, diversity in breathing patterns across a range of well-defined behaviours may be obtained using principal component analysis (PCA) and unbiased k-means clustering based on breathing rate, breathing volume, peak inspiratory flow, average inhale duration, tidal volume, and exhale duty cycle. For example, a quick and high breathing rate may be symbolic of emotional stress, anxiety, fear, anger, palpitation, etc.; while a slow breathing rate may signify controlled temper, freezing, relief or other calming composure.
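A minimal, self-contained k-means sketch over two such breathing features is given below. The feature values (breaths per minute, tidal volume) are synthetic and purely illustrative, and the deterministic initialization is an implementation convenience, not part of the disclosed method:

```python
def kmeans(points, k=2, iters=20):
    """Minimal k-means over 2-D breathing features, e.g. (breathing
    rate in breaths/min, tidal volume in litres)."""
    # Deterministic init: spread initial centroids across the data
    centroids = [points[(i * (len(points) - 1)) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance)
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                    + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # recompute centroid as the cluster mean
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

# Synthetic fast/shallow vs slow/deep breathing samples (illustrative only)
fast = [(24 + i % 3, 0.35) for i in range(10)]
slow = [(8 + i % 2, 0.60) for i in range(10)]
centroids, clusters = kmeans(fast + slow)
```

On such data the two recovered centroids separate cleanly along the breathing-rate axis, mirroring the fast-breathing versus slow-breathing grouping described above.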
In another working embodiment, a support vector machine may be adopted for classification purposes based on the degree of similarity between detected bio-signal features, using a kernel function as a means of comparison. Since the breathing patterns are to be observed in real time, a naïve Bayes approach may be adopted to identify patterns during various activities and cluster them to identify the underlying emotion.
In order to predict very irregular, complex and non-stationary patterns, a neural network approach may be adopted to improve the accuracy of prediction. Either a feedforward backpropagation network or a recurrent network may be utilized, as the combination of neurons and connections helps the network exhibit complex behaviour. In the feedforward network, information moves in only one direction, forward, from the input nodes, through the hidden nodes and to the output nodes. On the other hand, feedback networks are dynamic networks, as the information flow is bi-directional, i.e. forward and backward.
However, the internal state change appears to accompany a characteristic pattern of breathing that, to some extent, may overlap with the pattern from other emotions. For example, fast, deep breaths are associated with excitement, while rapid, shallow breaths are associated with concentration, fear, and panic. Thus, the internal state is analyzed as a combination of breathing patterns, facial expressions, eye-based emotional cues and pulse signals to aptly understand the wearer's real state of emotions or bodily imbalance, if any, and capture temporal dependencies between them.
Thus, for the purposes of the present disclosure, and in accordance with one preferable embodiment, the Long Short-Term Memory model is employed to fuse time-synchronized, weighted features from the multimodal biosensors of the different wearable devices (neck band, HMD and wrist wearable). Notably, the LSTM model is selected as it can effectively model temporal dependencies, handle variable-length sequences and retain long-term contextual patterns such as breathing trends, HRV shifts, or emotional state transitions.
Accordingly, the weighted feature vectors from all modalities are first concatenated at each time step, and then passed into a single LSTM network. The final input tensor shape is:
Input Tensor Shape = (batch_size, time_steps, fused_feature_dim)
The fused sequence is then passed to LSTM layers (e.g., a 2-layer bi-LSTM or a uni-directional LSTM). While the first LSTM layer learns low-level features (short-term breathing fluctuations, pulse spikes, etc.), the second LSTM layer learns higher-level patterns (e.g., onset of apnea, emotional episodes, muscular fatigue patterns). Thus, after tracking patterns across time (e.g., how facial tics, GSR spikes, and breath strain correlate over seconds), the LSTM learns temporal relationships, for example:
• When the intrapleural pressure drops + Heart Rate spikes + blink rate increases, a stress event follows; or
• During apnea, there's a flatline in neck strain + drop in Heart Rate Variability + facial EMG inactivity
For precision, an additional attention mechanism may be included after the LSTM(s) to learn which modality/time-step is most informative. The attention layer computes a weight αt for each time step t to focus on key events:
αt = softmax(score(ht))
c = Σt αt ht
Where:
• ht: Hidden state at time t
• c: Context vector for prediction
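The attention step above can be sketched as follows. Scoring each hidden state by its dot product with the final hidden state is one assumed choice of score function among several common ones (additive, multiplicative, learned), made here only to keep the sketch self-contained:

```python
import math

def attention(hidden_states):
    """Illustrative dot-product attention over LSTM hidden states ht.

    Returns the weights alpha_t (softmax-normalized scores) and the
    context vector c = sum_t alpha_t * ht.
    """
    query = hidden_states[-1]  # assumed query: the final hidden state
    scores = [sum(q * h for q, h in zip(query, ht)) for ht in hidden_states]
    # Numerically stable softmax over the scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # Context vector: attention-weighted sum of hidden states
    dim = len(query)
    c = [sum(a * ht[d] for a, ht in zip(alphas, hidden_states)) for d in range(dim)]
    return alphas, c
```

The weights αt sum to one, so time steps carrying key events (e.g., an apnea onset) dominate the context vector c used for prediction.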
Thus, by leveraging the LSTM model, temporal interpretation of multimodal physiological signals can be captured across the head-mounted display (HMD) 100, wrist wearable 300, and neck band 500. Given the inherent temporal dependencies in respiratory cycles, pulse fluctuations, and micro-expressions, the LSTM architecture effectively learns time-based patterns that may correspond to:
• Breathing anomalies such as apnea or irregular respiration
• Emotional variations via micro-expression dynamics
• Autonomic nervous system fluctuations reflected in pulse and muscular feedback
The output from the LSTM model, in the form of class labels (e.g., "normal breathing", "apnea", or "elevated stress"), regression values (e.g., breathing variability score, HRV index, emotional stability) or a sequence of predicted stages (e.g., per-second breathing classification), may be passed into a dense (fully connected) neural layer for final classification or confidence scoring.
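Such a dense classification head reduces to a linear map followed by a softmax. The weights, bias values and the three class labels in this sketch are hypothetical placeholders:

```python
import math

def dense_softmax(h, W, b):
    """Fully connected layer over a hidden vector h, producing class
    probabilities for hypothetical labels such as "normal breathing",
    "apnea", "elevated stress".
    """
    # Linear map: one logit per class
    logits = [sum(wi * hi for wi, hi in zip(row, h)) + bi
              for row, bi in zip(W, b)]
    # Numerically stable softmax over the logits
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

The resulting probability vector doubles as the confidence score mentioned above: the top class's probability indicates how certain the model is.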
In an alternate arrangement, each sensor modality is processed through dedicated LSTM layers to retain modality-specific time-series characteristics. These are then fused using either attention-based weighting or reliability-based fusion to provide a robust, consolidated interpretation of the user's state. Once such predictions are made, the haptic feedback module is triggered. In accordance with one working embodiment, the haptic feedback module integrates soft actuators (e.g., pneumatic bladders, electroactive polymers, or vibrotactile motors) that are configured to deliver localized vibrations, tactile stretch or compression and pulsed pressure stimuli to prompt deeper breathing or calm the user.
Conscious modulation of breathing helps in favourably controlling sympathetic and parasympathetic nervous activity; specifically, cognitive performance is modulated during nose breathing but not during mouth breathing, and breathing can organize neuronal oscillations throughout the brain. If the physiological signals obtained from the various wearable devices and processed by the machine learning based computing device 700 show an augmented breathing pattern, the user may be guided to slow down his breathing consciously to avoid panic or anxiety related disorders.
Based on the above analysis, controlled breathing is proposed as a way of regulating one's overall health. Controlled breathing can cause physiological changes that include: lowered blood pressure and heart rate, reduced levels of stress hormones in the blood, reduced lactic acid build-up in muscle tissue, balanced levels of oxygen and carbon dioxide in the blood, improved immune system functioning, increased physical energy, and increased feelings of calm and well-being.
In one preferred embodiment, the machine learning model outputs a message based on the analysis of facial features, pulse signals and breathing patterns, and the user is prompted to take corrective action based on the recommendation displayed on the head mounted display 100. The facial features are combined with pulse signals and analysed breathing patterns to find a correlation therebetween. The correlation value between the bio-signals and facial analysis provides a behavioural cue that represents an affective state. Insights into the underlying emotion are inferred from the above correlation value, and a corresponding message may be displayed on the head mounted display 100 or a wearable hand-worn patch 300 that may be configured with a display for ease and convenience of the user.
In accordance with another exemplary embodiment, a method for non-invasive monitoring of breathing patterns and associated emotional state of the user is disclosed. Here, the method comprises of capturing signals pertaining to facial expressions and micro-expressions of the user via a head mounted device 100; measuring signals pertaining to user pulse fluctuations and related parameters via a wrist wearable 300; recording signals pertaining to breathing patterns from the user neck via a neck wearable 500, wherein the neck wearable 500 comprises of a plurality of layers 505 that are configured with a plurality of biosensors 520, fluidic microchannels 562 and deformable sensing elements 560; and receiving the plurality of signals, assessing the quality of the received signals and processing the received signals to obtain a single fused feature vector that is fed as an input to a machine learning model for predicting user state.
In accordance with an embodiment, the machine-readable instructions may be loaded into the memory unit from a non-transitory machine-readable medium, such as, but not limited to, CD-ROMs, DVD-ROMs and Flash Drives. Alternately, the machine-readable instructions may be loaded in a form of a computer software program into the memory unit. The memory unit in that manner may be selected from a group comprising EPROM, EEPROM and Flash memory. Further, the micro controller is operably connected with the memory unit. In various embodiments, the micro controller is one of, but not limited to, a general-purpose processor, an application specific integrated circuit (ASIC) and a field-programmable gate array (FPGA).
In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
Further, while one or more operations have been described as being performed by or otherwise related to certain modules, devices or entities, the operations may be performed by or otherwise related to any module, device or entity. As such, any function or operation that has been described as being performed by a module could alternatively be performed by a different server, by the cloud computing platform, or a combination thereof. It should be understood that the techniques of the present disclosure might be implemented using a variety of technologies. For example, the methods described herein may be implemented by a series of computer executable instructions residing on a suitable computer readable medium. Suitable computer readable media may include volatile (e.g., RAM) and/or non-volatile (e.g., ROM, disk) memory, carrier waves and transmission media. Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
It should also be understood that, unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "controlling" or "obtaining" or "computing" or "storing" or "receiving" or "determining" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Various modifications to these embodiments are apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is anticipated to hold on to all other such alternatives, modifications, and variations that fall within the scope of the present invention.
CLAIMS:
We Claim:
1) A system (1000) for non-invasive monitoring of breathing patterns and associated emotional state of user, the system (1000) comprising:
a head mounted device (100) configured with a combination of optical (50) and infrared sensors (70) to capture signals pertaining to facial expressions and micro-expressions of the user;
a wrist wearable (300) configured with an array of sensors (350) to measure signals pertaining to user pulse fluctuations and related parameters;
a neck wearable (500) comprising of a plurality of layers (505) that are configured with a plurality of biosensors (520), fluidic microchannels (562) and deformable sensing elements (560) positioned at optimal positions to record signals pertaining to breathing patterns from user neck; and
a computing device (700) configured to receive the plurality of signals, assess quality of the received signal and process the received signal to obtain a single fused feature vector that is fed as an input to a machine learning model for predicting user state.
2) The system (1000), as claimed in claim 1, wherein the head mounted device (100) comprises of optical sensors (50), infrared sensors (70) and an eye tracking unit (54) to capture the signals from lips, mouth, chin area, nose, peri-eye region, top of head; forehead; eyebrows; eye sockets; temples; upper cheeks; lower cheeks; nose; philtrum; lips; chin boss; and jaw line, etc., along with observing muscle contractions, changes in eye shape, or trait changes that produce movement.
3) The system (1000), as claimed in claim 1, wherein the wrist wearable (300) is configured with the array of sensors (350) comprising of multi-wavelength photoplethysmography (PPG) sensors (355), electrodermal activity (EDA) sensors (360), and inertial measurement units (IMUs) (365) that monitor pulse waveforms, including pulse rate, pulse rate variability, or pulse transit time.
4) The system (1000), as claimed in claim 1, wherein the plurality of biosensors (520) integrated into the neck wearable (500) comprises of one or more pulse sensors configured to monitor jugular venous pulse, carotid arterial pulse, intrapleural pressure swings related to respiration, contraction of neck muscles, and swallowing deflections.
5) The system (1000), as claimed in claim 1, wherein the neck wearable (500) is formed of a soft and breathable composite that is configured to conform to neck curvature.
6) The system (1000), as claimed in claim 1, wherein the deformable sensing elements (560) comprise of stretchable electrodes (560(a)) and strain sensors (560(b)) that are configured to detect strain as change in electrical resistance.
7) The system (1000), as claimed in claim 6, wherein the strain sensors (560(b)) may be optimally placed at the anterior midline of the user neck, around the trachea, thyroid cartilage, and cricothyroid area.
8) The system (1000), as claimed in claim 1, wherein the fluidic microchannels (562) is filled with ionic or conductive fluid to detect strain by measuring fluid displacement or resistance change across deformed channels.
9) The system (1000), as claimed in claim 1, wherein the fluidic microchannels (562) are positioned at central lower neck and tracheal line to measure pressure swings due to respiratory effort.
10) The system (1000), as claimed in claim 1, wherein the computing device (700) is configured to assess the quality of the received signal, wherein each of the received signal is paired with a signal quality indicator; and a dynamic weight is assigned to the each of the received signal based on physiological relevance of the respective signal.
11) The system (1000), as claimed in claim 1, wherein the computing device (700) is configured to obtain a time synchronized feature vector from each of the received signal to obtain the single fused feature vector that is fed as an input to the machine learning model.
12) The system (1000), as claimed in claim 1, wherein the machine learning model is selected from Random Forest, Support Vector Machine, Gated Recurrent Unit and Long Short-Term Memory.
13) The system (1000), as claimed in claim 12, wherein in an event the Long Short-Term Memory model is selected, the fused feature vector is passed to one or more LSTM layers, an additional attention mechanism and a dense neural layer for predicting the user state.
14) A method for non-invasive monitoring of breathing patterns and associated emotional state of user, the method comprising:
capturing signals pertaining to facial expressions and micro-expressions of the user via a head mounted device (100);
measuring signals pertaining to user pulse fluctuations and related parameters via a wrist wearable (300);
recording signals pertaining to breathing patterns from user neck via a neck wearable (500), wherein the neck wearable (500) comprises of a plurality of layers (505) that are configured with a plurality of biosensors (520), fluidic microchannels (562) and deformable sensing elements (560); and
receiving the plurality of signals, assessing the quality of the received signals and processing the received signals to obtain a single fused feature vector that is fed as an input to a machine learning model for predicting user state.
15) The method, as claimed in claim 14, wherein the head mounted device (100) is configured with a combination of optical (50), infrared sensors (70), and an eye tracking unit (54) to capture the signals from lips, mouth, chin area, nose, peri-eye region, top of head; forehead; eyebrows; eye sockets; temples; upper cheeks; lower cheeks; nose; philtrum; lips; chin boss; and jaw line, etc., along with observing muscle contractions, changes in eye shape, or trait changes that produce movement.
16) The method, as claimed in claim 14, wherein the wrist wearable (300) is configured with an array of sensors (350) comprising of multi-wavelength photoplethysmography (PPG) sensors (355), electrodermal activity (EDA) sensors (360), and inertial measurement units (IMUs) (365) that monitor pulse waveforms, including pulse rate, pulse rate variability, or pulse transit time.
17) The method, as claimed in claim 14, wherein the plurality of biosensors (520) integrated into the neck wearable (500) comprises of one or more pulse sensors configured to monitor jugular venous pulse, carotid arterial pulse, intrapleural pressure swings related to respiration, contraction of neck muscles, and swallowing deflections.
18) The method, as claimed in claim 14, wherein the deformable sensing elements (560) comprise of stretchable electrodes (560(a)) and strain sensors (560(b)) that are configured to detect strain as change in electrical resistance.
19) The method, as claimed in claim 18, wherein the strain sensors (560(b)) may be optimally placed at the anterior midline of the user neck, around the trachea, thyroid cartilage, and cricothyroid area.
20) The method, as claimed in claim 14, wherein the fluidic microchannels (562) is filled with ionic or conductive fluid to detect strain by measuring fluid displacement or resistance change across deformed channels.
21) The method, as claimed in claim 14, wherein the fluidic microchannels (562) are positioned at central lower neck and tracheal line to measure pressure swings due to respiratory effort.
22) The method, as claimed in claim 14, wherein the quality of the received signal is assessed by way of pairing each of the received signal with a signal quality indicator, and assigning a dynamic weight to each of the received signal based on physiological relevance of the respective signal.
23) The method, as claimed in claim 14, wherein a time synchronized feature vector is obtained from each of the received signals to obtain the single fused feature vector that is fed as an input to the machine learning model.
24) The method, as claimed in claim 14, wherein the machine learning model is selected from Random Forest, Support Vector Machine, Gated Recurrent Unit and Long Short-Term Memory.
25) The method, as claimed in claim 24, wherein in an event the Long Short-Term Memory model is selected, the fused feature vector is passed to one or more LSTM layers, an additional attention mechanism and a dense neural layer for predicting the user state.
| # | Name | Date |
|---|---|---|
| 1 | 202421058526-PROVISIONAL SPECIFICATION [01-08-2024(online)].pdf | 2024-08-01 |
| 2 | 202421058526-FORM FOR STARTUP [01-08-2024(online)].pdf | 2024-08-01 |
| 3 | 202421058526-FORM FOR SMALL ENTITY(FORM-28) [01-08-2024(online)].pdf | 2024-08-01 |
| 4 | 202421058526-FORM 1 [01-08-2024(online)].pdf | 2024-08-01 |
| 5 | 202421058526-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [01-08-2024(online)].pdf | 2024-08-01 |
| 6 | 202421058526-DRAWINGS [01-08-2024(online)].pdf | 2024-08-01 |
| 7 | 202421058526-FORM-5 [29-07-2025(online)].pdf | 2025-07-29 |
| 8 | 202421058526-DRAWING [29-07-2025(online)].pdf | 2025-07-29 |
| 9 | 202421058526-COMPLETE SPECIFICATION [29-07-2025(online)].pdf | 2025-07-29 |
| 10 | 202421058526-FORM-9 [04-08-2025(online)].pdf | 2025-08-04 |
| 11 | 202421058526-MSME CERTIFICATE [05-08-2025(online)].pdf | 2025-08-05 |
| 12 | 202421058526-FORM28 [05-08-2025(online)].pdf | 2025-08-05 |
| 13 | 202421058526-FORM 18A [05-08-2025(online)].pdf | 2025-08-05 |