Abstract: Currently, many critical care indices are not captured automatically at a granular level; rather, they are repetitively assessed by overburdened nurses. In this pilot study, we examined the feasibility of using pervasive sensing technology and artificial intelligence for autonomous and granular monitoring in the Intensive Care Unit (ICU). As an exemplary prevalent condition, we characterized delirious patients and their environment. We used wearable sensors, light and sound sensors, and a camera to collect data on patients and their environment. We analyzed the collected data to detect and recognize patients' faces, postures, facial action units and expressions, head pose variation, extremity movements, sound pressure levels, light intensity levels, and visitation frequency. We found that facial expressions, functional status entailing extremity movement and postures, and environmental factors including visitation frequency and nighttime light and sound pressure levels were significantly different between the delirious and non-delirious patients. Our results showed that granular and autonomous monitoring of critically ill patients and their environment is feasible using a noninvasive system, and we demonstrated its potential for characterizing critical care patients and environmental factors.
FIELD OF INVENTION
Many advanced technologies are continually being added to medical systems through ongoing research throughout the world. In this digital age, an e-healthcare system can play a vital role as a dedicated ecosystem for medical treatment and supervision. The Internet of Things (IoT) is a reliable e-healthcare technology that can be used to improve emergency health monitoring systems. The use of IoT with microcontrollers and sensor-based intensive healthcare processes is expanding rapidly, making human life easier, more productive, and smarter.
BACKGROUND OF INVENTION
Due to the rapid increase in population in different countries of the world and insufficient manpower in the medical sector, healthcare units can hardly provide equal medical aid to each patient. In response, the medical system is being modernized through Internet of Things (IoT)-based systems. In IoT-based systems, medical sensors and wearable devices can capture essential health signs for health monitoring. Sensors can capture pressure level, glucose, weight, ECG signal, heart rate, etc., to observe paralyzed and aged persons. The system may also raise alerts in medical emergencies such as falls of elderly patients and abnormal conditions within the intensive care unit (ICU). Both patients and doctors may persistently monitor the heart rate, obtain more useful data, and take proper actions to prevent severe injuries using smart devices. Enabling every person to look after their health conditions and advising the most efficient solutions whenever an emergency occurs may save human lives.
The patent application number 201911049335 discloses a novel polishing process for nano-finishing of β-phase Ti-35Nb-7Ta-5Zr biomedical alloy.
The patent application number 201921024960 discloses a method for synthesis of doped LaF3:Ce nanoparticles modified by glutamine for biomedical application.
The patent application number 201941043755 discloses a collagen-based wearable sensor tracking for diabetic and heart patients.
SUMMARY
Every year, more than 5.7 million adults are admitted to intensive care units (ICU) in the United States, costing the health care system more than 67 billion dollars per year. A wealth of information is recorded on each patient in the ICU, including high-resolution physiological signals, various laboratory tests, and detailed medical history in electronic health records (EHR). Nonetheless, important aspects of patient care are not yet captured in an autonomous manner. For example, environmental factors that contribute to sleep disruption and ICU delirium, such as loud background noise, intense room light, and excessive rest-time visits, are not currently measured. Other aspects of patients' well-being, including patients' facial expressions of pain, various emotional states, and mobility and functional status, are not captured in a continuous and granular manner and require self-reporting or repetitive observations by ICU nurses. It has been shown that self-report and manual observations can suffer from subjectivity, poor recall, a limited number of administrations per day, and high staff workload. This lack of granular and continuous monitoring can prevent timely intervention strategies. With recent advancements in artificial intelligence (AI) and sensing, many researchers are exploring complex autonomous systems in real-world settings. In ICU settings, doctors are required to make life-saving decisions while dealing with high levels of uncertainty under strict time constraints, synthesizing high volumes of complex physiologic and clinical data. The assessment of patients' response to therapy and acute illness, on the other hand, is mainly based on repetitive nursing assessments, and is thus limited in frequency and granularity. AI technology could assist not only in administering repetitive patient assessments in real time, but also in integrating and interpreting these data sources with EHR data, thus potentially enabling more timely and targeted interventions.
DETAILED DESCRIPTION OF INVENTION
AI in the critical care setting could reduce nurses' workload, allowing them to spend time on more critical tasks, and could also augment human decision-making by offering low-cost, high-capacity intelligent data processing. In this study, we examined how pervasive sensing technology and AI can be used for monitoring patients and their environment in the ICU. We utilized three wearable accelerometer sensors, a light sensor, a sound sensor, and a high-resolution camera to capture data on patients and their environment in the ICU (Fig. 1). We used computer vision and deep learning techniques to recognize the patient's face, posture, facial action units, facial expressions, and head pose from video data. We also analyzed video data to determine visitation frequency by detecting the number of visitors or medical staff in the room. To complement vision information for activity recognition, we analyzed data from wearable accelerometer sensors worn on the wrist, ankle, and arm. Additionally, we captured the room's sound pressure levels and light intensity levels to examine their effect on patients' sleep quality, assessed by the Freedman Sleep Questionnaire. For recruited patients, we retrieved all available clinical and physiological information from the EHR. For this pilot study, we prospectively recruited 22 critically ill patients with and without ICU delirium to determine whether the Intelligent ICU system can be used to characterize the differences in their functional status, pain, and environmental exposure. The Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) was administered daily as the gold standard for detecting delirium.
Figure 1: (a) Intelligent ICU uses pervasive sensing for collecting data on patients and their environment. The system includes wearable accelerometer sensors, video monitoring system, light sensor, and sound sensor. (b) The Intelligent ICU information complements conventional ICU information.
Face detection
To detect all individual faces in each video frame (including the patient, visitors, and clinical staff), we used the pretrained Joint Face Detection and Alignment using Multi-Task Cascaded Convolutional Network (MTCNN). Face detection was evaluated on 65,000 annotated frames containing at least one individual face, resulting in a Mean Average Precision (mAP) value of 0.94.
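By way of illustration, the following is a minimal sketch of per-frame face detection. It assumes the facenet-pytorch package as one common MTCNN implementation; the specification does not name a particular library, and the frame path shown is hypothetical.

```python
# Minimal sketch of per-frame face detection with a pretrained MTCNN,
# assuming the facenet-pytorch package; the frame path is hypothetical.
import cv2
from facenet_pytorch import MTCNN

detector = MTCNN(keep_all=True)                            # return every face in the frame

frame_bgr = cv2.imread("frames/room_000001.jpg")           # hypothetical frame path
frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)     # MTCNN expects RGB input

boxes, probs = detector.detect(frame_rgb)                  # bounding boxes + confidences
if boxes is not None:
    for (x1, y1, x2, y2), p in zip(boxes, probs):
        print(f"face at ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}), confidence {p:.2f}")
```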
Patient face recognition
To recognize the patient's face among detected faces, we implemented the FaceNet algorithm using an Inception-ResNet v1 model. The algorithm achieved an overall mAP of 0.80, with a slightly higher mAP of 0.82 among non-delirious patients compared with an mAP of 0.75 among delirious patients.
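The sketch below illustrates this step under stated assumptions: faces are embedded with a pretrained Inception-ResNet v1 (FaceNet-style embeddings, here via facenet-pytorch) and matched against a reference embedding of the enrolled patient; the distance threshold is illustrative, not taken from the specification.

```python
# Hedged sketch of patient face recognition using FaceNet-style embeddings.
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160, keep_all=True)
embedder = InceptionResnetV1(pretrained="vggface2").eval()

def embed_faces(pil_image):
    """Return L2-normalised 512-d embeddings for all faces in an image."""
    faces = mtcnn(pil_image)                 # cropped, aligned face tensors or None
    if faces is None:
        return None
    with torch.no_grad():
        return embedder(faces)               # shape: (num_faces, 512)

def is_patient(face_embedding, patient_reference, threshold=1.0):
    """Flag a detection as the enrolled patient if the Euclidean distance
    to the reference embedding falls below an illustrative threshold."""
    return torch.dist(face_embedding, patient_reference).item() < threshold
```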
Patient facial action unit detection
We detected 15 facial action units (AUs) from 3,203,153 video frames using the pretrained OpenFace deep neural network. The 15 AUs included six binary AUs (0 = absent, 1 = present) and 12 intensity-coding AUs (0 = trace, 5 = maximum), with three AUs reported as both binary and intensity. Successful detection was defined as the toolbox being able to detect the face and its facial AUs. Successful detection was achieved for 2,246,288 out of 3,203,153 video frames (70.1%). The 15 detected AUs were compared between the delirious and non-delirious patients (Fig. 2). All AUs were shown to be significantly different between the two groups (p-value < 0.01).
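As a sketch of the post-processing involved, the snippet below parses the per-frame CSV that the OpenFace toolbox writes; the column names such as "success", "AU04_r", and "AU04_c" follow OpenFace's usual output convention, and the file path is hypothetical.

```python
# Sketch of summarising OpenFace AU output for one patient.
import pandas as pd

df = pd.read_csv("openface_output/patient_01.csv")  # hypothetical path
df.columns = df.columns.str.strip()                  # tolerate padded column names

# "Successful detection" = frames where the toolbox located a face and its AUs.
detected = df[df["success"] == 1]
print(f"AU detection rate: {len(detected) / len(df):.1%}")

# Intensity-coded AUs (suffix _r, 0-5 scale) and binary AUs (suffix _c, 0/1).
intensity_cols = [c for c in detected.columns if c.endswith("_r")]
binary_cols = [c for c in detected.columns if c.endswith("_c")]

print(detected[intensity_cols].describe())           # per-AU intensity distribution
print(detected[binary_cols].mean())                  # fraction of frames with each AU present
```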
Figure 2: (a) Distribution of intensity-coding facial Action Units (AUs) among delirious and non-delirious patients shown as boxplots, where the middle line represents the median and the lower and upper end lines represent the 25th and 75th percentiles, respectively. (b) Percentage of frames with each binary-coding facial AU present among delirious and non-delirious patients during their enrollment period.
Patient facial expression recognition
We used the Facial Action Coding System (FACS) to identify common facial expressions from their constituent AUs. Eight common expressions were considered, including pain, happiness, sadness, surprise, anger, fear, disgust, and contempt. The occurrence rate of facial expressions was compared between the delirious and non-delirious patients (Fig. 3). We were able to show that the distributions of several facial AUs differ between the delirious and non-delirious groups. The differences in the distribution of such AUs point to differences in the affect of delirious and non-delirious patients. For instance, the presence of the brow lowerer AU signals a negative valence and is stronger among delirious patients than non-delirious patients (Fig. 2). Facial expression patterns can also potentially be used in predicting deterioration risks in patients. Delirious patients had suppressed expression for seven out of eight emotions. All facial expressions except for anger had significantly different distributions among the delirious and non-delirious patients (p-value < 0.001).
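The following illustrative mapping shows how expressions can be derived from constituent AUs. The AU combinations follow commonly cited EMFACS-style prototypes and a simplified Prkachin-Solomon (PSPI) definition for pain; the specification does not list the exact rules it used, so this mapping should be treated as an assumption.

```python
# Illustrative rule-based mapping from detected AUs to the eight expressions.
# The combinations are assumed EMFACS/PSPI-style prototypes, not the exact
# rules used in the study.
EXPRESSION_RULES = {
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "anger":     {4, 5, 7, 23},
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "disgust":   {9, 15},
    "contempt":  {12, 14},
    "pain":      {4, 6, 9, 43},    # simplified PSPI-style constituents
}

def expressions_in_frame(present_aus: set) -> list:
    """Return expressions whose constituent AUs are all present in the frame."""
    return [name for name, aus in EXPRESSION_RULES.items()
            if aus.issubset(present_aus)]

# Example: a frame with brow lowerer (4), cheek raiser (6), nose wrinkler (9),
# and eyes closed (43) would be counted toward the pain expression.
print(expressions_in_frame({4, 6, 9, 43}))          # -> ['pain']
```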
Figure 3: Percentage of frames with each facial expression present among the delirious and non-delirious patients, calculated based on constituent AUs.
Head pose detection
We detected three head poses, including yaw, pitch, and roll, using the pretrained OpenFace deep neural network tool. The head rotation in radians around the Cartesian axes was compared between the delirious and non-delirious patients, with the left-handed positive sign convention and the camera considered as the origin. Delirious patients exhibited significantly less variation in roll head pose (in-plane rotation), in pitch head pose (up and down movement), and in yaw head pose (side to side movement) compared to the non-delirious patients (Fig. 4). The extended range of head poses in non-delirious patients compared to delirious patients (Fig. 4) might be the result of more communication and interaction with the surrounding environment.
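A minimal sketch of quantifying per-patient head-pose variation from OpenFace output follows. OpenFace reports head rotation in radians as pose_Rx, pose_Ry, and pose_Rz (commonly interpreted as pitch, yaw, and roll); the merged file and the patient_id/delirious grouping columns are hypothetical.

```python
# Sketch of head-pose variation per patient from OpenFace rotation columns.
import pandas as pd

frames = pd.read_csv("openface_output/all_patients.csv")   # hypothetical merged file
frames.columns = frames.columns.str.strip()

pose_cols = ["pose_Rx", "pose_Ry", "pose_Rz"]               # rotations in radians
variation = (frames[frames["success"] == 1]
             .groupby(["patient_id", "delirious"])[pose_cols]
             .std()                                          # variation per patient
             .rename(columns={"pose_Rx": "pitch_std",
                              "pose_Ry": "yaw_std",
                              "pose_Rz": "roll_std"}))
print(variation.groupby("delirious").median())               # group-level comparison
```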
Figure 4: (a–c) * indicates a statistically significant difference between the delirious and non-delirious groups (p-value < 0.001).
Posture recognition
To recognize patient posture, we used a multi-person pose estimation model to localize anatomical key-points of joints and limbs. We then used the lengths of body limbs and their relative angles as features for recognizing lying in bed, standing, sitting on bed, and sitting in chair. We obtained an overall F1 score of 0.94 for posture recognition. The highest misclassification rate (11.3%) was obtained for sitting in chair (misclassified as standing). The individual classification accuracies were: lying = 94.5%, sitting in chair = 92.9%, and standing = 83.8%. Delirious patients spent significantly more time lying in bed and sitting in chair compared to non-delirious patients (p-value < 0.05 for all four postures, Fig. 4).
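The following is a minimal sketch of this posture pipeline: 2-D joint key-points (e.g., from a multi-person pose estimator such as OpenPose) are turned into limb lengths and relative angles, which then feed a conventional classifier. The key-point names, limb pairs, and classifier choice are illustrative assumptions rather than the exact configuration used.

```python
# Sketch of limb-geometry features and a posture classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

LIMBS = [("shoulder", "hip"), ("hip", "knee"), ("knee", "ankle")]   # illustrative pairs

def posture_features(keypoints: dict) -> np.ndarray:
    """keypoints maps a joint name to its (x, y) image coordinates."""
    feats = []
    for a, b in LIMBS:
        vec = np.asarray(keypoints[b]) - np.asarray(keypoints[a])
        feats.append(np.linalg.norm(vec))            # limb length
        feats.append(np.arctan2(vec[1], vec[0]))     # limb orientation angle
    return np.asarray(feats)

# X_train: feature vectors from annotated frames; y_train: posture labels
# ("lying", "sitting on bed", "sitting in chair", "standing").
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(X_train, y_train)
# clf.predict(posture_features(frame_keypoints)[None, :])
```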
Extremity movement analysis
We analyzed the data from three accelerometer sensors worn on the patient's wrist, ankle, and arm, and compared the results between delirious and non-delirious patients. For the purpose of feature calculation, we considered daytime as 7 AM to 7 PM and nighttime as 7 PM to 7 AM, based on nursing shift transitions. Figure 5 shows the smoothed accelerometer signal averaged over all delirious and all non-delirious patients. We also derived 15 features per accelerometer (Table 2), resulting in 45 total features for the wrist, ankle, and arm sensors. We compared the extracted features of delirious and non-delirious patients for the wrist-worn, arm-worn, and ankle-worn sensors. Delirious patients had higher movement activity for the wrist and lower extremity, and lower movement activity for the upper extremity, during the entire 24-hour cycle, daytime (7 AM–7 PM), and nighttime (7 PM–7 AM). The 10-hour window with maximum activity intensity showed different levels of activity between the two patient groups. However, activity in the 5-hour window with the lowest activity intensity was not significantly different, possibly due to low activity levels in the ICU in general. The numbers of immobile moments during the day and during the night were also different between the two groups, with fewer immobile moments detected for the delirious patients, hinting at their restlessness and lower sleep quality. The extremity movement features did not show a significant difference for the arm and ankle, which might stem from the overall limited body movements of all ICU patients.
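A sketch of this feature extraction is given below: activity intensity is derived from the acceleration magnitude, split at the 7 AM / 7 PM shift change, and "immobile moments" are counted as epochs whose activity stays below a threshold. The column names, epoch length, and immobility threshold are assumptions for illustration.

```python
# Sketch of wearable accelerometer feature extraction for one sensor.
import numpy as np
import pandas as pd

acc = pd.read_csv("sensors/wrist_patient_01.csv", parse_dates=["timestamp"])  # hypothetical
acc["magnitude"] = np.sqrt(acc.ax**2 + acc.ay**2 + acc.az**2)
acc["activity"] = (acc["magnitude"] - 1.0).abs()          # deviation from 1 g at rest

epochs = acc.set_index("timestamp")["activity"].resample("1min").mean()
is_day = (epochs.index.hour >= 7) & (epochs.index.hour < 19)   # 7 AM-7 PM shift

features = {
    "mean_activity_24h":   epochs.mean(),
    "mean_activity_day":   epochs[is_day].mean(),
    "mean_activity_night": epochs[~is_day].mean(),
    "immobile_min_day":    int((epochs[is_day] < 0.02).sum()),   # threshold is illustrative
    "immobile_min_night":  int((epochs[~is_day] < 0.02).sum()),
}
print(features)
```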
Figure 5: Delirious and non-delirious group comparisons for (a–e) sensor data and (f–j) physiological data.
Visitation frequency
The pose estimation model was also used on the video data to identify the number of individuals present in the room at any given time, including visitors and clinical staff. Delirious patients on average had fewer visitor disruptions during the day, but more disruptions during the night (Fig. 4).
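As an illustration of how visitation frequency can be derived, the sketch below counts the persons detected per frame, subtracts the patient, and aggregates the counts into day and night bins; the per-frame detection format shown is a hypothetical example.

```python
# Sketch of visitation-frequency aggregation from per-frame person counts.
import pandas as pd

# detections: one row per frame with a timestamp and the number of persons found.
detections = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 02:15", "2024-01-01 10:30"]),
    "persons_detected": [2, 3],
})
detections["others_in_room"] = (detections["persons_detected"] - 1).clip(lower=0)
detections["period"] = detections["timestamp"].dt.hour.map(
    lambda h: "day" if 7 <= h < 19 else "night")

print(detections.groupby("period")["others_in_room"].mean())
```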
Room sound pressure levels and light intensity
The sound pressure levels for delirious patients’ rooms during the night were on average higher than the sound pressure levels of non-delirious patients’ rooms (Fig. 5). Average nighttime sound pressure levels were significantly different between the delirious and non-delirious patients (p-value < 0.05). Delirious patients on average experienced higher light intensity during the evening hours, as can be seen in Fig. 5. Average nighttime light intensity levels were significantly different between the delirious and non-delirious patients (p-value < 0.05).
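The comparison of nighttime ambient levels can be sketched as follows, with each row holding one patient's average nighttime sound pressure level and light intensity; the summary file is hypothetical, and the Mann-Whitney U test is one reasonable nonparametric choice rather than the test stated in the specification.

```python
# Sketch of comparing nighttime sound and light levels between groups.
import pandas as pd
from scipy.stats import mannwhitneyu

env = pd.read_csv("environment/night_summary.csv")        # hypothetical summary file
delirious = env[env["delirious"] == 1]
non_delirious = env[env["delirious"] == 0]

for column in ["night_sound_db", "night_light_lux"]:      # assumed column names
    stat, p = mannwhitneyu(delirious[column], non_delirious[column])
    print(f"{column}: U={stat:.1f}, p={p:.3f}")
```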
Sleep characteristics
We examined the Freedman Sleep Questionnaire responses for the delirious and non-delirious patients to compare their sleep patterns. While the median of the overall quality of sleep in the ICU and effect of acoustic disruptions and visitations during the night were different among the delirious and non-delirious groups, these differences were not statistically significant. However, delirious patients reported a lower overall ability to fall asleep compared to non-delirious patients, and they were more likely to find the lighting to be disruptive during the night (p-value = 0.01, p-value = 0.04, respectively, Supplementary Fig. S2).
Physiological and EHR data
Patients' demographics and primary diagnoses were not significantly different between the delirious and non-delirious patients (Table 1). Delirious patients on average had higher heart rate, oxygen saturation, and respiration rate, a sign of potential respiratory distress and agitation. Systolic and diastolic blood pressures of the delirious patients were lower than those of non-delirious patients during the evenings (Fig. 5). All delirious patients received continuous enteral feeding orders and were fed throughout the nighttime, while 50% of non-delirious patients had enteral feeding orders during their enrollment days.
DETAILED DESCRIPTION OF DIAGRAM
Figure 1: (a) Intelligent ICU uses pervasive sensing for collecting data on patients and their environment. The system includes wearable accelerometer sensors, video monitoring system, light sensor, and sound sensor. (b) The Intelligent ICU information complements conventional ICU information.
Figure 2: (a) Distribution of intensity-coding facial Action Units (AUs) among delirious and non-delirious patients shown as boxplots, where the middle line represents the median and the lower and upper end lines represent the 25th and 75th percentiles, respectively. (b) Percentage of frames with each binary-coding facial AU present among delirious and non-delirious patients during their enrollment period.
Figure 3: Percentage of frames with each facial expression present among the delirious and non-delirious patients, calculated based on constituent AUs.
Figure 4: (a–c) * indicates a statistically significant difference between the delirious and non-delirious groups (p-value < 0.001).
Figure 5: Delirious and non-delirious group comparisons for (a–e) sensor data and (f–j) physiological data.
Claims:
1. Design and analysis of a Smart System to Monitor the Patients in ICU through Biomedical Devices claims to perform face detection, face recognition, facial action unit detection, head pose detection, facial expression recognition, posture recognition, extremity movement analysis, sound pressure level detection, light level detection, and visitation frequency detection in the ICU.
2. As an example, we evaluated our system for characterization of patient and ambient factors relevant to delirium syndrome. Such a system can be potentially used for detecting activity and facial expression patterns in patients.
3. It can also be used to quantify modifiable environmental factors such as noise and light in real time. This system can be built at an estimated cost of less than $300 per ICU room, a relatively low cost compared to daily ICU costs of thousands of dollars per patient. It should be noted that, after proper cleaning procedures, the same devices can also be reused for other patients, further reducing the amortized cost per patient in the long term.
4. To the best of our knowledge, this is the first study to continuously assess critically ill patients’ affect and emotions using AI. The AI introduces the ability to use the combination of these features for autonomous detection of delirium in real time and would offer a paradigm shift in diagnosis and monitoring of mood and behavior in the hospital setting.
5. Our system also uniquely offers autonomous detection of patients’ activity patterns by applying deep learning on sensor data obtained from video and accelerometer. This previously unattainable information can optimize patients’ care by providing more comprehensive data on patients’ status through accurate and granular quantification of patients’ movement.
6. While previous work has used video recordings in the ICU to detect patient status, it was not able to measure the intensity of patients' physical activity. The combined knowledge of patients' functional status through video data and their physical activity intensity through movement analysis methods can help health practitioners better decide on rehabilitation and assisted mobility needs.