Abstract: “An Autonomous Social Robotic System” The present invention presents a social robot for promoting good hand hygiene among users. This robot can supply valuable objective data to stakeholders and the UN to monitor progress on targets for UN SDGs 3 and 6. The present invention discloses the design parameters of the social robot and a pilot study conducted with 10 children aged 6-11 years hailing from rural and urban locations in Southern India. The social robot of the present invention has the potential to be used in educational and child-centered spaces to promote handwashing behavior change. Fig 1
Description: FIELD OF THE INVENTION
The present invention relates to an autonomous social robotic system. More particularly, the present invention relates to an autonomous social robotic system that can monitor handwashing practices, provide guidance on good hygiene practices, and engage users in conversations to facilitate learning of facts and concepts around hand hygiene and to reinforce positive habits in the user in the long term.
BACKGROUND OF THE INVENTION
Robotics and Artificial Intelligence are making huge inroads in areas as diverse as consumer applications, agriculture, the manufacturing industry, and the education sector.
These advances have brought down the cost of robotic equipment and the resulting democratization opens up new avenues for the application of robotics for humanitarian applications. In the current COVID-19 pandemic and the post-pandemic era as well, one key trend that is emerging is increased awareness of health. Therefore, a potential application of social robotics is to promote healthy habits such as increased hand hygiene, wearing masks, and maintaining social distancing, especially among children.
A number of patent and non-patent literature documents are currently available, for example: R. Kittmann, T. Fröhlich, J. Schäfer, U. Reiser, F. Weißhardt, and A. Haug, “Let me introduce myself: I am Care-O-bot 4, a gentleman robot,” in Mensch und Computer 2015 – Proceedings, S. Diefenbach, N. Henze, and M. Pielot, Eds. Berlin: De Gruyter Oldenbourg, 2015, pp. 223–232, which discloses a general-purpose service robot that assists users in various tasks where human-like behavior and interfaces are desired.
Another non-patent literature document, titled “Hobbit: Providing fall detection and prevention for the elderly in the real world,” Journal of Robotics, vol. 2018, pp. 1–20, 06 2018, by M. Bajones et al., discloses a socially assistive service robot aimed at the challenge of enabling prolonged independent living of elderly people in their own homes.
These existing commercial social robot platforms are cost prohibitive especially when they are to be deployed at scale in developing economies.
This necessitates a custom-designed robot with low-cost components, made with manufacturing technologies accessible in developing countries. Also, the research by Wainer et al. indicates that users find a co-located, physically embodied agent more enjoyable due to its ability to engage, which in turn leads to better social acceptance as compared to a remotely operated agent.
In another study by W. A. Bainbridge, J. Hart, E. S. Kim, and B. Scassellati, “The effect of presence on human-robot interaction,” in RO-MAN 2008-The 17th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 2008, pp. 701–706., individuals were more responsive to the commands from a physically present robot than those from an on-screen robot.
In view of the above, there arises a need for a low-cost social robotic system for long-term interaction, with features including eye contact, the ability to direct gaze at users, speech, oculesics (i.e., eye-related non-verbal communication), and head and arm gestures. The exact set of features depends upon the type of interaction desired. For a social robot, gaze is important as it can convey mental states and augment verbal communication.
Therefore, the present invention discloses a cost-effective social robotic system with long-term interaction features like speech and oculesics, i.e. eye-related non-verbal communication.
OBJECT OF THE INVENTION
In order to obviate the drawbacks in the existing state of the art, the main object is to provide an autonomous social robotic system using AI models.
Yet another object of the invention is to provide an autonomous social robotic system to promote proper hand hygiene habits among the users using AI models.
Yet another object of the invention is to provide a conversational AI autonomous social robotic system to help the robot interact with the user.
Yet another object of the invention is to provide a social robotic system at affordable cost, and which is easy to fabricate for the developing countries.
SUMMARY OF THE INVENTION:
Accordingly, the present invention discloses an autonomous social robotic system, a social robot, to promote hygiene habits among users including young children. Said social robot is named “Haksh-E”, a portmanteau of two words from the Sanskrit language that mean “hand cleaning”.
This robot can supply valuable objective data to stakeholders and the UN to monitor progress on targets for UN Sustainable Development Goals (SDGs) such as SDG 3 which primarily deals with “Ensure healthy lives and promote well-being for all at all ages” and SDG 6 which primarily deals with clean water and sanitation.
By a social robotic system, the applicant discloses an embodied artificial agent that interacts with humans, communicating in a human-perceptible way and a socially accepted manner. The robot is designed as an autonomous agent to be used in an “active” mode, wherein the robot will actively interact with the user and nudge them to adopt positive handwashing behaviors. The robot is connected to at least one camera, more preferably an overhead camera, which captures video data of the user's handwashing and feeds it to a deep learning convolutional neural network (CNN) model running on the robot’s processor. The CNN model classifies the steps of handwashing according to the World Health Organization (WHO) guidelines to provide data on the quality of handwashing performed by the child.
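The specification does not give the CNN's post-processing in code; the following is a minimal illustrative sketch, assuming hypothetical step labels and thresholds of our own, of how per-frame CNN class predictions could be aggregated into a per-step coverage report against the WHO step list:

```python
# Hypothetical sketch only: the step labels, frame rate, and duration
# threshold below are illustrative assumptions, not from the specification.

# Placeholder short names for the WHO rubbing steps (our own labels).
WHO_STEPS = [
    "palm_to_palm", "palm_over_dorsum", "fingers_interlaced",
    "backs_of_fingers", "thumb_rotation", "fingertips_rotation",
]

def coverage_report(frame_labels, fps=30, min_seconds=2.0):
    """Return, per WHO step, whether it was performed long enough.

    frame_labels: per-frame class labels emitted by the CNN.
    A step counts as 'performed' if it appears for at least
    min_seconds worth of frames.
    """
    counts = {s: 0 for s in WHO_STEPS}
    for label in frame_labels:
        if label in counts:
            counts[label] += 1
    min_frames = fps * min_seconds
    return {s: counts[s] >= min_frames for s in WHO_STEPS}

# Example: 90 frames (3 s) of palm_to_palm, 30 frames (1 s) of thumb_rotation
labels = ["palm_to_palm"] * 90 + ["thumb_rotation"] * 30
report = coverage_report(labels)
```

A report of this shape could then feed the praise/feedback branch of the interaction flow described later.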
The social robot developed can also be used in a “passive” observer mode to monitor the effectiveness of other handwashing behavior change initiatives. The robot, both in active and passive mode, can supply valuable objective data to stakeholders such as school authorities, health departments, the local government, and the UN to monitor progress on targets for the UN SDGs 3 and 6 described above.
Since the robot majorly focuses on the hygiene habits of children, the robot embodiments are co-designed with children, thereby including children as informants in every step of the robot product design cycle.
BRIEF DESCRIPTION OF DRAWINGS
Figure 1 displays physical appearance of the Robot Model.
Figure 2(A) displays the pan-tilt mechanism of the robot and Figure 2(B) displays the various components of the robot.
Figure 3 displays the block diagram of the components
Figure 4 displays the 3D-printed and laser-cut parts of the robot head and neck.
Figure 5 displays various facial expressions of the Robot of version 0.5
Figure 6 displays box plot showing the responses of the children on the likeability, perceived intelligence, anthropomorphism, trustworthiness and age of the Robot.
Figure 7 is a process flowchart for the autonomous social robotic system
DETAILED DESCRIPTION OF THE INVENTION ILLUSTRATIONS AND EXAMPLES
While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of “a”, “an”, and “the” include plural references. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.
Following is the non-limiting design and development of the social robot of the present invention.
DESIGN IMPLEMENTATION:
The functional and non-functional aspects of the social robot are described below:
A. Physical Appearance
When designing social robots for a specific application, a fine balance between the degree of “humanness” and the degree of “robotness” has to be maintained. A robot with a more anthropomorphic appearance may elicit a higher degree of positive and empathetic response from humans, but only up to a limit, after which the response becomes repulsive. This is called the “uncanny valley”. Moreover, a highly human-like appearance can also cause unmet expectations of the robot’s capability. Further, a social robot’s morphology must match its intended purpose and application. For these reasons, the torso of Haksh-E was fashioned as an anthropomorphic soap dispenser, with a soap dispenser pump adornment attached to the head as depicted in Fig. 1.
B. Robot Speech and Behaviour
The Audacity software was used to add reverb and change the pitch of the utterances to make the voice sound gender-neutral, so as to avoid bias, and more child-like; a child-like voice was therefore preferred for the social robot during child-robot interaction. The social robot, also termed “Haksh-E”, is designed to speak vernacular languages and, more specifically, English.
The face of Haksh-E was animated using Adobe Animate to mimic human behavior including constant blinking, display of emotions such as happiness, sadness, and confusion by playing a series of frames at 30 FPS. The mouth animation was programmed to lip-sync to the utterances of Haksh-E.
Additionally, RGB LEDs were added to the robot’s chest and used to create a glow based on the robot’s emotions.
C. Motion and Degrees of Freedom
Haksh-E was designed with a stationary torso and a head with two rotational degrees of freedom (DoF) as depicted in Fig. 2. These two DoF permit the robot to maintain its gaze on salient objects and faces in its field of vision. Limiting the DoF to two also helped in lowering the cost and complexity. The range of motion is -70° to +70° for the pan axis and -15° to +20° for the tilt axis. Both axes are actuated by stepper motors driven with silent drivers and synchronous timing belts for quiet operation.
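As a minimal sketch of how the stated range of motion could be enforced in software before commanding the stepper motors (the function below is our own illustration; only the angular limits come from the specification):

```python
# Illustrative sketch: clamp commanded gaze angles to the robot's
# stated mechanical range before driving the pan/tilt steppers.
PAN_RANGE = (-70.0, 70.0)   # degrees, from the specification
TILT_RANGE = (-15.0, 20.0)  # degrees, from the specification

def clamp_gaze(pan_deg, tilt_deg):
    """Limit a commanded (pan, tilt) pair to the mechanical range."""
    def clamp(v, lo_hi):
        lo, hi = lo_hi
        return max(lo, min(hi, v))
    return clamp(pan_deg, PAN_RANGE), clamp(tilt_deg, TILT_RANGE)
```

For example, a face-tracking request of (100°, 30°) would be limited to (70°, 20°) so the gaze controller never commands motion past the end stops.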
Absolute-position magnetic rotary encoders were used to provide closed-loop position feedback for the motion axes as depicted in Fig. 3. Moreover, optical limit switches were also added as an extra fail-safe.
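A closed-loop axis with absolute encoder feedback can be sketched, for illustration only, as a simple proportional controller; the gain and step limit below are assumptions of ours, not values from the specification:

```python
# Hypothetical sketch of one proportional-control update for a motion
# axis: the encoder reading is compared with the target, and the motor
# command is the clamped proportional error. Gains are illustrative.
def p_control_step(target_deg, encoder_deg, kp=0.5, max_step=5.0):
    """Return the motor step command for one control-loop iteration."""
    error = target_deg - encoder_deg
    step = kp * error
    # Clamp so a large error cannot command an unsafe jump.
    return max(-max_step, min(max_step, step))

# Simulated servo loop converging on a 40-degree pan target
pos = 0.0
for _ in range(50):
    pos += p_control_step(40.0, pos)
```

The optical limit switches mentioned above would act outside this loop, cutting motion entirely if an axis overtravels.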
The entire motion subsystem was built using components commonly mass-manufactured for hobbyist 3D printers, which further lowered the robot’s cost and improved the availability of spare parts.
D. Vision System (VS):
The vision system was designed to work with at least two sets of cameras, an overhead camera (oc1, oc2 .... ocn) at the handwashing station, and an on-body camera (rc1, rc2,….rcn) as depicted in Fig. 1. Data collected from the two sets of cameras are collated so that they function as the robot’s “eyes”, identifying the approach of a user, recognizing faces, and aiding gaze control. The input from the overhead camera is used to compute real-time predictions of the handwashing steps.
E. Speakers and Microphones:
For vocalization, at least two speakers (rs1, rs2 ... sn) with a small form factor and a power rating of 5 Watts RMS were chosen. They are capable of a sound pressure level of 84 decibels (dB) at a distance of 1 meter, which is well above the 60-70 dB range of normal human conversation. The head of the robot has a provision for adding microphones, more preferably four microphones (rmf1, rmf2, ..... rmfn).
F. Fabrication and Assembly
Rapid virtual prototyping using CAD modelling was used for Haksh-E version 0.5 as depicted in Fig. 5. This approach allowed for fast design iterations, validation of motor sizing, interference detection, and manufacturability checks. An exploded view of the fabrication-ready initial prototype can be seen in Fig. 2. The parts were 3D-printed on a BIQU B1 FDM 3D printer using PLA plastic, and flat profiles were laser-cut from cast acrylic sheets using a CW-1610 CO2 laser cutting machine.
After assembly, the final cost of the current prototype of the social robot ranges between USD 500 and USD 600.
G. Computation:
The recognition of individual handwashing steps and measurement of other handwashing metrics, face detection, and tracking require running multiple deep neural networks. The NVIDIA® Jetson Nano™ B01 is a powerful Single Board Computer (SBC) that is capable of running multiple neural networks in parallel for image classification, segmentation, object detection, and natural language processing (NLP). All of Haksh-E’s high-level functions are controlled by a Jetson Nano™. This also allows the future addition of autonomous conversational capabilities. All low-level functions, including reading sensor inputs, actuator control, and LED control, are performed by an Arduino Mega, which communicates with the Jetson Nano via UART.
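The specification states only that the two boards communicate via UART; the byte-level framing below is purely a hypothetical illustration of how motor commands could be packed and checksummed on such a link:

```python
# Hypothetical framing for the Jetson Nano <-> Arduino Mega UART link.
# The byte layout (start byte, motor id, int16 angle in centidegrees,
# 8-bit checksum) is our own illustration, not from the specification.
import struct

START = 0xAA  # assumed start-of-frame marker

def encode_command(motor_id, angle_centideg):
    """Pack a motor command as <start, motor_id, int16 angle, checksum>."""
    body = struct.pack("<Bh", motor_id, angle_centideg)
    checksum = (START + sum(body)) & 0xFF
    return bytes([START]) + body + bytes([checksum])

def decode_command(frame):
    """Unpack a frame, verifying the start byte and checksum."""
    if frame[0] != START or (sum(frame[:-1]) & 0xFF) != frame[-1]:
        raise ValueError("corrupt frame")
    motor_id, angle = struct.unpack("<Bh", frame[1:4])
    return motor_id, angle

frame = encode_command(1, 4500)  # e.g. pan motor, 45.00 degrees
```

The checksum lets the Arduino side reject frames corrupted on the serial line instead of acting on a garbled command.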
PROCESS FLOWCHART OF PERFORMING THE INVENTION:
The flowchart in Figure 7 shows the working process of the autonomous robotic system. The invention discloses an autonomous social robotic system to promote hygiene habits among users using AI models, said system comprising:
• at least one robotic unit (R1, R2....Rn), deployed in a hygiene promoting environment area (EA)
• at least one overhead camera (oc1, oc2 .... ocn)
• at least one hygiene station such as a wash basin
• a system control unit (ocu)
wherein said robotic unit (R1, R2....Rn) comprises of:
- a real-time robot possessing 2 rotational degrees of freedom for each arm,
- at least one robot camera (rc1, rc2...rcn),
- at least one robot speaker (rs1, rs2 ... sn),
- at least one robot microphone (rmf1, rmf2, ..... rmfn),
- at least one sensor (s1, s2, s3……sn), including at least one Touch sensor, at least one position encoders, at least one end limit switch sensor
- input-output device (i/o) such as LCD touch screen,
- at least one other component (as1, as2...asn) that can be configured to function as per the requirement of said system
- at least one control unit (rcu)
said system being deployed in an integrated manner such that when the said system senses presence of a human (user) at said hygiene station (hs1, hs2....hsn), said robotic unit provides instruction for correct hygiene at the hygiene station, thereby enabling said system to provide an extensible platform for augmenting interactions capabilities based on users’ feedback.
As depicted in Figure 7, the Robot Vision System (RC1) detects the presence of the user and activates the Robot Sensor Microphone (rmf1). The microphone triggers the keyword spotting system, which in turn triggers the Interaction System comprising the robot I/O device (LCD touch screen), the robot speaker (rs1), and the Robot Motor Control (rmc). The Interaction System triggers conversation, which includes animation and speech, and the robot motor control unit performs speech-appropriate gestures.
The autonomous robotic system includes sensors (s1, s2, s3……sn), which are at least one touch sensor, at least one position encoder, and at least one end limit switch sensor.
The Interaction System enquires about the user’s need. In case the user has a specific request/need, the robotic system is directed to perform the required specific request or miscellaneous task. In case the user does not have a specific request, the Interaction System engages the user in conversation to initiate handwashing.
After the handwashing is performed, the robotic system measures the handwashing quality. If the handwashing quality is satisfactory, the system provides task-specific praise. In case the handwashing quality is not satisfactory, the system provides feedback to improve the quality of handwashing.
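The flow described above and in Figure 7 can be sketched, for illustration, as a simple state trace; the state names and the quality threshold below are our own assumptions, not terms from the specification:

```python
# Hypothetical sketch of the Fig. 7 interaction flow as a state trace.
# State names and the 0.8 quality threshold are illustrative only.
def interaction_flow(user_present, has_request, handwash_quality=None,
                     quality_threshold=0.8):
    """Return the ordered list of states for one user encounter."""
    trace = []
    if not user_present:
        return trace  # robot stays idle until the vision system fires
    trace += ["detect_user", "keyword_spotting", "enquire_need"]
    if has_request:
        trace.append("perform_requested_task")
        return trace
    # No specific request: nudge the user into handwashing instead.
    trace += ["initiate_handwashing", "measure_quality"]
    if handwash_quality is not None and handwash_quality >= quality_threshold:
        trace.append("task_specific_praise")
    else:
        trace.append("corrective_feedback")
    return trace
```

For example, a user with no specific request whose wash scores above the threshold ends in the praise state, while a low score ends in corrective feedback.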
Claims: We claim
1. An autonomous social robotic system to promote hygiene habits among users using AI models, said system comprising:
• at least one robotic unit (R1, R2....Rn), deployed in a hygiene promoting environment area (EA)
• at least one overhead camera (oc1, oc2 .... ocn)
• at least one hygiene station such as a wash basin
• a system control unit (ocu)
wherein said robotic unit (R1, R2....Rn) comprises of:
- a real-time robot possessing 2 rotational degrees of freedom for each arm,
- at least one robot camera (rc1, rc2...rcn),
- at least one robot speaker (rs1, rs2 ... sn),
- at least one robot microphone (rmf1, rmf2, ..... rmfn),
- at least one sensor (s1, s2, s3……sn)
- input-output device (i/o) such as LCD touch screen,
- at least one other component (as1, as2...asn) that can be configured to function as per the requirement of said system
- at least one control unit (rcu)
said system being deployed in an integrated manner such that when the said system senses presence of a human (user) at said hygiene station (hs1, hs2....hsn), said robotic unit provides instruction for correct hygiene at the hygiene station, thereby enabling said system to provide an extensible platform for augmenting interactions capabilities based on users’ feedback.
2. An autonomous social robotic system as claimed in claim 1 wherein said component (as1) is a soap dispenser.
3. An autonomous social robotic system as claimed in claim 2 wherein said soap dispenser is connected to a sensor in order to dispense soap on requirement.
4. A method of operating an autonomous social robotic system, wherein the method comprises:
- Detecting the presence of user by the robot vision system (RC1) comprising of robot camera (rc1)
- Activating the Robot Sensor Microphone (rmf1), Interaction System (is) and Robot Control Unit (rcu)
o Activating the keyword spotting system by the Robot Sensor Microphone
o Activating the conversation including animation and speech by interaction system (i.s)
o Performing speech appropriate gestures by robot control unit (rcu); whereupon
▪ the interaction system enquires about the user’s need
▪ in case the user has a specific need/request, the robotic system is directed to said other tasks
▪ in case the user does not have a specific need, the user is engaged in conversation to initiate handwashing
▪ the soap dispenser (as1) dispenses the soap and the interaction system provides instructions for proper handwashing using speech, gestures and animation
▪ after completing the handwashing, the handwashing quality is measured; if the handwashing quality is satisfactory, then the robotic system provides task-specific praise
▪ in case the handwashing quality is not satisfactory, then the robotic system provides feedback to improve the quality of handwashing