Abstract: This document describes a robotic device intended primarily for use in the field of industrial supervision and maintenance. The device is a four-wheeled vehicle that can cover an area of the factory floor limited by the range of WLAN connectivity from the main router. Multiple sensors are installed on the device, all of which are connected to the main microcontroller. Depending on the values received from the respective sensors, the microcontroller updates the data on a cloud-based web server, from where the concerned departments are notified via mail. The device is also equipped with a camera that can capture photos, record short videos and stream live video sessions depending on the situation and the verbal commands received from the respective user. The device's movements can be remotely controlled using Google Assistant services. Using voice commands, the user can instruct the robot to follow Auto-Pilot Mode, in which the device drives itself using the 'Obstacle Avoiding Algorithm', or the robot can follow the hand gestures of the user, reach a specific location and provide necessary information about any adverse situation detected by the pre-installed sensors. All communication between the user and the device takes place over an Internet of Things platform.
Description: The technical features of the entire system, along with their respective methodologies, are briefly explained below: -
A. GOOGLE ASSISTANT SERVICES: -
The user can maneuver the device's movements and switch between the various available modes by using a Google Assistant device (for example, any Google Home product, an Android device, or any Google Assistant SDK enabled SBC). The Google account with which the device is signed in needs to be linked beforehand with the backend architecture of the entire protocol. Both the Google Assistant device and the robot need to be connected to the Internet in order to communicate properly. The most important component in this case is the Web Service Chain. The web service chain accepts the voice commands from the user via the Google Assistant device and outputs the necessary commands for controlling the robot. The entire web chain is discussed in the following points: -
• On receiving a voice command, the IFTTT (If This Then That) applet is triggered directly. IFTTT is a web-based service for creating web chains.
• The triggered applet is directly connected to ADAFRUIT.IO (a cloud-based IoT service platform), for which all necessary authorization steps need to be taken beforehand.
• For every verbal command made by the user, the applet publishes specific data on to the ADAFRUIT.IO cloud platform.
• That published value is stored as a feed in ADAFRUIT.IO.
• The microcontroller installed on the device needs to be subscribed beforehand to that specific feed in order to receive commands from the cloud.
• Depending on the commands received from the cloud, the microcontroller takes the necessary decisions.
Once the web chain is established properly, the user just has to mention the mode of operation verbally and the robot will begin to work accordingly. As confirmation, a response message will also be generated.
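As an illustrative sketch of the final two steps, the following Arduino-style C++ code shows a NodeMCU subscribing to an Adafruit IO feed over MQTT using the Adafruit MQTT client library. The feed name robot-command and all credentials are placeholders, not values taken from this specification.

```cpp
#include <ESP8266WiFi.h>
#include "Adafruit_MQTT.h"
#include "Adafruit_MQTT_Client.h"

// Placeholder credentials -- substitute the real Wi-Fi and Adafruit IO details.
#define WLAN_SSID      "your-ssid"
#define WLAN_PASS      "your-password"
#define AIO_SERVER     "io.adafruit.com"
#define AIO_SERVERPORT 1883
#define AIO_USERNAME   "your-aio-username"
#define AIO_KEY        "your-aio-key"

WiFiClient client;
Adafruit_MQTT_Client mqtt(&client, AIO_SERVER, AIO_SERVERPORT, AIO_USERNAME, AIO_KEY);

// Feed that the IFTTT applet publishes the voice commands to (name assumed).
Adafruit_MQTT_Subscribe commandFeed(&mqtt, AIO_USERNAME "/feeds/robot-command");

void setup() {
  Serial.begin(115200);
  WiFi.begin(WLAN_SSID, WLAN_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(500);
  mqtt.subscribe(&commandFeed);           // register interest in the feed
}

void loop() {
  if (!mqtt.connected()) mqtt.connect();  // (re)connect to Adafruit IO
  Adafruit_MQTT_Subscribe *sub;
  while ((sub = mqtt.readSubscription(5000))) {
    if (sub == &commandFeed) {
      // e.g. "voice", "gesture" or "autopilot", as published by the applet
      Serial.println((char *)commandFeed.lastread);
    }
  }
}
```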
The general system flow diagram used for ‘Triggering’ using Google Assistant Services is shown in Fig.2.
B. VOICE-CONTROLLED MODE: -
After the robot is turned on, the user first needs to issue the command 'Start Voice-Controlled Mode'. An audio response message is received instantly as confirmation of this command. Next, the user uses voice commands (like FORWARD, BACKWARD, RIGHT, LEFT, STOP) to control the robot's movements.
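A minimal sketch of how the received command strings might be mapped to motor-driver outputs is given below. The L298N-style H-bridge wiring and the NodeMCU pin choices are assumptions made for illustration; handleCommand() would be called with the value read from the command feed in the previous sketch.

```cpp
// Assumed H-bridge input pins on the NodeMCU (illustrative only).
const int IN1 = D1, IN2 = D2, IN3 = D3, IN4 = D4;

void setup() {
  pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT);
  pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT);
}

// Set the left and right motor directions through the H-bridge.
void drive(int l1, int l2, int r1, int r2) {
  digitalWrite(IN1, l1); digitalWrite(IN2, l2);
  digitalWrite(IN3, r1); digitalWrite(IN4, r2);
}

void handleCommand(const String &cmd) {
  if      (cmd == "FORWARD")  drive(HIGH, LOW,  HIGH, LOW);
  else if (cmd == "BACKWARD") drive(LOW,  HIGH, LOW,  HIGH);
  else if (cmd == "LEFT")     drive(LOW,  HIGH, HIGH, LOW);   // pivot left
  else if (cmd == "RIGHT")    drive(HIGH, LOW,  LOW,  HIGH);  // pivot right
  else if (cmd == "STOP")     drive(LOW,  LOW,  LOW,  LOW);
}

void loop() { /* commands arrive via the MQTT feed shown earlier */ }
```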
C. HAND GESTURE: -
This robotic device is fully controlled using Google Assistant services, so by using the command 'Start the Hand Gesture Mode' via Google Assistant, the mode of the robot car can be changed from anywhere in the world. First, the user has to wear a glove on which an accelerometer, an Arduino Nano R3 and an RF transmitter are installed. By measuring the acceleration along the X and Y axes using the accelerometer, the glove sends a signal to the robot so that it moves in accordance with the input gesture. The robot car can be controlled with high accuracy from a distance of approximately 80 meters. On bending the wrist, the glove sends a specific signal to the robot specifying the direction in which it should move. A tap switch turns the glove on or off; when it is off, the robot continues to follow its last command.
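A sketch of the glove-side transmitter is given below, assuming an analog accelerometer such as the ADXL335 and a simple 433 MHz ASK module driven by the RadioHead library; the tilt thresholds are illustrative and would need calibration on real hardware.

```cpp
#include <RH_ASK.h>  // RadioHead driver for simple ASK/OOK RF modules
#include <SPI.h>     // required for RadioHead to compile

RH_ASK rf;                         // defaults: 2000 bps, TX on pin 12
const int X_PIN = A0, Y_PIN = A1;  // analog axes of the accelerometer

void setup() {
  rf.init();
}

void loop() {
  int x = analogRead(X_PIN);  // tilt along X
  int y = analogRead(Y_PIN);  // tilt along Y
  char cmd = 'S';             // default: stop / hold position
  // Thresholds are placeholders; calibrate against the resting readings.
  if      (y > 400) cmd = 'F';  // wrist bent forward
  else if (y < 330) cmd = 'B';  // wrist bent backward
  else if (x > 400) cmd = 'R';
  else if (x < 330) cmd = 'L';
  rf.send((uint8_t *)&cmd, 1);  // one-byte direction code to the robot
  rf.waitPacketSent();
  delay(100);
}
```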
D. AUTO PILOT: -
Apart from the Hand Gesture Mode, the robot has another capability: the Auto Pilot Mode, or Self-Driving Mode. The user can switch to this mode via Google Assistant using the command 'Start the Auto Pilot Mode'. In this mode the robot senses the static objects in front of it and avoids them. The model carries a total of three ultrasonic sensors, each of which can sense an object in front of it and measure its distance. After obtaining the distance, the robot avoids the object, treating it as an obstacle in its path. The two front corners of the robot are protected by two of these ultrasonic sensors. The sensors use SONAR: by calculating the time elapsed between emitting a sound wave and receiving its reflection, the distance between the source and the obstacle is measured. If an obstacle is sensed, the robot first moves backward; then the ultrasonic sensor mounted on a servo motor sweeps from left to right, and based on the readings from this sensor, the microcontroller decides whether to turn left or right by comparing the respective distances. After turning, if the obstacle has been avoided successfully, the robot continues to move forward; otherwise, the above process repeats until the obstacle is avoided. The two corner-mounted ultrasonic sensors at the front are always active to avoid collisions. The sensitivity of the ultrasonic sensors can be adjusted manually.
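The following sketch illustrates the described avoidance loop for the servo-mounted center sensor, assuming HC-SR04-type ultrasonic sensors; the pin numbers, the 25 cm trigger distance and the motor helper functions are placeholders rather than specified values.

```cpp
#include <Servo.h>

const int TRIG = D5, ECHO = D6;  // center ultrasonic sensor (pins assumed)
Servo scanner;                   // servo carrying the center sensor

// Placeholder motor helpers -- see the motor-driver sketch above.
void moveForward()  { /* ... */ }
void moveBackward() { /* ... */ }
void turnLeft()     { /* ... */ }
void turnRight()    { /* ... */ }

long readDistanceCm() {
  digitalWrite(TRIG, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG, HIGH); delayMicroseconds(10);  // 10 us trigger pulse
  digitalWrite(TRIG, LOW);
  long us = pulseIn(ECHO, HIGH, 30000UL);  // echo round trip, 30 ms timeout
  if (us == 0) return 999;                 // timeout: treat as clear path
  return us / 58;                          // ~58 us of round trip per cm
}

void avoidObstacle() {
  moveBackward();
  scanner.write(160); delay(400);
  long left = readDistanceCm();   // look left
  scanner.write(20);  delay(400);
  long right = readDistanceCm();  // look right
  scanner.write(90);              // recenter
  if (left > right) turnLeft(); else turnRight();
}

void setup() {
  pinMode(TRIG, OUTPUT);
  pinMode(ECHO, INPUT);
  scanner.attach(D7);  // servo signal pin (assumed)
  scanner.write(90);   // face forward
}

void loop() {
  if (readDistanceCm() < 25) avoidObstacle();  // 25 cm threshold (adjustable)
  else moveForward();
}
```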
E. DATA ACQUISITION AND ANALYTICS: -
The four sensors installed on the robotic device continuously sense the surrounding environment, and at fixed intervals of 15 minutes the device uploads the collected data to the ThingSpeak IoT platform. The data is stored there, and the user can refer to it at any time for further analysis and visualization. Specific MATLAB programs are already stored in the cloud, using which the user can obtain correlated graphs based on real-time analysis. For example, a graph showing the raw values received from the gas sensor (one of the sensors installed on the device) is given in Fig.3.
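A minimal sketch of the periodic upload is shown below, using ThingSpeak's standard HTTP update endpoint; the write API key, the field-to-sensor mapping and the use of raw analog readings are assumptions for illustration.

```cpp
#include <ESP8266WiFi.h>

const char *HOST = "api.thingspeak.com";
const char *API_KEY = "YOUR_WRITE_API_KEY";              // placeholder
const unsigned long INTERVAL_MS = 15UL * 60UL * 1000UL;  // 15 minutes
unsigned long lastUpload = 0;

// One GET request per update; each channel field carries one sensor value.
void uploadReadings(int f1, int f2, int f3, int f4) {
  WiFiClient client;
  if (!client.connect(HOST, 80)) return;
  client.print(String("GET /update?api_key=") + API_KEY +
               "&field1=" + f1 + "&field2=" + f2 +
               "&field3=" + f3 + "&field4=" + f4 + " HTTP/1.1\r\n" +
               "Host: " + HOST + "\r\nConnection: close\r\n\r\n");
}

void setup() {
  WiFi.begin("your-ssid", "your-password");  // placeholders (see Section H)
  while (WiFi.status() != WL_CONNECTED) delay(500);
}

void loop() {
  if (millis() - lastUpload >= INTERVAL_MS) {
    lastUpload = millis();
    // Replace the arguments with the real readings of the four sensors.
    uploadReadings(analogRead(A0), 0, 0, 0);
  }
}
```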
F. VIDEO SURVEILLANCE: -
The NodeMCU uses an ArduCam for streaming live video sessions. The user can watch live video, click snapshots and record short video clips using his/her smartphone. This feature helps the authority monitor an emergency situation closely and take the necessary actions accordingly. Here, the NodeMCU acts as an HTTP server and uses WebSocket technology to stream live video via the cloud platform. This WebSocket service is a completely standalone web interface and can easily be accessed from any other web server.
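A skeleton of the streaming server is sketched below using the widely used arduinoWebSockets library; the port, the buffer size and the camera read (left as a stub, since ArduCam initialization is lengthy) are assumptions for illustration.

```cpp
#include <ESP8266WiFi.h>
#include <WebSocketsServer.h>  // arduinoWebSockets library

WebSocketsServer webSocket(81);  // WebSocket endpoint on port 81 (assumed)

// Stub: fetch one JPEG frame from the ArduCam FIFO into buf and return its
// length. Real code would use the ArduCAM library's capture/read calls.
size_t captureFrame(uint8_t *buf, size_t maxLen) {
  (void)buf; (void)maxLen;
  return 0;
}

void setup() {
  WiFi.begin("your-ssid", "your-password");  // placeholders (see Section H)
  while (WiFi.status() != WL_CONNECTED) delay(500);
  webSocket.begin();
}

void loop() {
  webSocket.loop();            // service connected viewers
  static uint8_t frame[8192];  // illustrative frame buffer
  size_t len = captureFrame(frame, sizeof(frame));
  if (len > 0) webSocket.broadcastBIN(frame, len);  // push frame to clients
}
```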
G. AUTO-GENERATED RESPONSES: -
As soon as the cloud receives a value crossing the pre-defined threshold from any of the sensors, the head of the entire industry is notified via mail. The corresponding sensor values, along with the link for live video streaming, are included in that mail. In addition, the head of the concerned department receives a notification alert so that the situation can be monitored and controlled effectively. This system works on the MQTT protocol (Message Queuing Telemetry Transport), where the concerned department needs to subscribe to the particular topic beforehand. The system flow diagram explaining the entire web service chain is given in Fig.4.
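Building on the MQTT client from the sketch in Section A, the snippet below shows how a threshold crossing might be published to an alert topic; the feed name, the threshold value and the payload format are illustrative assumptions.

```cpp
// Alert topic the concerned department subscribes to (name assumed).
Adafruit_MQTT_Publish alertFeed(&mqtt, AIO_USERNAME "/feeds/gas-alert");

const int GAS_THRESHOLD = 400;  // pre-defined threshold (calibration needed)

void checkAndAlert() {
  int gas = analogRead(A0);     // raw gas-sensor reading
  if (gas > GAS_THRESHOLD) {
    // The cloud side appends the live-stream link when composing the mail.
    char payload[32];
    snprintf(payload, sizeof(payload), "GAS:%d", gas);
    alertFeed.publish(payload);
  }
}
```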
H. CONNECTIVITY ENHANCEMENTS FOR THE ROBOT: -
First, the robot needs to be connected to a Wi-Fi network to get a stable Internet connection. On powering up, the device first searches for the previously used Wi-Fi credentials. At this stage, the built-in LED of the NodeMCU blinks at a certain frequency. If the corresponding hotspot is within range, the device connects automatically and the LED turns solid.
If the device cannot connect using the previously defined Wi-Fi credentials, the NodeMCU acts as an HTTP server. During this configuration phase, the LED blinks at a faster rate. A webpage is generated automatically, and after signing in to the NodeMCU server, the user can connect to a new Wi-Fi hotspot by entering the correct Wi-Fi SSID and password. From then onwards, the robot uses that hotspot for receiving commands and uploading all the necessary information. Moreover, from that captive portal the user can obtain the Chip ID, flash size, IP address and MAC address, and can also reset the entire module. If the webpage does not open instantly due to certain security factors, the user can enter the IP address 192.168.4.1 directly into any local web browser to sign in to the NodeMCU.
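The behavior described here (stored-credential retry, a fallback access point with a captive portal at 192.168.4.1, and module information pages) matches what the popular WiFiManager library provides; a minimal sketch assuming that library is given below, with the access-point name chosen arbitrarily.

```cpp
#include <ESP8266WiFi.h>
#include <WiFiManager.h>  // tzapu/WiFiManager captive-portal library

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);
  WiFiManager wm;
  // Tries the stored credentials first; on failure it starts an access
  // point (here named "RobotSetup") serving the config portal at 192.168.4.1.
  if (wm.autoConnect("RobotSetup")) {
    digitalWrite(LED_BUILTIN, LOW);  // NodeMCU LED is active-low: solid = connected
  }
}

void loop() {
  // Normal robot operation once connected.
}
```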
Claims: 1. This robot can be controlled via voice commands using 'Google Assistant Services'.
2. This robot is capable of sensing obstacles nearby and can avoid them efficiently (Auto-Pilot Mode).
3. The robot is equipped with WLAN connectivity.
4. The robot can sense its surrounding environmental conditions efficiently using all the installed sensors.
5. The robot updates the data received from all the sensors on the IoT based cloud platform from time to time.
6. The robot notifies the concerned department and the head of the department, using the MQTT protocol, whenever the values received from any of the sensors cross their pre-defined thresholds.
7. The robot can click pictures, record video clips and stream live videos using the ArduCam.
8. The robot's movements can be controlled in any of three modes ('Auto-Pilot', 'Hand-Gesture Controlled Mode' and 'Voice-Controlled Mode') as per the user's convenience.
9. The robot consumes very low power as it uses efficient circuitry and low-power batteries.
10. The robot is also efficient and takes less computing and decision-making time.
| # | Name | Date |
|---|---|---|
| 1 | 201931036825-FORM 1 [12-09-2019(online)].pdf | 2019-09-12 |
| 2 | 201931036825-DRAWINGS [12-09-2019(online)].pdf | 2019-09-12 |
| 3 | 201931036825-COMPLETE SPECIFICATION [12-09-2019(online)].pdf | 2019-09-12 |
| 4 | 201931036825-FORM 18 [04-10-2023(online)].pdf | 2023-10-04 |
| 5 | 201931036825E_09-05-2024.pdf | |
| 6 | 201931036825-FER.pdf | 2025-03-20 |