
Object Detection and Hazard Alert System for Child Safety on Robot Using YOLO

Abstract: The proposed invention integrates YOLO (You Only Look Once) for real-time object detection to ensure child safety. By detecting hazards and sending notification alerts, the system significantly reduces hazardous situations and keeps children safe. It combines advanced machine learning and deep learning techniques for object detection, improving accuracy and flexibility in automated systems and dynamic environments. The innovation further ensures child safety by enabling secure communication between the system and parents.


Patent Information

Application #
Filing Date
25 July 2025
Publication Number
31/2025
Publication Type
INA
Invention Field
MECHANICAL ENGINEERING
Status
Parent Application

Applicants

MLR Institute of Technology
Hyderabad

Inventors

1. Dr. K.Sivakrishna
Department of CSE – AI&ML, MLR Institute of Technology, Hyderabad
2. Ms. J. Sree Vaishnavi
Department of CSE – AI&ML, MLR Institute of Technology, Hyderabad
3. Ms. A. Sreelekha
Department of CSE – AI&ML, MLR Institute of Technology, Hyderabad
4. Ms. S. Sindhu
Department of CSE – AI&ML, MLR Institute of Technology, Hyderabad
5. Ms. Ch. Varshitha
Department of CSE – AI&ML, MLR Institute of Technology, Hyderabad

Specification

Description:
Field of the Invention
The Object Detection and Hazard Alert System for Child Safety on Robot Using YOLO belongs to the field of robotics and object detection. It uses advanced technologies such as deep learning, computer vision, robotics, hazard alert systems, and object detection techniques like YOLO (You Only Look Once) to create a system that ensures child safety, building a safe environment for children by detecting hazardous objects and responding promptly in real time.
Background of the Invention
Child safety is a key concern, especially in environments such as the home, where toddlers cannot recognize harmful objects. Today, parents are occupied with professional responsibilities and household work, which makes continuous supervision of children difficult. With the rapid growth of robotics and Artificial Intelligence (AI), there is a growing need for automated systems that ensure child safety in real time, especially when parents are engaged in other tasks at home. Traditional child-safety and object-detection approaches rely on devices such as monitors or motion detectors that provide only basic monitoring and cannot detect objects efficiently in real time in a single pass. These systems use predefined rules and methods and cannot dynamically and accurately detect objects in real time. Existing approaches do not efficiently implement in-home child safety using YOLO with real-time alerts. Additionally, their lack of mobility and adaptability limits their coverage and effectiveness, making them inadequate for child safety in homes.
The document “1116383” describes an alert system that uses multiple sensors and machine learning techniques to analyse data and identify hazards. Multiple sensors are integrated to examine conditions and alert operators when dangerous objects are detected. The application is intended to enhance safety in industrial environments by identifying risks in real time and mitigating potential dangers.
The “11288509” patent, titled “Object Location”, outlines a method for assessing the position of objects in the surroundings using computational algorithms and integrated sensors. The system consists of a camera and other sensors, such as LiDAR, to detect and locate objects in a predefined environment. It creates a probability distribution over object locations and then analyses and acts on that distribution; a low probability indicates that the object is stable, so no alert is raised. The work aims to detect and locate objects accurately across various industries and augmented-reality environments.

The patent “US201510170002A1” describes object detection using Deep Neural Networks (DNNs). The method detects objects in images and creates masks, and is widely implemented in applications such as self-driving cars and image analysis where precision is important. Upon receiving an input image, a deep-learning object detector generates full and partial object masks for the objects present, and a bounding box is drawn for each object indicating its mask. The approach eliminates manual feature design, supports pixel-level detection, and addresses difficult problems without hand-crafted features.
The document “20130082831” concerns child safety in cars and describes a method of continuously cautioning a user about the presence of a child in a car. Leaving a child in a car for a long period is dangerous and can even be fatal. The system detects a child left in the car for a long period, continuously monitors the child, and sends a message to the parents about the child's position or any dangerous situation, ensuring the child's safety.
Summary of the Invention
The Object Detection and Hazard Alert System for Child Safety using YOLO is an innovative application that performs real-time object detection with YOLO to protect children from hazards. The system uses cameras for real-time object detection, with YOLO identifying objects in a single pass. The camera, integrated with sensors and pretrained models such as YOLO together with OpenCV, recognizes hazardous objects near the child and sends an immediate alert to parents or caretakers, ensuring a safe environment for children in real time. The system is designed to bridge the gaps in current solutions by introducing advanced models and technologies, such as YOLO and OpenCV, that detect objects efficiently in a single instance in real time. Traditional systems lack such real-time detection technologies for ensuring child safety in dynamic environments; their inability to efficiently detect or classify hazards near children highlights the need for a solution that provides real-time object detection, hazard identification, and automated alerts to secure the child's environment. The idea integrates Artificial Intelligence, deep learning, and object detection technology to provide a robust and effective hazard alert system, and its adaptability and efficiency make it an innovative system for the safety of children.

Brief Description of Drawings
The invention will be described in detail with reference to the exemplary embodiments shown in the figures wherein:
Figure-1: Flow chart representing the workflow of the system
Figure-2: Architecture for Hazard Detection System using Deep Learning for Child Safety.
Detailed Description of the Invention
The present innovation involves real-time hazard detection to ensure child safety. The system combines technologies such as YOLO and OpenCV to enhance its ability to recognize and respond to hazardous objects near a child, which is important in dynamic environments where objects change quickly. The innovation represents an advancement in child-safety systems, offering a better way to prevent unexpected accidents by providing a monitoring system that adapts to different conditions and keeps children safe in real time. The application includes a camera that navigates and monitors the surroundings, capturing images and videos for real-time object detection. YOLO (You Only Look Once) is the object detection technique used to detect objects in a single pass, and OpenCV is used to preprocess the frames and draw bounding boxes around the detected objects. The system uses computer vision techniques and sensors to ensure smooth operation, detect objects, classify whether each object is hazardous, and send an alert to the parent or caretaker so that the child stays safe from hazardous objects.
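The following is a minimal, illustrative sketch of this detection step, assuming the open-source Ultralytics YOLO package and OpenCV as one possible realization; the model weights, camera index, and the set of classes treated as hazardous are assumptions made only for illustration.

    # Illustrative sketch of single-pass detection with YOLO and OpenCV.
    import cv2
    from ultralytics import YOLO

    HAZARD_CLASSES = {"knife", "scissors"}   # assumed hazard classes for illustration

    model = YOLO("yolov8n.pt")               # placeholder pretrained YOLO weights
    cap = cv2.VideoCapture(0)                # camera mounted on the robot

    ok, frame = cap.read()
    if ok:
        results = model(frame)[0]            # detect all objects in one pass
        for box in results.boxes:
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            label = model.names[int(box.cls[0])]
            colour = (0, 0, 255) if label in HAZARD_CLASSES else (0, 255, 0)
            cv2.rectangle(frame, (x1, y1), (x2, y2), colour, 2)   # bounding box
            cv2.putText(frame, label, (x1, y1 - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, colour, 2)
    cap.release()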
The application flow starts with inspection of the home surroundings using a camera integrated with sensors. Object detection modules identify each object's type, shape, and position around the child, draw bounding boxes, and classify whether the object is hazardous. Based on the detection results, the system analyses the distance between the child and the hazardous object, sends an immediate alert to the parent or caretaker, and can also sound an alarm so that the parents can take immediate action and ensure the child is in a safe, secure place. The camera is designed to capture and examine large areas clearly from different viewing angles, with a high degree of clarity and flexibility for monitoring the environment. The system also uses Artificial Intelligence and attention mechanisms to focus only on relevant data and on the critical areas where the child is moving, which speeds up detection and analysis of unsafe objects. Real-time camera alerts and notifications allow the system to adapt to rapid changes in the environment and detect objects more accurately. It uses AI detection modules and Convolutional Neural Networks (CNNs) to process images and videos from the camera. The deep learning models are trained on predefined datasets to recognize various hazardous objects, such as sharp objects and other potential risks at home, so that hazards can be detected easily and the models can adapt to the environment and generalize to unseen data even in complex and dangerous settings.
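One simple way to express the child-to-hazard distance check is sketched below; the pixel threshold and helper names are assumptions, and a deployed system could instead use depth or other sensor data to obtain a metric distance.

    # Hedged sketch: proximity check between child and hazard bounding boxes.
    def box_centre(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    def is_too_close(child_box, hazard_box, threshold_px=150):
        # Euclidean distance between box centres in pixel space.
        cx, cy = box_centre(child_box)
        hx, hy = box_centre(hazard_box)
        distance = ((cx - hx) ** 2 + (cy - hy) ** 2) ** 0.5
        return distance < threshold_px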
Figure 1 outlines the object detection process for real-time hazard detection. The system first captures live video from the monitoring camera. These inputs are preprocessed with the OpenCV library to enhance image quality, remove noise and irrelevant data, and prepare them for analysis. The preprocessed frames are fed into the YOLO model, which performs object detection, draws bounding boxes, and identifies hazardous objects. Once objects are detected, the system assesses whether they pose an emergency. If a hazard is found, the system analyses the distance between the dangerous object and the child and triggers the notification system, which sends a notification alerting the user about the hazard. On receiving the notification, the user takes the necessary steps to mitigate the hazard; the final step provides specific instructions on how to respond to protect the child. Once the necessary action has been taken, the system concludes the task and remains ready to detect new hazards, ensuring a safe environment for the child in real time.
Figure 2 illustrates the workflow of the Hazard Detection System for child safety, which uses cameras and other platforms to monitor the environment and detect hazards in real time. The process begins with data collection, capturing images and videos from the camera. The system integrates AI and deep learning models, using YOLO together with OpenCV, to preprocess the frames and detect objects. It captures dangerous household objects, such as sharp items and bathtubs, draws bounding boxes around them, and distinguishes safe from unsafe objects; if an unsafe object is present, the system identifies it and immediately alerts the user. Figure 2 also represents the deep learning models trained on predefined datasets to decide whether an object is a hazard; the models learn from these datasets and generalize so that they can handle images and objects in more complex environments. The approach can also be used in many other sectors, including traffic, manufacturing, and healthcare. The project performs real-time object detection, capturing images accurately and efficiently in a single pass using YOLO (You Only Look Once), and leverages advanced technologies such as deep learning, Artificial Intelligence, and object detection to create efficient, robust solutions for ensuring the safety of children. This approach addresses parents' need for automated systems that ensure child safety: with professional responsibilities and household work, it is difficult for parents to balance work and child care, and this alert system helps parents and caretakers go about their work without constant worry about their children's safety.
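The overall loop of Figure 1 could be tied together roughly as follows; this is a sketch only, and the console message stands in for the SMS, app, or alarm notification described above.

    # Minimal end-to-end monitoring loop: capture, preprocess, detect, alert.
    import cv2
    from ultralytics import YOLO

    HAZARD_CLASSES = {"knife", "scissors"}    # assumed hazard classes

    def preprocess(frame):
        # OpenCV preprocessing: resize and light denoising before detection.
        frame = cv2.resize(frame, (640, 640))
        return cv2.GaussianBlur(frame, (3, 3), 0)

    def notify_parent(message):
        # Placeholder alert channel; a real system might send an SMS or push
        # notification, or sound an alarm.
        print("ALERT:", message)

    model = YOLO("yolov8n.pt")
    cap = cv2.VideoCapture(0)
    while cap.isOpened():                     # runs continuously while monitoring
        ok, frame = cap.read()
        if not ok:
            break
        detections = model(preprocess(frame))[0]
        labels = [model.names[int(b.cls[0])] for b in detections.boxes]
        if "person" in labels and any(l in HAZARD_CLASSES for l in labels):
            notify_parent("Hazardous object detected near the child.")
    cap.release()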
Equivalents
The present invention, an Object Detection and Hazard Alert System for Child Safety on Robots Using YOLO (You Only Look Once), implements advanced deep learning, object detection techniques, and AI algorithms to perform real-time hazard detection and send alerts to the user, with the aim of keeping children safe. While this innovation is implemented for child safety, it can also be applied in other domains that use automated systems for real-time hazard detection. The system can likewise be implemented with different types of integrated sensors and with other object detection techniques, such as Faster R-CNN, to achieve similar goals and functionality. The application can use a robot integrated with a camera, or drones, to expand its capability in automated systems.

Claims:
The scope of the invention is defined by the following claims:
1. The Object Detection and Hazard Alert System for Child Safety on Robots Using YOLO, comprising:
a) real-time hazard detection using multi-view images and live video streams captured from various angles around the child, with object detection techniques such as YOLO used to improve accuracy and provide immediate hazard detection, ensuring child safety;
b) an image attention mechanism that improves accuracy by prioritizing the key hazardous objects around the child, where attention gates highlight critical areas and obstacles while reducing the processing of irrelevant data; and
c) improved accuracy through the integration of data streams from the camera, including depth sensing, thermal imaging, and light data, allowing the system to adapt and detect effectively in changing conditions and ensure child safety in dynamic environments.
2. The system according to claim 1, wherein deep learning techniques, such as Convolutional Neural Networks, together with object detection techniques, are used for real-time object detection and monitoring, allowing the system to analyse hazardous situations and send alerts in real time across various environments, improving child safety.
3. The system as per claim 1, wherein the attention mechanisms ensure accurate and efficient detection of hazards by focusing on relevant data; the system can handle large volumes of visual data accurately while concentrating on the required data, simplifying detection and operation in various real-time scenarios.
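As a purely illustrative stand-in for the attention mechanism of claims 1(b) and 3, the sketch below crops a region of interest around the child's bounding box so that later processing ignores irrelevant parts of the scene; the margin and helper names are assumptions, and a trained attention gate would replace this heuristic in practice.

    # Illustrative region-of-interest crop around the child (assumed helper).
    def focus_region(frame, child_box, margin=100):
        h, w = frame.shape[:2]
        x1, y1, x2, y2 = child_box
        x1, y1 = max(0, x1 - margin), max(0, y1 - margin)
        x2, y2 = min(w, x2 + margin), min(h, y2 + margin)
        return frame[y1:y2, x1:x2]   # sub-image containing the critical area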

Documents

Application Documents

# Name Date
1 202541071004-REQUEST FOR EARLY PUBLICATION(FORM-9) [25-07-2025(online)].pdf 2025-07-25
2 202541071004-FORM-9 [25-07-2025(online)].pdf 2025-07-25
3 202541071004-FORM FOR STARTUP [25-07-2025(online)].pdf 2025-07-25
4 202541071004-FORM FOR SMALL ENTITY(FORM-28) [25-07-2025(online)].pdf 2025-07-25
5 202541071004-FORM 1 [25-07-2025(online)].pdf 2025-07-25
6 202541071004-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [25-07-2025(online)].pdf 2025-07-25
7 202541071004-EVIDENCE FOR REGISTRATION UNDER SSI [25-07-2025(online)].pdf 2025-07-25
8 202541071004-EDUCATIONAL INSTITUTION(S) [25-07-2025(online)].pdf 2025-07-25
9 202541071004-DRAWINGS [25-07-2025(online)].pdf 2025-07-25
10 202541071004-COMPLETE SPECIFICATION [25-07-2025(online)].pdf 2025-07-25