
System/Method To Detect Real Time Abnormal Using Ai And Image Processing

Abstract: The invention focuses on real-time abnormal behavior detection in public spaces, particularly in college environments, using surveillance systems. By implementing behavior analysis modules, the system aims to identify various types of abnormal behavior, including intrusion, loitering, violence, and fall-down incidents. Leveraging anomaly detection techniques, the system provides timely and accurate insights, contributing to improved security and robustness. The goal is to automatically detect deviations from normal behavior patterns, ensuring a swift response to potential threats. Through a combination of interdisciplinary expertise in computer vision, machine learning, and data analysis, the invention aims to create an efficient and user-friendly solution that enhances safety in public places. The ethical considerations regarding privacy and the responsible use of surveillance technologies are integral to the implementation process. 3 Claims & 2 figures


Patent Information

Application #
202341078531
Filing Date
18 November 2023
Publication Number
52/2023
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
Parent Application

Applicants

MLR Institute of Technology
Laxman Reddy Avenue, Dundigal-500043

Inventors

1. Mr. K Vishwanath Reddy
Department of Computer Science and Engineering – Artificial Intelligence and Machine Learning, MLR Institute of Technology, Laxman Reddy Avenue, Dundigal-500043
2. Mr. B Vamshi Yadav
Department of Computer Science and Engineering – Artificial Intelligence and Machine Learning, MLR Institute of Technology, Laxman Reddy Avenue, Dundigal-500043
3. Mr. M Sudhansh Narayan
Department of Computer Science and Engineering – Artificial Intelligence and Machine Learning, MLR Institute of Technology, Laxman Reddy Avenue, Dundigal-500043
4. Mr. Aarya Gothula
Department of Computer Science and Engineering – Artificial Intelligence and Machine Learning, MLR Institute of Technology, Laxman Reddy Avenue, Dundigal-500043

Specification

Description:
Field of the Invention
The field of invention for human behavior detection encompasses the identification and analysis of human actions across domains like security, healthcare, education, marketing, and more. It involves utilizing technologies such as computer vision and machine learning to interpret gestures, movements, and patterns. This innovation improves safety, enhances healthcare monitoring, personalizes education, optimizes marketing, and advances understanding of human behavior's impact on various sectors.
Objective of this invention
The objective of the invention is to develop an effective and accurate system for detecting abnormal human behaviors. This system aims to enhance safety, security, and efficiency across various domains by utilizing advanced technologies such as neural networks, computer vision, and behavioral analysis. The invention seeks to provide timely identification and classification of abnormal actions through the analysis of human movements, gestures, and physiological signals. By achieving these objectives, the invention contributes to improved decision-making, early intervention, and a deeper understanding of human behavior in diverse settings such as surveillance, healthcare, education, and marketing.
Background of the Invention
With the increase in the number of anti-social activities taking place, security has lately been given the utmost importance. Many organizations have installed CCTVs for constant monitoring of people and their interactions. However, constant monitoring of this data by humans to judge whether events are abnormal is a near-impossible task, as it requires a workforce and their continuous attention. This creates a need to automate the process. There is also a need to show which frame, and which parts of it, contain the unusual activity, which aids faster judgment of whether that unusual activity is abnormal.
The method involves generating a motion influence map for frames to represent the interactions captured in a frame. The main characteristic of the proposed motion influence map is that it effectively depicts the motion characteristics of movement speed, movement direction, and the size of the objects and their interactions within a frame sequence. The method further extracts frames with high motion influence values and compares them with the testing frames to automatically detect global and local unusual activities.
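For illustration, a minimal sketch of a block-wise motion influence measure is given below, assuming OpenCV's Farneback dense optical flow; the block size, the mean-magnitude aggregation, and the threshold are illustrative assumptions rather than the exact formulation of the motion influence map described here.

# Minimal sketch of a block-wise motion influence map, assuming OpenCV is
# available; block size, aggregation, and threshold are illustrative choices.
import cv2
import numpy as np

BLOCK = 16          # assumed block size in pixels
THRESHOLD = 2.0     # assumed mean-magnitude threshold for "high influence"

def motion_influence_map(prev_gray, curr_gray, block=BLOCK):
    """Return a coarse grid of mean optical-flow magnitudes per block."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    h, w = mag.shape
    grid = np.zeros((h // block, w // block), dtype=np.float32)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            grid[i, j] = mag[i*block:(i+1)*block, j*block:(j+1)*block].mean()
    return grid

def is_high_influence(grid, threshold=THRESHOLD):
    """Flag a frame whose overall motion influence exceeds the threshold."""
    return float(grid.mean()) > threshold

Frames flagged in this way could then be compared against the testing frames to localize global and local unusual activity.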
For instance, CN112389448B relates to detecting abnormal driving behaviors by considering both the state of the vehicle and the driver. The method involves several steps: it gathers real-time vehicle running information through a V2X vehicle-mounted terminal, determines the vehicle's lane and direction using V2X roadside devices or map data, evaluates the vehicle's running state based on its heading information, captures video of the driver using an onboard camera, analyzes the driver's behavior using an algorithm designed for detecting abnormal states, continuously monitors both the driver's actions and any interference by non-drivers, integrates all collected data for unified processing, judges the current state of vehicle operation, and establishes a system of graded early warnings and emergency measures based on the Protection Motivation Theory. This comprehensive approach increases the accuracy of detecting abnormalities and significantly enhances driving assistance safety.
Similarly, CN103679749B describes an image processing method and device that employs moving-target tracking. The method involves these steps: capturing multiple pairs of images (N = 2), recording the last frame as a reference, extracting the moving target's profile from each pair of images and labeling its coordinates on the reference frame as the objective contour, selecting the pair of images with the most distant moving-target profile as the target frame, extracting an area image from the target frame and the reference frame where the objective contour lies, and combining this area image with the objective contour in the reference frame. This method effectively tracks moving objects across multiple frames, reconstructs blocked background regions, and achieves the erasure of moving objects.
For instance, CN111881750A relates to an advanced crowd abnormality detection method based on an enhanced generative adversarial network (GAN). By incorporating background removal, multi-scale optical flow, and a self-attention mechanism, the method predicts future frames more vividly and accurately detects abnormal crowd behavior. The approach involves removing the background to enhance image details, integrating time sequence information through multi-scale optical flow, and employing a self-attention mechanism to capture image geometry and distribution effectively. This improved GAN predicts future frames, yielding higher-quality predictions than traditional methods, consequently enhancing the accuracy of detecting crowd abnormality with implications for various surveillance and security applications.
Similarly, CN111881750A relates to detecting abnormal crowd events. This method involves several key steps: it starts by acquiring a video for analysis and extracting the skeletal posture of pedestrians from the video frames, thereby obtaining essential pedestrian skeleton information. For each pedestrian in a frame, a time sequence of their skeletal information is generated. By combining these time sequences from multiple pedestrians in a frame, a sequence group is formed. This group is then subjected to classification using a recurrent neural network, which is trained to identify abnormal events within the corresponding pedestrians. This approach revolutionizes crowd abnormal event detection: by extracting pedestrian skeletal information and employing recurrent neural networks, it significantly improves the accuracy of detecting anomalies amidst crowds. This innovation holds immense potential for enhancing security, surveillance, and public safety systems. Its ability to efficiently process and interpret complex crowd behavior through skeletal analysis and neural network classification ensures its relevance in various real-world applications, ultimately leading to more effective and reliable abnormal event detection.
For instance, CN110135319B is related to a system for detecting abnormal human behaviors in videos. The method involves three key steps: extracting dynamic human skeleton joint points from videos using a neural network model, generating surface behavior features through an ST-GCN network, and utilizing an abnormal behavior classifier model to identify and classify the detected behavior. The system consists of a video monitoring module and a network model integration module. By combining advanced neural network techniques, this innovation accurately processes diverse human behaviors and substantial skeleton data. It automatically detects abnormal actions in video monitoring scenarios, enhancing the effectiveness of surveillance and ensuring prompt response to anomalies.
Summary of the Invention
The project develops a system that can detect abnormal events and objects. It uses object detection and image classification to recognize abnormalities; image classification is used to identify the type of object, such as a person or vehicle. The system uses a pre-trained deep learning model to identify and highlight abnormal events and objects in road images. The abnormality detection system for roads uses computer vision and object detection methods to enhance road safety and traffic management. It identifies potential safety hazards on roads, such as accidents, pedestrians, and wrong-way driving, allowing for quicker responses and preventive measures. Detecting traffic congestion and abnormal traffic flow patterns helps optimize traffic management and reduce congestion. By processing road images in real time, the system provides instant feedback on abnormal events, enabling timely intervention.
Brief Description of Drawings
The invention will be described in detail with reference to the exemplary embodiments shown in the figures wherein:
Figure-1: Flowchart of object detection
Figure-2: Flow chart representing the basic architecture and workflow of the developed prototype
Detailed Description of the Invention
In today's urban landscapes, road safety and effective traffic management are of paramount importance. The "Road Abnormality Detection using Computer Vision and Object Detection" project addresses these concerns by harnessing cutting-edge technologies to create a robust system that identifies and highlights abnormal events, objects, and behaviors on roadways. By leveraging the power of computer vision and deep learning-based object detection, this project aims to contribute to safer roads and smarter traffic management.
During the initial phases of development, the system's model undergoes rigorous training using an extensive and diverse dataset. Road safety and efficient traffic management are critical concerns in urban environments. To address these issues, the project aims to develop a sophisticated Road Abnormality Detection system utilizing computer vision and object detection techniques. By analyzing road images, the system will identify and highlight abnormal events, objects, and behaviors, providing valuable insights for timely intervention and improved road safety.
Training involves preparing a labeled dataset, selecting a pre-trained model, fine-tuning it on the dataset, and evaluating its performance. Detection entails loading the trained model, processing input images, running inference, and visualizing the results. This comprehensive process ensures accurate detection of abnormalities on road images, contributing to improved road safety and traffic management. Collect a diverse dataset of road images containing various scenarios, including normal and abnormal situations (accidents, congestion, pedestrians, etc.).
Annotate the dataset by labeling each object of interest in the images with bounding boxes. Annotations should include class labels (e.g., car, pedestrian) and potentially segmentation masks. Choose a suitable pre-trained object detection model from the TensorFlow Model Zoo, based on factors like accuracy and speed. Initialize the model using its architecture and weights. Use transfer learning by reusing the pre-trained model's convolutional layers and replacing the classification and detection heads.
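A simplified Keras sketch of this transfer-learning step is given below; it is not the TensorFlow Model Zoo training pipeline itself, and the backbone choice, input size, class list, and single-box regression head are illustrative assumptions.

# Transfer-learning sketch: reuse a pre-trained backbone and attach new
# classification and bounding-box heads. Class names, input size, and the
# single-box head are assumptions for illustration only.
import tensorflow as tf

NUM_CLASSES = 4  # assumed labels, e.g. car, pedestrian, accident, congestion

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(320, 320, 3), include_top=False, weights="imagenet")
backbone.trainable = False  # freeze the pre-trained convolutional layers

inputs = tf.keras.Input(shape=(320, 320, 3))
features = backbone(inputs, training=False)
pooled = tf.keras.layers.GlobalAveragePooling2D()(features)

# New heads replacing the original classifier: one for class labels,
# one regressing a single normalised box [ymin, xmin, ymax, xmax].
class_head = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax",
                                   name="class_label")(pooled)
box_head = tf.keras.layers.Dense(4, activation="sigmoid",
                                 name="bounding_box")(pooled)

model = tf.keras.Model(inputs, [class_head, box_head])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss={"class_label": "sparse_categorical_crossentropy",
          "bounding_box": "mse"})
model.summary()

In practice the Model Zoo detectors ship with their own multi-box detection heads; the sketch only illustrates freezing the pre-trained convolutional layers and attaching new classification and regression heads.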
Train the model on the annotated dataset using a loss function that penalizes the differences between predicted and ground truth bounding boxes and class labels. Iterate through multiple epochs to optimize the model's performance. Once trained, the model processes road images and generates predictions for detected objects, including bounding box coordinates, class labels, and confidence scores.
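The following sketch shows how such predictions might be obtained from an exported model, assuming it exposes the standard TensorFlow detection signature (detection_boxes, detection_classes, detection_scores); the export path, test image name, and 0.5 confidence cut-off are assumptions for illustration.

# Hedged inference sketch: load an exported detector and run it on one image.
import numpy as np
import tensorflow as tf
import cv2

detector = tf.saved_model.load("exported_model/saved_model")  # assumed export path

def detect(image_bgr, min_score=0.5):
    """Run the detector on a BGR image and return boxes, classes, scores."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    batch = tf.convert_to_tensor(rgb[np.newaxis, ...], dtype=tf.uint8)
    outputs = detector(batch)
    scores = outputs["detection_scores"][0].numpy()
    keep = scores >= min_score
    boxes = outputs["detection_boxes"][0].numpy()[keep]   # normalised [ymin, xmin, ymax, xmax]
    classes = outputs["detection_classes"][0].numpy()[keep].astype(int)
    return boxes, classes, scores[keep]

image = cv2.imread("road_scene.jpg")  # assumed test image
boxes, classes, scores = detect(image)
for box, cls, score in zip(boxes, classes, scores):
    print(f"class {cls}  score {score:.2f}  box {box}")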
Detected objects are visually highlighted on the original images, allowing easy interpretation of the abnormalities. Rapid detection of abnormal events such as accidents and unexpected congestion leads to quicker response times and potentially prevents accidents. The model's core focus is on training a deep learning model to detect various anomalies, including accidents, congestion, pedestrians, and more, in real-time road images. By harnessing the power of transfer learning, the pre-trained model is fine-tuned using a diverse dataset that encompasses a wide range of road scenarios. The trained model's effectiveness is validated through rigorous testing and performance evaluation.
The key features of the invention include the integration of real-time camera feeds for continuous road monitoring, an intuitive visualization interface that overlays bounding boxes on detected anomalies, and the potential for seamless integration with existing traffic management systems. The invention's adaptability to different road environments and the flexibility to identify various abnormal events make it a valuable tool for proactive decision-making and timely interventions. The anticipated impact of the project includes a significant reduction in accidents, improved traffic flow management, and data-driven insights for urban planning and infrastructure improvements. The project's ability to contribute to safer road environments and smarter urban ecosystems positions it as a crucial step towards achieving enhanced road safety and traffic efficiency.
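A sketch of the real-time monitoring loop with bounding-box overlays is given below, reusing the detect() helper from the previous sketch; the camera index (or stream URL) and the label map are assumptions.

# Real-time monitoring sketch: read camera frames, run detection, and overlay
# bounding boxes with class labels and confidence scores.
import cv2

LABELS = {1: "car", 2: "pedestrian", 3: "accident", 4: "congestion"}  # assumed label map

cap = cv2.VideoCapture(0)  # assumed camera index or stream URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    boxes, classes, scores = detect(frame)  # helper from the inference sketch
    for (ymin, xmin, ymax, xmax), cls, score in zip(boxes, classes, scores):
        p1 = (int(xmin * w), int(ymin * h))
        p2 = (int(xmax * w), int(ymax * h))
        cv2.rectangle(frame, p1, p2, (0, 0, 255), 2)
        cv2.putText(frame, f"{LABELS.get(cls, cls)} {score:.2f}",
                    (p1[0], max(p1[1] - 5, 15)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    cv2.imshow("Road abnormality detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()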
ADVANTAGES OF THE PROPOSED MODEL
Leveraging advanced techniques like neural networks and multi-modal data analysis, the proposed system achieves higher accuracy in distinguishing abnormal behaviors from normal ones. By integrating various sources of data such as skeletal information, facial expressions, and physiological signals, the proposed system provides a more comprehensive view of human behavior, leading to more reliable detections.
The proposed system's real-time monitoring and analysis enable immediate responses to abnormal behaviors, reducing response time and potential risks. Incorporating sophisticated algorithms and multi-dimensional data, the proposed system minimizes false positive alerts, ensuring that detected anomalies are genuinely significant. The system's ability to adapt to different environments and scenarios enhances its effectiveness in diverse settings, whereas some existing systems might struggle to adapt to new situations. The proposed system's detailed behavioral analysis provides deeper insights into human actions, aiding better understanding and decision-making. The proposed system's automation reduces the need for constant human monitoring, freeing up resources for other critical tasks. In healthcare and education, the system's multi-modal analysis leads to personalized interventions and services for individuals, a level of customization that some existing systems lack.

Claims:
The scope of the invention is defined by the following claims:
1. The system/method to detect real-time abnormal behavior using AI and image processing, comprising:
a) A human behavior detection unit to monitor people's movements, facial expressions, and other behaviors. The system also monitors people's heart rate, blood pressure, and other physiological signals. The human behavior detection can be used to improve healthcare; it can be used to monitor patients' health or to detect signs of disease.
b) The human behavior detection unit monitors people's eye movements, facial expressions, and other behaviors. It can be used in the field of education to track student attention or to identify students who are struggling.
c) The human behavior detection unit monitors people's browsing history, social media activity, and other online behavior. It can be used in marketing to target ads to specific individuals or to measure the effectiveness of marketing campaigns.
2. According to claim 1, the human behavior detection can be used to improve our understanding of human behavior by studying how people interact with each other and with their environment. This can be done by monitoring people's facial expressions, body language, and other nonverbal cues.
3. According to claim 1, the human behavior detection can be used to develop new treatments for mental health disorders. It can be used to track the progress of patients with depression or anxiety. This can be done by monitoring people's facial expressions, speech patterns, and other behaviors.

Documents

Application Documents

# Name Date
1 202341078531-REQUEST FOR EARLY PUBLICATION(FORM-9) [18-11-2023(online)].pdf 2023-11-18
2 202341078531-FORM-9 [18-11-2023(online)].pdf 2023-11-18
3 202341078531-FORM FOR STARTUP [18-11-2023(online)].pdf 2023-11-18
4 202341078531-FORM FOR SMALL ENTITY(FORM-28) [18-11-2023(online)].pdf 2023-11-18
5 202341078531-FORM 1 [18-11-2023(online)].pdf 2023-11-18
6 202341078531-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [18-11-2023(online)].pdf 2023-11-18
7 202341078531-EVIDENCE FOR REGISTRATION UNDER SSI [18-11-2023(online)].pdf 2023-11-18
8 202341078531-EDUCATIONAL INSTITUTION(S) [18-11-2023(online)].pdf 2023-11-18
9 202341078531-DRAWINGS [18-11-2023(online)].pdf 2023-11-18
10 202341078531-COMPLETE SPECIFICATION [18-11-2023(online)].pdf 2023-11-18