Abstract: HAR uses sensor technology together with TensorFlow, CNNs, and OpenCV to examine various human actions and motion patterns. The major goal of the invention is to automatically identify different human actions, starting with basic ones such as walking, jogging, sitting, and dancing. HAR identifies actions by cutting videos into clips or frames and converting 2D representations to 3D. Additionally, it employs deep learning and machine learning techniques to address these problems, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs). Owing to its applications, including surveillance and human-computer interaction, HAR has drawn considerable attention. Human activity recognition is a dynamic, quickly developing research area with a wide range of real-world applications. The combination of sensor technology, machine learning algorithms, and deep learning techniques shows significant potential for developing more advanced HAR systems that enable more precise and reliable activity recognition across a variety of technical domains. 3 Claims 2 Figures
Description: Field of the Invention
The invention pertains to the field of robotics, specifically focusing on Human Activity Recognition (HAR). HAR is a technology that enhances human-robot interaction by enabling robots to comprehend and interpret human actions. This capability facilitates various applications such as gesture recognition, task assistance, imitation learning, assistive robotics, and rehabilitation. HAR empowers robots to respond appropriately, adapt their behaviors, and provide personalized assistance to humans. By improving communication, safety, and efficiency in human-robot interactions, HAR contributes to the development of more intuitive and effective robotics applications.
Background of the Invention
Several patented projects, such as gesture recognition, emotion recognition, and sports performance analysis systems, have emerged in recent years, showcasing the continuous advancements in the field. However, while these individual projects offer valuable insights and capabilities, they remain limited in scope. In contrast, Human Activity Recognition (HAR) stands out as an innovation that combines the best features of these existing projects and extends their functionality even further.
U.S. Pat. No. 5,454,043 describes a hand gesture recognition system comprising: vector processing means that represents the hand in each image as a rotational vector calculated using a real-valued centroid, whereby the hand is sectored independently of pixel-grid quantization; and recognition means for analyzing sequences of rotational vectors to recognize hand gestures.
US6437820B1, Motion analysis system: This invention provides a motion analysis system for tracking the motion of objects, in which one or more cameras record a series of image frames while tracking the movement of one or more light-emitting markers attached to an object. Additionally, at least one light source in communication with the cameras produces optical trigger signals.
US9278255B2, System and method for activity recognition: This patent presents a method for the automatic recognition of human activity, which entails decomposing human activity into the various fundamental component attributes required to carry out an activity and creating, for each of the various targeted activities, ontologies of the fundamental component attributes discovered during the decomposing step. The method further entails classifying a human-performed activity as one of the targeted activities based on how closely its sequence of fundamental component attributes matches at least a portion of the activity.
Summary of the Invention
Human Activity Recognition (HAR) involves interpreting human motion using computer and machine vision technology. It recognizes activities, gestures, or behaviors recorded by sensors and translates them into actionable commands. HAR has various applications in sports training, security, entertainment, healthcare, and more. It enables automation, prediction, and analysis of human behavior, eliminating manual input.
HAR from video enables autonomous vehicles to predict pedestrian behavior and improves tasks like training, dancing, and gaming. It has widespread use in human-robot interactions and virtual reality scenarios. HAR is a valuable technology that enhances multiple aspects of our lives through its ability to understand and interpret human activity.
Brief Description of Drawings
The invention will be described in detail with reference to the exemplary embodiments shown in the figures wherein:
Fig 1: Running the Clips/Frames from the UCF50 dataset into an Image Classifier
Fig 2: Brief working of proposed model
The document includes a set of visual representations in the form of flowcharts and mindmaps to provide a clear and concise understanding of the proposed model and its functionality.
Figure 1 showcases the process of running video clips or frames from the UCF50 dataset through an Image Classifier. This diagram visualizes the sequence of steps involved in classifying the images and demonstrates the workflow of the classification process.
Figure 2 illustrates the brief working of the proposed model, presenting a visual representation of the key steps, libraries, tools and processes involved in its operation. This figure serves as a visual aid to enhance comprehension and provide an overview of the model's functioning. The inclusion of these visual aids enhances the overall clarity and comprehensibility of the project, providing a visual representation of the concepts and processes involved in an easily understandable manner.
Detailed Description of the Invention
To train a Human Activity Recognition (HAR) model, the first step is to create a dataset of labeled videos. Each video contains examples of an action that the system needs to recognize, such as walking, running, or jumping. By providing labeled videos as training data, the system learns to associate visual patterns with specific activity labels; for example, a few clips of people walking teach the model what "walking" looks like.
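For illustration, the sketch below shows one way such a labeled dataset might be enumerated, assuming the common one-folder-per-class layout used by datasets such as UCF50; the directory name and helper function are hypothetical, not part of the specification:

```python
import os

# Hypothetical layout: one sub-directory per action class, e.g.
# dataset/Walking/clip_001.avi, dataset/Running/clip_007.avi, ...
DATASET_DIR = "dataset"  # assumed path

def list_labeled_videos(root):
    """Return (video_path, label) pairs inferred from folder names."""
    samples = []
    for label in sorted(os.listdir(root)):
        class_dir = os.path.join(root, label)
        if not os.path.isdir(class_dir):
            continue
        for name in os.listdir(class_dir):
            if name.lower().endswith((".avi", ".mp4")):
                samples.append((os.path.join(class_dir, name), label))
    return samples

videos = list_labeled_videos(DATASET_DIR)
print(f"Found {len(videos)} labeled clips")
```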
After obtaining the dataset of labeled videos, the next step is to preprocess the videos before training the Human Activity Recognition (HAR) model. This involves converting the videos into frames or clips, resizing them to a standardized size, and normalizing the pixel values. Video manipulation and frame extraction can be performed using libraries like OpenCV, which provides efficient tools for working with videos and extracting individual frames for further processing. Preprocessing the videos ensures consistency and prepares the data for training the HAR model.
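A minimal sketch of this preprocessing step using OpenCV is shown below; the target resolution and the number of frames sampled per clip are illustrative assumptions, not values fixed by the invention:

```python
import cv2
import numpy as np

FRAME_SIZE = (64, 64)   # assumed target resolution
SEQUENCE_LENGTH = 20    # assumed number of frames sampled per clip

def extract_frames(video_path):
    """Sample SEQUENCE_LENGTH evenly spaced frames from a video,
    resize each to FRAME_SIZE, and normalize pixels to [0, 1].
    Clips shorter than SEQUENCE_LENGTH yield fewer frames."""
    capture = cv2.VideoCapture(video_path)
    total = int(capture.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // SEQUENCE_LENGTH, 1)
    frames = []
    for i in range(SEQUENCE_LENGTH):
        capture.set(cv2.CAP_PROP_POS_FRAMES, i * step)
        ok, frame = capture.read()
        if not ok:
            break
        frame = cv2.resize(frame, FRAME_SIZE)
        frames.append(frame.astype(np.float32) / 255.0)
    capture.release()
    return np.array(frames)
```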
After preprocessing the video frames or clips, the next step in Human Activity Recognition (HAR) is to extract relevant features that capture the necessary information for action recognition. This is typically done using pre-trained models available in libraries like TensorFlow. These models, such as C3D, I3D, or TSN, have been trained on large datasets and are capable of extracting meaningful features from video data. By leveraging these pre-trained models, the HAR system can obtain high-level representations of the video frames or clips, enabling accurate action recognition.
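The specification names C3D, I3D, and TSN as candidate extractors. As a lighter-weight illustration that also reflects the CNN-and-LSTM architecture recited in claim 1, the sketch below wraps a pre-trained 2D CNN (MobileNetV2 here, an assumed stand-in) in a TimeDistributed layer and feeds the per-frame features to an LSTM classifier; all layer sizes and hyperparameters are illustrative:

```python
import tensorflow as tf

NUM_CLASSES = 50          # assumed: one class per UCF50 action
SEQUENCE_LENGTH = 20      # must match the preprocessing step
FRAME_SHAPE = (64, 64, 3)

# Pre-trained per-frame CNN used as a frozen feature extractor.
cnn = tf.keras.applications.MobileNetV2(
    input_shape=FRAME_SHAPE, include_top=False, pooling="avg",
    weights="imagenet")
cnn.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQUENCE_LENGTH,) + FRAME_SHAPE),
    # Frames arrive in [0, 1]; rescale to the [-1, 1] range
    # MobileNetV2 expects.
    tf.keras.layers.Rescaling(scale=2.0, offset=-1.0),
    # Apply the CNN to every frame independently.
    tf.keras.layers.TimeDistributed(cnn),
    # Model temporal dynamics across the per-frame features.
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the CNN weights keeps training focused on the temporal LSTM head, a common choice when the labeled video dataset is comparatively small.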
Advantages of the Invention
Human Activity Recognition (HAR) offers several advantages in various domains:
1. Enhanced Human-Robot Interaction: HAR enables robots to recognize and understand human gestures, actions, and behaviors, leading to improved communication and collaboration between humans and robots. This enhances the overall human-robot interaction experience.
2. Healthcare Monitoring and Assistance: HAR plays a crucial role in healthcare settings by allowing robots with HAR capabilities to monitor patients' abnormal movements and activities. This helps healthcare professionals in assessing patients' conditions, providing timely interventions, and improving patient outcomes.
3. Personalized Assistance: HAR enables robots to provide personalized assistance and support to individuals based on their specific needs and preferences. By recognizing and understanding human actions, robots can offer tailored guidance, recommendations, and interventions to enhance the quality of life for individuals.
4. Immersive Gaming and Virtual Environments: HAR contributes to the development of realistic virtual characters in gaming and immersive environments. By accurately recognizing and interpreting human actions, HAR enables more immersive and interactive experiences, enhancing the realism and engagement of virtual worlds.
Claims: The scope of the invention is defined by the following claims:
1. The proposed invention, Human Activity Recognition using CNN & LSTM, comprises the following features:
a) A high level of accuracy in recognizing various actions, including but not limited to walking, running, and jumping.
b) The model aims to effectively process video frames or clips, enabling real-time monitoring and interaction capabilities.
c) The model demonstrates efficiency in terms of memory usage and processing time, ensuring optimal performance and resource utilization. It supports continuous learning and adaptation, allowing the model to incorporate new data or activities and improve its recognition capabilities over time.
2. As per claim 1, user privacy and data security are prioritized in the design and implementation of the model, ensuring the confidentiality and protection of sensitive information.
3. As per claim 1, the model is versatile and can be deployed in a wide range of real-world applications, including but not limited to surveillance and human-computer interaction scenarios.
| # | Name | Date |
|---|---|---|
| 1 | 202341067756-REQUEST FOR EARLY PUBLICATION(FORM-9) [10-10-2023(online)].pdf | 2023-10-10 |
| 2 | 202341067756-FORM FOR STARTUP [10-10-2023(online)].pdf | 2023-10-10 |
| 3 | 202341067756-FORM FOR SMALL ENTITY(FORM-28) [10-10-2023(online)].pdf | 2023-10-10 |
| 4 | 202341067756-FORM 1 [10-10-2023(online)].pdf | 2023-10-10 |
| 5 | 202341067756-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [10-10-2023(online)].pdf | 2023-10-10 |
| 6 | 202341067756-EVIDENCE FOR REGISTRATION UNDER SSI [10-10-2023(online)].pdf | 2023-10-10 |
| 7 | 202341067756-EDUCATIONAL INSTITUTION(S) [10-10-2023(online)].pdf | 2023-10-10 |
| 8 | 202341067756-DRAWINGS [10-10-2023(online)].pdf | 2023-10-10 |
| 9 | 202341067756-COMPLETE SPECIFICATION [10-10-2023(online)].pdf | 2023-10-10 |
| 10 | 202341067756-FORM-9 [28-10-2023(online)].pdf | 2023-10-28 |