
System For Enabling Natural And Immersive Interaction With Computing Device

Abstract: Disclosed is a system for enabling natural and immersive interaction with a computing device, comprising: an image capture module configured to receive input from a webcam; a preprocessing module for color detection and masking of the received input; a hand detection and tracking module for identifying and following hand movements; a landmark extraction module for determining key points of hand gestures; a calibration module for aligning the system with the user’s specific environment; a cursor control module for translating the tracked hand movements into cursor movements on the computing device; and a drawing interaction module for converting the cursor movements into drawing commands within a user interface of the computing device. Fig. 1


Patent Information

Application #: 202421033150
Filing Date: 26 April 2024
Publication Number: 23/2024
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

MARWADI UNIVERSITY
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
SAUBHAGYA RAMESH VISHWAKARMA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
SIDDHARTH GAUTAM SINGH
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
ABHINANDAN KUMAR
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
AABHAS RAMESHCHANDRA MISHRA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
AKSHAY RANPARIYA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
SIMRIN FATHIMA SYED
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
DR. MADHU SHUKLA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
VIPUL LADVA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Inventors

1. SAUBHAGYA RAMESH VISHWAKARMA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
2. SIDDHARTH GAUTAM SINGH
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
3. ABHINANDAN KUMAR
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
4. AABHAS RAMESHCHANDRA MISHRA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
5. AKSHAY RANPARIYA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
6. SIMRIN FATHIMA SYED
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
7. DR. MADHU SHUKLA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA
8. VIPUL LADVA
MARWADI UNIVERSITY, RAJKOT- MORBI HIGHWAY, AT GAURIDAD, RAJKOT – 360003, GUJARAT, INDIA

Specification

Description

Field of the Invention

The present disclosure generally relates to human-computer interaction systems. Particularly, the present disclosure relates to a system for enabling natural and immersive interaction with a computing device.
Background
The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
Interactions with computing devices have evolved significantly over the years, from the early days of command-line interfaces to the current era of graphical user interfaces (GUIs). Initially, the primary means of interacting with computers were keyboards and mice, which, despite their widespread adoption, presented limitations in terms of natural and intuitive user experiences. The quest for more natural, intuitive, and immersive interaction methods has led to the exploration of various technologies, including touchscreens, voice recognition, and motion tracking. These advancements have enabled users to interact with computing devices in ways that are more aligned with human behaviors and expectations.
One area of particular interest has been the development of gesture-based interaction systems. These systems offer a way to interact with computing devices through hand gestures, eliminating the need for physical contact with a device. The appeal of gesture-based interactions lies in their ability to mimic natural human actions, making the interaction with digital content more intuitive. Gesture recognition involves capturing and interpreting human gestures via mathematical algorithms. To achieve this, various modules and technologies are employed, including image capture, preprocessing for specific features such as color detection, hand detection and tracking, landmark extraction for identifying gesture-specific keypoints, system calibration for environmental adaptation, cursor control for translating gestures into commands, and specialized interaction modules for specific tasks like drawing.
Despite the progress, challenges persist in the development of gesture-based interaction systems. The accurate detection and tracking of hand movements are complicated by factors such as varying lighting conditions, background clutter, and the diversity in hand sizes and shapes among different users. The efficiency of gesture recognition is heavily dependent on the system's ability to accurately identify and track hand movements in real-time. Additionally, the calibration of the system to work seamlessly across different environments and the translation of gestures into precise commands remain significant hurdles. These challenges affect the system's usability and the user's experience, often requiring sophisticated algorithms and models to address effectively.
Moreover, gesture-based systems necessitate robust preprocessing mechanisms to filter and mask irrelevant visual information, ensuring that only pertinent gesture-related data is processed. The precise extraction of landmarks or key points on the hand is critical for recognizing complex gestures and translating them into meaningful commands. Each module within a gesture-based interaction system must work cohesively to interpret user intentions accurately and provide a seamless interaction experience.
In light of the above discussion, there exists an urgent need for solutions that overcome the challenges associated with conventional interaction systems and techniques for enabling natural and immersive interaction with computing devices.

Summary
The present disclosure generally relates to human-computer interaction systems. Particularly, the present disclosure relates to a system for enabling natural and immersive interaction with a computing device.
The following presents a simplified summary of various aspects of this disclosure in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements nor delineate the scope of such aspects. Its purpose is to present some concepts of this disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The following paragraphs provide additional support for the claims of the subject application.
A system has been developed to enable natural and immersive interaction with computing devices through sophisticated recognition and tracking of hand gestures. This innovative system comprises a series of modules designed to capture, process, and interpret human hand movements, allowing for a highly intuitive user interface. The core of the system begins with an image capture module, which leverages a webcam to receive visual input. This input is then refined by a preprocessing module, which is adept at color detection and masking, ensuring that the relevant hand gestures are isolated from irrelevant background information. Following this preprocessing step, a hand detection and tracking module takes over, employing advanced techniques to identify and follow the movement of the user's hands through the captured video stream.
In an embodiment, the precision of hand movement capture is significantly enhanced by the image capture module's use of a high-resolution webcam. This enhancement allows for an exceptionally accurate detection of hand movements, which is crucial for the system's overall performance. The preprocessing module complements this by employing adaptive algorithms designed to effectively distinguish hand gestures from a variety of backgrounds and under different lighting conditions, ensuring consistent gesture recognition accuracy regardless of the environmental conditions.
In an embodiment, the system's ability to recognize and interpret hand gestures improves over time, thanks to the hand detection and tracking module's use of machine learning models. These models are trained on a wide array of hand movement data, allowing them to increasingly refine their gesture recognition capabilities as they are exposed to more user interactions.
In an embodiment, the landmark extraction module is particularly sophisticated, capable of identifying key points of hand gestures and distinguishing between multiple gestures simultaneously. This capability allows the system to interpret a wide range of commands from complex hand movements, significantly broadening the scope of user interaction.
In an embodiment, the calibration module is tailored to personalize the system according to the individual user's hand size, shape, and specific range of motion. This personalization ensures that the system can accurately interpret the intended gestures of any user, making the interaction experience more natural and intuitive.
In an embodiment, the cursor control module takes the recognized hand movements and translates them into cursor movements on the screen of the computing device. This module is adept at handling multi-dimensional movements, including depth, allowing for a more nuanced and three-dimensional interaction within the user's digital environment.
In an embodiment, the drawing interaction module further enhances the system's capabilities by allowing the cursor movements to be converted into drawing commands within a user interface. This module is designed to emulate the natural behavior of drawing tools by supporting varying levels of pressure and angle based on the detected hand movements, enabling a more authentic drawing experience.
In an embodiment, the versatility of the drawing interaction module is evident in its compatibility with a wide array of graphic design and digital art applications. This compatibility provides users with a broad range of creative functionalities, from simple sketching to complex graphic design tasks, all controlled through natural hand gestures.
In an embodiment, the system's comprehensive approach extends to a methodological framework for interacting with computing devices using natural hand gestures. This methodology encompasses capturing images through a webcam, preprocessing these images to highlight relevant hand gestures, and then detecting and tracking these movements. Following detection, the system extracts landmarks corresponding to key points of the gestures and calibrates itself to the user's specific environment. Finally, the recognized gestures are used to control a cursor and enable drawing interactions on the user interface, providing a seamless bridge between the user's natural hand movements and the digital response of the computing device.

Brief Description of the Drawings

The features and advantages of the present disclosure will be more clearly understood from the following description taken in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates a system for enabling natural and immersive interaction with a computing device, in accordance with the embodiments of the present disclosure;
FIG. 2 illustrates a method for interacting with a computing device using natural hand gestures, in accordance with the embodiments of the present disclosure; and
FIG. 3 illustrates a framework for landmarks of hand, in accordance with the embodiments of the present disclosure.

Detailed Description
In the following detailed description of the invention, reference is made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the invention may be practiced. In the drawings, like numerals describe substantially similar components throughout the several views. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined only by the appended claims and equivalents thereof.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
The present disclosure generally relates to human-computer interaction systems. Particularly, the present disclosure relates to a system for enabling natural and immersive interaction with a computing device.
Pursuant to the "Detailed Description" section herein, whenever an element is explicitly associated with a specific numeral for the first time, such association shall be deemed consistent and applicable throughout the entirety of the "Detailed Description" section, unless otherwise expressly stated or contradicted by the context.
FIG. 1 illustrates a system (100) for enabling natural and immersive interaction with a computing device, in accordance with the embodiments of the present disclosure. Specifically, the system (100) facilitates intuitive user interactions with the computing device through advanced image processing and gesture recognition techniques. This system (100) is designed to recognize and interpret human hand gestures, allowing users to interact with computing devices in a more natural and intuitive manner than traditional input methods such as keyboards and mice.
In an embodiment, the system (100) comprises an image capture module (102) configured to receive input from a webcam. This module (102) is responsible for capturing visual data from the user's environment. The captured visual data serves as the primary input for the system (100), enabling subsequent processing and interpretation of user gestures.
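By way of illustration only, a minimal capture loop of the kind the image capture module (102) performs might be sketched as follows in Python with OpenCV (a library choice assumed here; the disclosure does not name one):

```python
import cv2

# Minimal webcam capture sketch (OpenCV is an assumption, not mandated by the disclosure).
cap = cv2.VideoCapture(0)                    # open the default webcam
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)      # request a high-resolution stream
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

ok, frame = cap.read()                       # grab one BGR frame as system input
if ok:
    frame = cv2.flip(frame, 1)               # mirror so on-screen motion matches the hand
cap.release()
```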
In another embodiment, a preprocessing module (104) is provided for color detection and masking of the received input. The preprocessing module (104) processes the captured visual data to detect specific colors and apply masking techniques. These processes are essential for isolating relevant features from the background, thereby facilitating more accurate gesture recognition.
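One plausible realization of such color detection and masking is an HSV skin-color heuristic; the bounds below are illustrative defaults, not values taken from the disclosure:

```python
import cv2
import numpy as np

def mask_skin(frame_bgr):
    """Isolate roughly skin-colored pixels; the HSV bounds are hypothetical defaults."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([20, 160, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)             # 255 where the pixel matches
    # morphological opening suppresses small background speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
```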
In a further embodiment, a hand detection and tracking module (106) is included for identifying and following hand movements. This module (106) utilizes advanced algorithms to detect the presence and movement of the user's hands within the captured visual data. Once detected, the module (106) continuously tracks the movement of the hands, providing a dynamic input for the system (100).
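The disclosure does not commit to a particular detector; a common off-the-shelf choice that fits this module's role is MediaPipe Hands, sketched below as one assumed implementation:

```python
import cv2
import mediapipe as mp

# Assumption: MediaPipe Hands as detector/tracker; the disclosure names no specific model.
hands = mp.solutions.hands.Hands(
    static_image_mode=False,        # video mode: detect once, then track frame-to-frame
    max_num_hands=1,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)

def track(frame_bgr):
    """Return the first tracked hand's landmark list, or None if no hand is found."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input
    result = hands.process(rgb)
    if result.multi_hand_landmarks:
        return result.multi_hand_landmarks[0].landmark  # 21 normalized (x, y, z) points
    return None
```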
In an additional embodiment, a landmark extraction module (108) is employed for determining key points of hand gestures. This module (108) analyzes the tracked hand movements to identify specific gestures based on predefined landmarks or key points. The identification of these landmarks is crucial for accurately interpreting the user's intended gestures.
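As one hedged example of key-point interpretation, a finger can be treated as extended when its tip lies above its PIP joint in image coordinates; the indices follow the widely used 21-point convention shown in FIG. 3 (an assumption about the exact numbering):

```python
# Hypothetical landmark indices under the common 21-point convention.
INDEX_TIP, INDEX_PIP = 8, 6
MIDDLE_TIP, MIDDLE_PIP = 12, 10

def fingers_extended(lm):
    """Crude key-point test: a tip above its PIP joint counts as an extended
    finger (image y grows downward, so 'above' means a smaller y value)."""
    return {
        "index": lm[INDEX_TIP].y < lm[INDEX_PIP].y,
        "middle": lm[MIDDLE_TIP].y < lm[MIDDLE_PIP].y,
    }
```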
In another embodiment, the system (100) incorporates a calibration module (110) for aligning the system (100) with the user’s specific environment. This module (110) calibrates the system (100) based on the unique characteristics of the user's environment, such as lighting conditions and spatial constraints. Calibration ensures that the system's (100) performance is optimized for each individual user, enhancing the accuracy and reliability of gesture recognition.
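A minimal sketch of one calibration concern, mapping the camera's active region to the screen with a user-tunable margin (the margin value is a hypothetical default a per-user calibration step might adjust):

```python
def calibrate_to_screen(nx, ny, screen_w, screen_h, margin=0.15):
    """Map normalized camera coordinates (nx, ny in [0, 1]) to screen pixels.

    The margin shrinks the active camera region so the cursor can reach the
    screen edges without the hand leaving the frame.
    """
    ax = min(max((nx - margin) / (1.0 - 2.0 * margin), 0.0), 1.0)
    ay = min(max((ny - margin) / (1.0 - 2.0 * margin), 0.0), 1.0)
    return ax * screen_w, ay * screen_h
```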
In a further embodiment, a cursor control module (112) is provided for translating the tracked hand movements into cursor movements on the computing device. This module (112) converts the identified hand movements and gestures into corresponding cursor movements, enabling the user to control the cursor on the computing device screen through natural hand gestures.
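One possible translation into cursor motion, sketched with pyautogui as an assumed OS-level cursor injector and exponential smoothing to damp landmark jitter:

```python
import pyautogui  # assumption: any OS cursor-injection mechanism would serve

screen_w, screen_h = pyautogui.size()
cx, cy = pyautogui.position()   # running smoothed cursor position
SMOOTHING = 0.25                # hypothetical gain: lower = steadier, higher = snappier

def move_cursor(target_x, target_y):
    """Exponential smoothing damps landmark jitter before moving the cursor."""
    global cx, cy
    cx += (target_x - cx) * SMOOTHING
    cy += (target_y - cy) * SMOOTHING
    pyautogui.moveTo(cx, cy)
```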
In an additional embodiment, the system (100) includes a drawing interaction module (114) for converting the cursor movements into drawing commands within a user interface of the computing device. This module (114) enables users to interact with drawing applications or any interface element that accepts drawing inputs, using hand gestures to control drawing actions directly. The integration of this module (114) expands the utility of the system (100), making it suitable for a wide range of interactive applications.
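A drawing layer of this kind can be sketched as an overlay canvas that joins successive cursor points while a draw gesture is held (an illustrative design, not the claimed implementation):

```python
import cv2
import numpy as np

canvas = np.zeros((720, 1280, 3), dtype=np.uint8)  # drawing layer sized to the frame
prev_point = None

def draw_stroke(point, pen_down, color=(0, 0, 255), thickness=4):
    """Connect successive cursor points into a stroke while the draw gesture is held."""
    global prev_point
    if pen_down and prev_point is not None:
        cv2.line(canvas, prev_point, point, color, thickness)
    prev_point = point if pen_down else None
```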
In an embodiment, the image capture module (102) of the system (100) includes a webcam configured to capture high-resolution images. This capability is crucial for accurately detecting hand movements. High-resolution images provide the necessary detail for the system (100) to discern subtle nuances in hand gestures, ensuring that even small or complex movements are captured with precision. By utilizing a high-quality webcam, the system (100) is equipped to perform detailed analysis of visual data, leading to more accurate and responsive gesture recognition. This enhancement is pivotal for applications requiring fine control or where gesture subtlety plays a key role in user interaction.
In another embodiment, the preprocessing module (104) of the system (100) employs adaptive algorithms. These algorithms are designed to distinguish hand gestures from varied backgrounds and lighting conditions effectively. By adapting to changes in the environment, the preprocessing module (104) ensures consistent performance regardless of external factors. This adaptability is achieved through the use of sophisticated image processing techniques that dynamically adjust parameters based on the input data. Such algorithms enable the system (100) to maintain high accuracy in gesture recognition even in challenging or changing conditions.
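One concrete adaptive technique that fits this description, offered as an assumption rather than the disclosed algorithm, is local contrast equalization of the brightness channel so that a subsequent color mask behaves consistently under uneven lighting:

```python
import cv2

def normalize_lighting(frame_bgr):
    """Equalize local contrast on the V (brightness) channel before masking.
    CLAHE here is an illustrative adaptive step, not the patent's stated method."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    v = clahe.apply(v)
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)
```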
In a further embodiment, the hand detection and tracking module (106) utilizes machine learning models. These models are trained to improve the accuracy of gesture recognition over time. As the system (100) is exposed to more data, the machine learning models adapt, enhancing their ability to correctly identify and track hand movements. This learning capability allows the system (100) to become more intuitive with use, providing a personalized interaction experience that continuously evolves based on user behavior.
In an additional embodiment, the landmark extraction module (108) is capable of identifying and distinguishing multiple hand gestures simultaneously. This capability allows the system (100) to interpret complex interactions involving multiple gestures or hands. The module (108) employs advanced pattern recognition algorithms to analyze the hand movements, extracting key points that define each gesture. By recognizing multiple gestures concurrently, the system (100) enables a richer set of commands and interactions, facilitating more dynamic and multifaceted user engagement.
In another embodiment, the calibration module (110) is designed to personalize the system (100) to the user’s hand size, shape, and specific range of motion. This personalization ensures that the system (100) can accurately interpret gestures for each individual user. The calibration process involves capturing data on the user's hands and their movement patterns, which is then used to adjust the system's (100) parameters. This customization enhances the accuracy of gesture recognition, providing a tailored interaction experience that is responsive to the unique characteristics of the user.
In a further embodiment, the cursor control module (112) is adapted to support multi-dimensional movements, including depth, for three-dimensional interaction. This capability allows users to interact with their computing device in a more immersive and intuitive manner, moving beyond the limitations of traditional two-dimensional cursor control. By tracking and translating hand movements in three dimensions, the system (100) offers enhanced control and interaction possibilities, suitable for a wide range of applications, from gaming to professional design tools.
In an additional embodiment, the drawing interaction module (114) is further configured to support varying levels of pressure and angle based on the detected hand movements, emulating natural drawing tools. This functionality enriches the drawing experience, allowing users to express creativity with the same nuance and control as traditional art tools. The system (100) interprets variations in gesture intensity and angle to modulate the drawing output, enabling a range of artistic effects from delicate shading to bold strokes.
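A hedged heuristic for such pressure emulation might derive stroke width from detected depth and stroke speed; both the depth convention and the constants below are assumptions, not disclosed values:

```python
def stroke_thickness(z, speed_px, base=6):
    """Map detected depth and stroke speed to line width (illustrative heuristic).

    Assumptions: z is a MediaPipe-style relative depth (more negative = closer
    to the camera, read here as more 'pressure'); speed_px is cursor speed in px/s.
    """
    depth_gain = max(0.5, min(2.0, 1.0 - 4.0 * z))   # a closer hand presses harder
    speed_gain = max(0.4, 1.0 - speed_px / 800.0)    # fast flicks thin the line
    return max(1, int(base * depth_gain * speed_gain))
```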
In another embodiment, the drawing interaction module (114) is compatible with various graphic design and digital art applications. This compatibility provides users with a wide range of creative functionalities, seamlessly integrating hand gesture input as a natural extension of the user's creative process. By supporting a diverse array of applications, the system (100) opens up new avenues for digital artistry, allowing users to explore different styles and techniques through intuitive, gesture-based interactions.
FIG. 2 illustrates a method (200) for interacting with a computing device using natural hand gestures, in accordance with the embodiments of the present disclosure. At step (202), capture images through a webcam, which serves as the initial step for recognizing hand gestures. At step (204), preprocess the captured images to detect and mask colors, enhancing feature isolation. At step (206), detect and track hand movements from the preprocessed images, enabling gesture identification. At step (208), extract landmarks corresponding to key points of the hand gestures for detailed analysis. At step (210), calibrate the system for the user’s specific environment, optimizing performance. At step (212), control a cursor on the computing device based on the tracked hand movements for navigation. At step (214), enable drawing interactions on a user interface of the computing device through the controlled cursor movements, facilitating creative tasks.
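Tying the preceding sketches together, steps (202) through (214) might compose as follows; this reuses the illustrative helpers defined above (mask_skin, track, fingers_extended, calibrate_to_screen, move_cursor, draw_stroke, screen_w, screen_h) and inherits all of their library assumptions:

```python
import cv2

cap = cv2.VideoCapture(0)                                  # step (202): capture
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.flip(frame, 1)
        masked = mask_skin(frame)                          # step (204): preprocess
        lm = track(frame)                                  # step (206): detect and track
        if lm is not None:                                 # (detection here runs on the
            state = fingers_extended(lm)                   #  full frame; step (208))
            x, y = calibrate_to_screen(lm[8].x, lm[8].y,   # step (210): calibrate
                                       screen_w, screen_h)
            move_cursor(x, y)                              # step (212): cursor control
            fx = int(lm[8].x * frame.shape[1])             # fingertip in frame space
            fy = int(lm[8].y * frame.shape[0])
            draw_stroke((fx, fy),                          # step (214): drawing
                        pen_down=state["index"] and not state["middle"])
        cv2.imshow("view", frame)                          # preview window
        if cv2.waitKey(1) & 0xFF == 27:                    # Esc exits the loop
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```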
FIG. 3 illustrates a framework for landmarks of the hand, in accordance with the embodiments of the present disclosure. FIG. 3 depicts a hand with 21 labeled points, ranging from the wrist to the fingertips. Each of the four fingers has four points denoting the metacarpophalangeal joint (MCP), the proximal interphalangeal joint (PIP), the distal interphalangeal joint (DIP), and the fingertip. The thumb, due to its unique structure, is instead labeled with a carpometacarpal joint (CMC), an MCP, a single interphalangeal joint (IP), and the fingertip. The labels correspond to each point's anatomical terminology, and lines between points may indicate the skeletal connections or the vectors used to calculate joint angles and finger positioning. Such frameworks are essential in developing hand-tracking software, which has applications in virtual reality, augmented reality, robotics, sign language recognition, and human-computer interaction. The precise identification of these landmarks allows for the creation of a digital skeleton of the hand, which can mimic real-world movements and provide a natural interface for various technologies.
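For reference, the 21 indices under the common convention that FIG. 3 appears to follow (an assumption about its exact numbering) can be tabulated as:

```python
# Hypothetical index-to-name mapping under the widely used 21-point convention.
HAND_LANDMARKS = {
    0: "WRIST",
    1: "THUMB_CMC",   2: "THUMB_MCP",   3: "THUMB_IP",    4: "THUMB_TIP",
    5: "INDEX_MCP",   6: "INDEX_PIP",   7: "INDEX_DIP",   8: "INDEX_TIP",
    9: "MIDDLE_MCP", 10: "MIDDLE_PIP", 11: "MIDDLE_DIP", 12: "MIDDLE_TIP",
    13: "RING_MCP",  14: "RING_PIP",   15: "RING_DIP",   16: "RING_TIP",
    17: "PINKY_MCP", 18: "PINKY_PIP",  19: "PINKY_DIP",  20: "PINKY_TIP",
}
```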

Example embodiments herein have been described above with reference to block diagrams and flowchart illustrations of methods and apparatuses. It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including hardware, software, firmware, and a combination thereof. For example, in one embodiment, each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations can be implemented by computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
Throughout the present disclosure, the term ‘processing means’ or ‘microprocessor’ or ‘processor’ or ‘processors’ includes, but is not limited to, a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
The term “non-transitory storage device” or “storage” or “memory,” as used herein relates to a random access memory, read only memory and variants thereof, in which a computer can store data or software for any duration.
Operations in accordance with a variety of aspects of the disclosure described above need not be performed in the precise order described. Rather, various steps can be handled in reverse order, simultaneously, or not at all.
While several implementations have been described and illustrated herein, a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein may be utilized, and each of such variations and/or modifications is deemed to be within the scope of the implementations described herein. More generally, all parameters, dimensions, materials, and configurations described herein are meant to be exemplary, and the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific implementations described herein. It is, therefore, to be understood that the foregoing implementations are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, implementations may be practiced otherwise than as specifically described and claimed. Implementations of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.

Claims

I/We claim:

1. A system (100) for enabling natural and immersive interaction with a computing device, comprising: an image capture module (102) configured to receive input from a webcam; a preprocessing module (104) for color detection and masking of the received input; a hand detection and tracking module (106) for identifying and following hand movements; a landmark extraction module (108) for determining key points of hand gestures; a calibration module (110) for aligning the system with the user’s specific environment; a cursor control module (112) for translating the tracked hand movements into cursor movements on the computing device; and a drawing interaction module (114) for converting the cursor movements into drawing commands within a user interface of the computing device.
2. The system (100) of claim 1, wherein the image capture module (102) includes a webcam configured to capture high-resolution images to accurately detect hand movements.
3. The system (100) of claim 1, wherein the preprocessing module (104) employs adaptive algorithms to distinguish hand gestures from varied backgrounds and lighting conditions.
4. The system (100) of claim 1, wherein the hand detection and tracking module (106) utilizes machine learning models to improve the accuracy of gesture recognition over time.
5. The system (100) of claim 1, wherein the landmark extraction module (108) is capable of identifying and distinguishing multiple hand gestures simultaneously.
6. The system (100) of claim 1, wherein the calibration module (110) is designed to personalize the system to the user’s hand size, shape, and specific range of motion.
7. The system (100) of claim 1, wherein the cursor control module (112) is adapted to support multi-dimensional movements, including depth, for three-dimensional interaction.
8. The system (100) of claim 1, wherein the drawing interaction module (114) is further configured to support varying levels of pressure and angle based on the detected hand movements, emulating natural drawing tools.
9. The system (100) of claim 1, wherein the drawing interaction module (114) is compatible with various graphic design and digital art applications to provide a wide range of creative functionalities.
10. A method (200) for interacting with a computing device using natural hand gestures, comprising: capturing images through a webcam; preprocessing the captured images to detect and mask colors; detecting and tracking hand movements from the preprocessed images; extracting landmarks corresponding to key points of the hand gestures; calibrating the system for the user’s environment; controlling a cursor on the computing device based on the tracked hand movements; and enabling drawing interactions on a user interface of the computing device through the controlled cursor movements.

SYSTEM FOR ENABLING NATURAL AND IMMERSIVE INTERACTION WITH COMPUTING DEVICE

Disclosed is a system for enabling natural and immersive interaction with a computing device, comprising: an image capture module configured to receive input from a webcam; a preprocessing module for color detection and masking of the received input; a hand detection and tracking module for identifying and following hand movements; a landmark extraction module for determining key points of hand gestures; a calibration module for aligning the system with the user’s specific environment; a cursor control module for translating the tracked hand movements into cursor movements on the computing device; and a drawing interaction module for converting the cursor movements into drawing commands within a user interface of the computing device.

Fig. 1

Drawings

FIG. 1

FIG. 2

FIG. 3


Documents

Application Documents

# Name Date
1 202421033150-OTHERS [26-04-2024(online)].pdf 2024-04-26
2 202421033150-FORM FOR SMALL ENTITY(FORM-28) [26-04-2024(online)].pdf 2024-04-26
3 202421033150-FORM 1 [26-04-2024(online)].pdf 2024-04-26
4 202421033150-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [26-04-2024(online)].pdf 2024-04-26
5 202421033150-EDUCATIONAL INSTITUTION(S) [26-04-2024(online)].pdf 2024-04-26
6 202421033150-DRAWINGS [26-04-2024(online)].pdf 2024-04-26
7 202421033150-DECLARATION OF INVENTORSHIP (FORM 5) [26-04-2024(online)].pdf 2024-04-26
8 202421033150-COMPLETE SPECIFICATION [26-04-2024(online)].pdf 2024-04-26
9 202421033150-FORM-9 [07-05-2024(online)].pdf 2024-05-07
10 202421033150-FORM 18 [08-05-2024(online)].pdf 2024-05-08
11 202421033150-FORM-26 [13-05-2024(online)].pdf 2024-05-13
12 202421033150-FORM 3 [13-06-2024(online)].pdf 2024-06-13
13 202421033150-RELEVANT DOCUMENTS [09-10-2024(online)].pdf 2024-10-09
14 202421033150-POA [09-10-2024(online)].pdf 2024-10-09
15 202421033150-FORM 13 [09-10-2024(online)].pdf 2024-10-09
16 202421033150-FER.pdf 2025-09-18
17 202421033150-FORM-8 [21-11-2025(online)].pdf 2025-11-21
18 202421033150-FORM-26 [21-11-2025(online)].pdf 2025-11-21
19 202421033150-FER_SER_REPLY [21-11-2025(online)].pdf 2025-11-21
20 202421033150-DRAWING [21-11-2025(online)].pdf 2025-11-21
21 202421033150-CORRESPONDENCE [21-11-2025(online)].pdf 2025-11-21
22 202421033150-COMPLETE SPECIFICATION [21-11-2025(online)].pdf 2025-11-21
23 202421033150-CLAIMS [21-11-2025(online)].pdf 2025-11-21
24 202421033150-ABSTRACT [21-11-2025(online)].pdf 2025-11-21

Search Strategy

1 202421033150_SearchStrategyNew_E_3150E_06-03-2025.pdf