Abstract: An aim-based sports training assistive device, comprising a training platform 101 placed horizontally on a ground surface, a plurality of pressure sensors for monitoring a user’s stance, weight distribution, and balance, a pair of straps 102 to be worn around the user’s ankles, a panel 103 mounted vertically via a rod 104 with a rack-and-pinion assembly 105, an AI (artificial intelligence) camera 106 integrated with a near-infrared (NIR) sensor and a plurality of infrared (IR) sensors to track facial features, body posture, and gaze movement of the user while training, a multi-modal simulation unit 107 comprising fans 107a mounted on L-shaped adjustable links 107b via ball-and-socket joints 107c for wind simulation, LED (Light Emitting Diode) display panels 107d for dynamic lighting simulation, and a speaker module 108 to emit ambient noises, and a 3D (three-dimensional) holographic projection unit 109 to project virtual assistants.
Description:
FIELD OF THE INVENTION
[0001] The present invention relates to an aim-based sports training assistive device that is capable of monitoring user stress, anxiety, and confidence levels to provide accurate mental state tracking, and of projecting virtual assistants and dynamic training targets tailored to the user’s skill and stress levels, enabling personalized and effective training.
BACKGROUND OF THE INVENTION
[0002] The need for sports training assistance has grown significantly with the increasing demands of competitive performance and the pursuit of excellence in athletics. Athletes today require more than just physical practice; they need structured guidance, personalized training, and continuous performance evaluation. A sports training assistant helps bridge this gap by providing expert support, tracking progress, correcting techniques, and ensuring that training aligns with the athlete’s specific goals. Whether it’s for beginners learning the basics or professionals fine-tuning their skills, such assistance enhances efficiency, reduces the risk of injury, and boosts motivation.
[0003] Traditional methods of sports training rely on a coach's personal observation, experience, and manual tracking of an athlete’s performance. While these methods have produced many successful athletes, they come with several limitations. One major drawback is the lack of real-time data and objective analysis, which can lead to delayed feedback and missed opportunities for immediate correction. Traditional approaches also struggle to provide personalized training for each athlete, especially in team settings, leading to a one-size-fits-all program that does not address individual strengths or weaknesses. Additionally, the manual nature of record-keeping and performance tracking is time-consuming and prone to human error.
[0004] EP2623166A1 relates to a training device usable in both indoor and outdoor training sessions, linked to a communication network capable of connecting multiple remote users during shared training sessions. Said device comprises, among other elements, a training means which presents a variable resistance against a physical force applied by a user; a monitoring and communication unit of data regarding the physical condition of the user and the mechanical conditions of the training means; a simulation and control unit; and a plurality of sensors for acquiring information on the physical condition of the user, on the mechanical conditions of the training means and on the user's environment.
[0005] WO2014011060A1 relates to a sports training apparatus for enhancing a user's playing technique in ball sports such as cricket. The apparatus comprises a frame with two front spaced legs and a rear spaced leg; a flexible net attached to the frame and extending across the space formed between each of the front spaced legs and the rear spaced leg to form two converging net panels, wherein the net is reinforced along each edge of the net and horizontally along a vertical plane of convergence of the two net panels from a top edge to a bottom edge of the net to facilitate rebound of the ball when driven into the net; and a ball suspended on a line, the line attached to the frame at a swivel configured to facilitate rotation of the line about the line attachment point.
[0006] Conventionally, many devices are available in the market that assist the user in aim-based sports training. However, the devices mentioned in the prior art lack the ability to project virtual assistants and dynamic training targets based on the user’s skill level and stress condition, and are therefore incapable of providing personalized and effective training experiences.
[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a device that is capable of monitoring user stress, anxiety, and confidence levels during training to prevent burnout and optimize learning efficiency. In addition, the developed device also needs to be capable of verifying the presence of essential safety gear to ensure user safety.
OBJECTS OF THE INVENTION
[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.
[0009] An object of the present invention is to develop a device that is capable of projecting virtual assistants and dynamic training targets based on the user’s skill level to provide personalized and effective training experiences.
[0010] Another object of the present invention is to develop a device that is capable of monitoring user stress, anxiety, and confidence levels during training for accurate and continuous mental state tracking to prevent burnout and optimize learning efficiency.
[0011] Yet another object of the present invention is to develop a device that is capable of verifying the presence of essential safety gear to ensure user safety by preventing accidents and injuries caused by the absence of protective gear.
[0012] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.
SUMMARY OF THE INVENTION
[0013] The present invention relates to an aim-based sports training assistive device that is capable of continuously monitoring the user's stress, anxiety, and confidence levels during training to optimize learning and prevent burnout, while also verifying the presence of essential safety gear to ensure user safety and prevent accidents.
[0014] According to an embodiment of the present invention, an aim-based sports training assistive device comprises a training platform placed horizontally on a ground surface, a plurality of pressure sensors embedded within the platform for monitoring a user’s stance, weight distribution, and balance, a pair of straps provided with the platform and worn around the user’s ankles, each embedded with IMU (Inertial Measurement Unit) sensors and gyroscopes to detect foot alignment and body tilt, a panel mounted vertically in front of the platform on a rod with a rack-and-pinion assembly, and an AI (artificial intelligence) camera integrated with a near-infrared (NIR) sensor and a plurality of infrared (IR) sensors provided on an upper surface of the panel to track facial features, body posture, and gaze movement of the user while training. The camera employs a facial recognition module to authenticate the user upon stepping on the platform and retrieve a user profile including historical performance data, posture maps, gaze deviation records, and environmental response metrics.
[0015] According to another embodiment of the present invention, the device further comprises a multi-modal simulation unit provided with the panel and comprising fans mounted on L-shaped adjustable links via ball-and-socket joints for wind simulation, LED (Light Emitting Diode) display panels for dynamic lighting simulation, and a speaker module to emit ambient noises; a 3D (three-dimensional) holographic projection unit mounted above the panel to project virtual assistants; a microcontroller configured within the device for actuating and controlling all integrated components and sensors; a user-interface inbuilt in a computing unit accessed by the user to input personal and medical information, the user-interface providing session-wise performance comparisons highlighting key performance indicators including gaze stability, stress levels, accuracy, and posture consistency; an ultrasonic sensor provided on the panel to detect the height of the user and adjust the height of the panel, so that the camera aligns precisely with the user's eye level for accurate gaze tracking and posture analysis; and a battery associated with the device for supplying power to the electrically and electronically operated components associated with the device.
[0016] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates an isometric view of an aim-based sports training assistive device.
DETAILED DESCRIPTION OF THE INVENTION
[0018] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed; on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.
[0019] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.
[0020] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0021] The present invention relates to an aim-based sports training assistive device that projects virtual assistants and adaptive training targets based on the user’s skill level and stress condition for personalized training, while also verifying the presence of essential safety gear to ensure user safety and prevent accidents.
[0022] Referring to Figure 1, the aim-based sports training assistive device is illustrated, comprising a training platform 101, a pair of straps 102 provided with the platform 101, a panel 103 mounted vertically in front of the platform 101 on a rod 104 with a rack-and-pinion assembly 105, an AI (artificial intelligence) camera 106 provided on an upper surface of the panel 103, a multi-modal simulation unit 107 mounted on the platform 101 comprising fans 107a mounted on L-shaped adjustable links 107b via ball-and-socket joints 107c and LED (Light Emitting Diode) display panels 107d, a speaker module 108 mounted on the panel 103, and a 3D (three-dimensional) holographic projection unit 109 mounted above the panel 103.
[0023] The device disclosed herein includes a training platform 101 placed horizontally on a ground surface. The platform 101 is constructed from durable, non-slip materials such as high-strength polymer composites or reinforced rubber to ensure stability and user safety during use. The platform 101 is designed to withstand varied environmental conditions and repetitive physical impact, providing a reliable and secure base for diverse training activities. Its surface texture is engineered to offer optimal grip, minimizing the risk of slips or falls.
[0024] A user is required to activate the device manually by pressing a button installed on the platform 101 and linked with an inbuilt microcontroller associated with the device. The button is a switch internally connected with the device via multiple circuits; when the user presses it, the circuits close and begin conducting electricity, which activates the device, and vice versa.
[0025] Upon activation of the device, the microcontroller activates an inbuilt communication module for establishing a wireless connection between the microcontroller and a computing unit that is inbuilt with a user-interface and accessed by the user, enabling the user to input personal and medical information, along with specifying a difficulty level for practicing aim-based sports. The user interacts with the interface through a touch screen, keyboard, or other input methods available on the computing unit. The computing unit mentioned herein includes, but is not limited to, a smartphone, laptop, or tablet.
[0026] The communication module mentioned herein includes, but is not limited to, a Wi-Fi (Wireless Fidelity) module, a Bluetooth module, and a GSM (Global System for Mobile Communication) module. The communication module used in the device is preferably the Wi-Fi module. The Wi-Fi module enables wireless communication by transmitting and receiving data over radio frequencies using IEEE 802.11 protocols. It connects to a network via an access point, converting digital data into radio signals. The module processes TCP/IP protocols for data exchange, interfaces with microcontrollers through UART/SPI, and ensures encrypted communication using WPA/WPA2 security standards for secure and efficient wireless connectivity.
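As a minimal, illustrative sketch of this exchange, assume the device's Wi-Fi module exposes a plain TCP socket to the companion computing unit; the address, port, and JSON message format below are hypothetical and not part of the specification:

```python
import json
import socket

HOST, PORT = "192.168.4.1", 5000  # hypothetical address/port of the device's Wi-Fi module

def send_profile(difficulty: str, medical_notes: str) -> dict:
    """Send the user's setup data to the device and return its acknowledgement."""
    payload = json.dumps({
        "difficulty": difficulty,        # e.g. "beginner", "intermediate", "advanced"
        "medical_notes": medical_notes,  # free-text personal/medical information
    }).encode()
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall(payload)
        return json.loads(sock.recv(4096).decode())

# Example usage: ack = send_profile("beginner", "mild knee injury, avoid jump drills")
```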
[0027] Additionally, the microcontroller signals to a plurality of pressure sensors embedded with the platform 101 for monitoring a user’s stance, weight distribution, and balance. The pressure sensors consist of a sensing element that converts mechanical force or pressure into an electrical signal. Most commonly, these sensors use materials or structures that change their electrical properties, such as resistance, capacitance, or voltage, when pressure is applied. For example, piezoresistive sensors rely on materials whose electrical resistance changes when compressed, while capacitive sensors detect changes in capacitance caused by the deformation of flexible plates under pressure. When a user stands on the platform 101, the pressure from their body weight deforms the sensor’s sensing element. This deformation alters the electrical characteristics, producing a measurable electrical output proportional to the applied force. The sensor’s circuitry then converts this raw signal into a digital or analog value that the microcontroller reads and analyzes to monitor weight distribution and balance.
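A minimal sketch of this readout, assuming four piezoresistive sensors at the platform corners sampled through an ADC with an approximately linear force response; the sensor placement, full-scale force, and ADC resolution are assumptions for illustration:

```python
def weight_distribution(adc_counts, full_scale_n=1500.0, adc_max=4095):
    """Convert raw ADC counts from four corner pressure sensors into per-corner
    force (newtons) and left/right, front/back balance ratios."""
    forces = [c / adc_max * full_scale_n for c in adc_counts]  # linear approximation
    total = sum(forces) or 1e-9  # avoid division by zero on an empty platform
    front_left, front_right, back_left, back_right = forces
    return {
        "total_n": total,
        "left_ratio": (front_left + back_left) / total,    # 0.5 = balanced left/right
        "front_ratio": (front_left + front_right) / total,  # 0.5 = balanced front/back
    }

# Example usage: weight_distribution([2100, 2050, 1980, 2015])
```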
[0028] The microcontroller configured within the device for actuating and controlling all integrated components and sensors, the microcontroller receives and processes real-time sensor data to generate adaptive training routines based on comparison with historical performance.
[0029] An ultrasonic sensor is provided on a panel 103 to detect the height of the user, in accordance with which the microcontroller regulates actuation of a rack-and-pinion assembly 105 to adjust the height of the panel 103, so that the camera 106 aligns precisely with the user's eye level for accurate gaze tracking and posture analysis. The ultrasonic sensor on the panel 103 detects the user’s height by emitting high-frequency sound waves toward the user and measuring the time it takes for the echoes to bounce back from the user’s body. When the sensor sends out these sound pulses, they travel through the air until they hit an object such as the user’s head and reflect back to the sensor. By calculating the time delay between sending the pulse and receiving its echo, the sensor determines the distance between itself and the user’s head. This distance measurement is then used by the microcontroller to precisely adjust the height of the panel 103 via the rack-and-pinion assembly 105. By raising or lowering the panel 103, the camera 106 is aligned exactly with the user’s eye level, enabling accurate gaze tracking and posture analysis.
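The time-of-flight arithmetic described above reduces to a few lines. The sketch below assumes one plausible geometry, a downward-facing sensor at a known mount height above the platform and a fixed speed of sound; both are illustrative simplifications, not details fixed by the specification:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def user_height_m(echo_delay_s: float, sensor_mount_height_m: float) -> float:
    """Estimate user height from the echo round-trip time of a downward-facing
    ultrasonic sensor mounted at a known height above the platform."""
    distance_to_head = SPEED_OF_SOUND * echo_delay_s / 2.0  # halve the round trip
    return sensor_mount_height_m - distance_to_head

# Example: a 2.33 ms round trip from a sensor mounted 2.10 m up
# gives user_height_m(0.00233, 2.10) ~= 1.70 m.
```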
[0030] The pair of straps 102 are provided with the platform 101 and worn around the user’s ankles. The straps 102 are made of high-quality, flexible materials such as reinforced nylon or neoprene to ensure both durability and comfort. These straps 102 feature adjustable fasteners, like Velcro or buckles, to securely fit users of varying sizes while allowing ease of movement. Designed to withstand repeated use and resist wear and tear, the straps 102 provide stable support during training sessions without causing discomfort or restricting circulation.
[0031] Further, the microcontroller signals the IMU (Inertial Measurement Unit) sensors and gyroscopes embedded in the straps 102 to detect foot alignment and body tilt. The Inertial Measurement Unit (IMU) sensors comprise a combination of accelerometers, gyroscopes, and sometimes magnetometers, all embedded within a compact module. The accelerometers measure linear acceleration forces acting on the foot in three axes (X, Y, and Z), while the gyroscopes detect angular velocity, i.e., how fast the foot is rotating around these axes. By combining data from both sensors, the IMU provides precise information about the foot’s position, motion, and orientation in space.
[0032] When the user moves or shifts their stance, the gyroscopes detect changes in rotational angles, such as tilting forward, backward, or sideways. By processing this sensor data, the microcontroller calculates the exact foot alignment.
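A conventional way to realize the fusion the two preceding paragraphs describe is a complementary filter: the gyroscope is fast but drifts, the accelerometer angle is noisy but drift-free, and blending the two gives a stable tilt estimate. The sketch below is one possible implementation, not a method mandated by the specification:

```python
import math

def accel_tilt_deg(ax, ay, az):
    """Pitch and roll (degrees) of the foot derived from the accelerometer's
    measurement of the gravity vector along the X, Y, and Z axes."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def complementary_filter(angle_deg, gyro_dps, accel_angle_deg, dt, alpha=0.98):
    """Fuse the integrated gyroscope rate with the accelerometer-derived angle;
    alpha weights the gyro path, (1 - alpha) corrects its drift."""
    return alpha * (angle_deg + gyro_dps * dt) + (1 - alpha) * accel_angle_deg

# Example: update a pitch estimate at 100 Hz (dt = 0.01 s)
# pitch = complementary_filter(pitch, gyro_pitch_rate_dps, accel_pitch, 0.01)
```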
[0033] The platform 101 is also installed with a rod 104 configured with the rack-and-pinion assembly 105. The rack-and-pinion assembly 105 consists of two main components: a rack and a motorized pinion, where the rack is attached to the panel 103 and is carved with a plurality of teeth. The pinion gear is meshed with the teeth of the rack and coupled with a motor; based on the detected height of the user, the motor receives a signal from the microcontroller to rotate, which results in translation of the rack along with the panel 103.
[0034] The pinion is a small circular gear that meshes with the rack’s teeth. When the pinion gear rotates, it moves the rack linearly, either up or down, along the rod 104. This conversion of rotational motion from the pinion into linear motion of the rack enables smooth, controlled vertical adjustment of the panel 103 position. By turning the pinion, either manually or via a motor controlled by the microcontroller, the panel 103 is raised or lowered to accommodate users of different heights or to position the display optimally for viewing.
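The rotation-to-translation relationship reduces to the pinion's pitch circumference. A worked sketch, with the tooth count and gear module as illustrative parameters rather than values fixed by the specification:

```python
import math

def panel_travel_mm(pinion_teeth: int, module_mm: float, motor_revs: float) -> float:
    """Linear travel of the rack (and thus the panel) for a given number of
    pinion revolutions: travel per revolution equals the pinion's pitch
    circumference, pi * module * tooth count."""
    pitch_circumference = math.pi * module_mm * pinion_teeth
    return pitch_circumference * motor_revs

# Example: a 20-tooth, module-1 pinion moves the rack ~62.8 mm per revolution,
# so panel_travel_mm(20, 1.0, 2.5) ~= 157.1 mm of vertical adjustment.
```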
[0035] An AI (artificial intelligence) camera 106 integrated with a near-infrared (NIR) sensor and a plurality of infrared (IR) sensors is provided on an upper surface of the panel 103 to track facial features, body posture, and gaze movement of the user while training. The artificial intelligence-based camera 106 captures images of the user’s body. The camera 106 comprises an image-capturing arrangement including a set of lenses that captures multiple images of the body, and the captured images are stored within the memory of the camera 106 in the form of optical data.
[0036] The camera 106 also comprises a processor that is integrated with artificial intelligence protocols, such that the processor processes the optical data and extracts the required data from the captured images. The extracted data is further converted into digital pulses and bits and transmitted to the microcontroller.
[0037] The NIR sensor includes an emitter that projects near-infrared light—light just beyond the visible spectrum—onto the user’s face and eyes, along with a detector that captures the reflected light. Because NIR light is invisible to the human eye and can penetrate slightly beneath the skin surface, it reveals fine details such as facial contours and eye reflections without causing discomfort. The plurality of IR sensors, arranged across the panel 103, emit infrared light in various directions and detect how this light reflects off the user. These IR sensors consist of photodiodes or phototransistors that convert the reflected infrared light into electrical signals.
[0038] Together, the NIR and IR sensors collect spatial and reflective data, which is processed to construct a map of the user’s face and body posture. This mapping enables the device to accurately identify facial landmarks (such as the eyes, nose, and mouth), detect subtle movements in expressions, and analyze the alignment and orientation of the user’s body.
[0039] For gaze tracking, the NIR sensor focuses on detecting reflections from the eyes, allowing the device to determine the precise direction of the user’s gaze. The electrical signals from these sensors are then transmitted to the microcontroller, which interprets the data to monitor the user’s posture, facial expressions, and eye movement continuously and in real time.
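Gaze estimation from an NIR emitter and camera is commonly performed with the pupil-centre/corneal-reflection (PCCR) technique. The sketch below assumes the camera's processor already localizes the pupil centre and the corneal glint in the image, and that a per-user calibration supplies the pixel-to-degree gain; all of these are assumptions beyond the specification:

```python
def gaze_offset(pupil_xy, glint_xy, gain_deg_per_px=0.35):
    """Approximate horizontal/vertical gaze angles (degrees) from the vector
    between the pupil centre and the corneal reflection (glint) of the NIR
    emitter; the gain comes from a prior per-user calibration step."""
    dx = pupil_xy[0] - glint_xy[0]
    dy = pupil_xy[1] - glint_xy[1]
    return dx * gain_deg_per_px, dy * gain_deg_per_px

# Example: gaze_offset((412, 305), (402, 300)) -> (3.5, 1.75) degrees off-axis
```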
[0040] The camera 106 employs a facial recognition module to authenticate the user upon stepping on the platform 101 and retrieve a user profile including historical performance data, posture maps, gaze deviation records, and environmental response metrics. The facial recognition module in the camera 106 works by capturing clear images or video of the user’s face as they step onto the platform 101. It first identifies the face within the image by detecting key facial features such as the eyes, nose, and mouth, along with their positions relative to each other. Using these unique features, the module creates a detailed map or template of the user’s face. This template is then compared to a stored database of user profiles to find a match.
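One common way to implement such template matching is to compare face-feature vectors by cosine similarity. The sketch below assumes an upstream model already produces the embeddings, and the acceptance threshold is an illustrative placeholder:

```python
import numpy as np

def authenticate(live_embedding: np.ndarray, profiles: dict, threshold: float = 0.6):
    """Match a live face-feature vector against stored user templates by cosine
    similarity; return the best-matching user id, or None if no stored template
    clears the threshold. Embedding extraction itself is assumed upstream."""
    live = live_embedding / np.linalg.norm(live_embedding)
    best_id, best_score = None, threshold
    for user_id, template in profiles.items():
        score = float(np.dot(live, template / np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id  # caller then loads this user's profile and history
```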
[0041] Further, the camera 106 is configured to verify the presence of essential safety gear and display alerts if gear is missing. The camera 106 captures images or video of the user and analyzes them to detect key safety items, such as helmets, gloves, or protective straps 102, based on their shapes, colors, or distinctive features. If the camera 106 identifies that any required safety gear is missing, the microcontroller immediately triggers an alert, which is visual, audible, or both, to notify the user.
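The verification step itself reduces to a set comparison once an object detector has labeled what is visible; the required-gear list and label names below are hypothetical:

```python
REQUIRED_GEAR = {"helmet", "gloves", "ankle_straps"}  # illustrative gear list

def missing_gear(detected_labels: set[str]) -> set[str]:
    """Compare the detector's output classes against the required-gear list and
    return whatever is absent, so the microcontroller can raise an alert."""
    return REQUIRED_GEAR - detected_labels

# Example: missing_gear({"helmet", "gloves"}) -> {"ankle_straps"}, which would
# trigger a visual and/or audible alert before the session starts.
```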
[0042] Additionally, a multi-modal simulation unit 107 is provided with the platform 101 to realistically replicate real-world conditions for enhanced user immersion and situational practice. The multi-modal simulation unit 107 comprises fans 107a mounted on L-shaped adjustable links 107b via ball-and-socket joints 107c for wind simulation, LED (Light Emitting Diode) display panels 107d for dynamic lighting simulation, and a speaker module 108 to emit ambient noises, each controlled based on real-time user performance and adaptability.
[0043] The fans 107a in the multi-modal simulation unit 107 generate airflow to simulate wind, enhancing the realism of the training environment. When powered on, the fan blades rotate at high speeds, pushing air forward to create a controlled breeze or gust effect. The speed and intensity of the airflow are adjusted to mimic different wind conditions, ranging from gentle breezes to strong winds. These fans 107a are mounted on L-shaped adjustable links 107b, which provide both stability and flexibility. The L-shaped links 107b are connected to the unit via ball-and-socket joints 107c, which allow multi-directional movement and rotation of the fans 107a.
[0044] The joint consists of a spherical “ball” component that fits snugly inside a hollow, cup-shaped “socket.” This design permits the ball to rotate freely within the socket, enabling movement along multiple axes—such as up and down, side to side, and rotational twisting. The joint’s structure provides a wide range of motion while maintaining stability and support.
[0045] For dynamic lighting simulation, the LED (Light Emitting Diode) display panels 107d function by using an array of tiny semiconductor light sources—LEDs—that emit light when an electric current passes through them. These panels 107d are capable of producing a wide range of colors and brightness levels by adjusting the intensity of red, green, and blue (RGB) LEDs. The panels 107d change color, brightness, and pattern in real time, simulating various lighting conditions such as sunrise, sunset, ambient indoor light, or even rapid flashes to mimic movement or environmental changes. The LED panels respond to control signals from the microcontroller, which coordinates the timing and pattern of the light output.
[0046] Further, to emit ambient noises, the speaker consists of a driver (cone), voice coil, magnet, suspension, frame, and terminals. When an electrical signal is sent to the voice coil, it generates a magnetic field that interacts with the permanent magnet, causing the voice coil and the attached diaphragm (cone) to move. This movement displaces air to create sound waves, which are emitted as auditory signals. The speaker components work together to produce sound based on the frequency and amplitude of the electrical signal, enabling it to emit ambient noises. Additionally, the simulation unit 107 autonomously varies wind intensity, lighting conditions, and ambient noise levels to replicate varying environmental scenarios and progressively challenge the user.
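One plausible control scheme, sketched below, maps a single difficulty level to fan duty cycle, LED brightness, and speaker volume so the environment grows progressively more challenging; the specific mapping constants are illustrative, not prescribed by the specification:

```python
def simulation_setpoints(difficulty: float) -> dict:
    """Map a 0..1 difficulty level to actuator setpoints for the multi-modal
    simulation unit: stronger wind, dimmer light, and louder ambient noise
    as difficulty rises."""
    difficulty = max(0.0, min(1.0, difficulty))  # clamp to the valid range
    return {
        "fan_pwm_duty": int(30 + 70 * difficulty),            # % duty: breeze -> gusts
        "led_brightness": int(255 * (1 - 0.6 * difficulty)),  # dimmer scene is harder
        "noise_volume": int(20 + 60 * difficulty),            # % of max speaker volume
    }

# Example: simulation_setpoints(0.5)
# -> {'fan_pwm_duty': 65, 'led_brightness': 178, 'noise_volume': 50}
```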
[0047] A 3D (three-dimensional) holographic projection unit 109 is mounted above the panel 103 to project virtual assistants and dynamic training targets with varying distance and motion, adjusted based on the user’s skill and stress level. The holographic projection unit 109 creates a three-dimensional image by utilizing the principles of light diffraction and interference: a coherent light source is split into two beams that illuminate the recording medium. Where these beams intersect, they create an interference pattern that encodes the light's amplitude and phase information on a medium such as holographic film. To visualize the hologram, this recorded pattern is illuminated again with coherent light, recreating a light field that mimics the original object’s light field and allowing viewers to see a 3D image of virtual assistants and dynamic training targets.
[0048] Upon detection of high physiological arousal or mental fatigue, the microcontroller is configured to reduce training difficulty, introduce calming music, and provide guided breathing prompts. Additionally, the user-interface provides session-wise performance comparisons, highlighting key performance indicators including gaze stability, stress levels, accuracy, and posture consistency.
[0049] The straps 102 are further integrated with physiological sensors including heart rate variability (HRV) sensors, skin conductance sensors, and galvanic skin response (GSR) sensors to monitor user stress, anxiety, and confidence levels during training. The HRV sensors track the variation in time intervals between consecutive heartbeats, which reflects the balance between the sympathetic and parasympathetic nervous systems; lower HRV typically indicates higher stress or anxiety, while higher HRV is associated with relaxation and confidence. Skin conductance sensors and GSR sensors measure changes in the electrical conductance of the skin, which fluctuate based on sweat gland activity controlled by the sympathetic nervous system. When a user experiences stress or anxiety, sweat production increases, raising skin conductance and providing real-time data on emotional arousal.
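A plausible way to combine these signals, sketched below, uses RMSSD (a standard time-domain HRV measure, where lower values suggest higher stress) together with a normalized skin-conductance reading; the weighting and normalization constants are illustrative placeholders, not values from the specification:

```python
import statistics

def rmssd_ms(rr_intervals_ms):
    """Root mean square of successive RR-interval differences, a standard
    time-domain HRV measure computed from beat-to-beat intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def stress_index(rr_intervals_ms, gsr_microsiemens, gsr_baseline=2.0):
    """Blend normalized HRV and skin-conductance readings into a 0..1 score
    that the microcontroller could compare against a threshold to trigger
    difficulty reduction, calming music, or breathing prompts."""
    hrv_component = max(0.0, 1.0 - rmssd_ms(rr_intervals_ms) / 50.0)  # ~50 ms = relaxed
    gsr_mean = statistics.mean(gsr_microsiemens)
    gsr_component = min(1.0, max(0.0, (gsr_mean - gsr_baseline) / 8.0))
    return 0.6 * hrv_component + 0.4 * gsr_component
```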
[0050] Lastly, a battery is installed within the device and connected to the microcontroller; it supplies current to all the electrically powered components that need electric power to perform their functions and operations efficiently. The battery utilized here is preferably a dry battery made of lithium-ion material, which gives the device a long-lasting and efficient DC (Direct Current) supply that helps every component function properly. As the device is battery operated, it does not need any mains electrical supply for functioning. Hence, the presence of the battery makes the device portable, i.e., the user is able to place as well as move the device from one place to another as per requirements.
[0051] The present invention works best in the following manner. The platform 101 provides a stable and secure base for physical activities. The platform 101 integrates pressure sensors that convert mechanical forces into electrical signals to monitor user stance, weight distribution, and balance. Height detection is achieved through the ultrasonic sensor, which emits high-frequency sound waves and measures the time delay of echoes reflected from the user to adjust the panel 103 height via the rack-and-pinion assembly 105, aligning the camera 106 with the user’s eye level for accurate gaze tracking and posture analysis. The straps 102 worn around the ankles contain IMU sensors combining accelerometers and gyroscopes to detect foot alignment, motion, and orientation. The physiological sensors embedded in the straps 102, including heart rate variability (HRV) sensors, skin conductance sensors, and galvanic skin response (GSR) sensors, monitor stress, anxiety, and confidence by measuring heart rate intervals and skin electrical conductance related to sweat gland activity. The AI camera 106 with near-infrared (NIR) and infrared (IR) sensors tracks facial features, posture, and gaze, while the facial recognition module authenticates the user and retrieves historical data. The camera 106 also verifies the presence of essential safety gear and triggers alerts if it is missing. The multi-modal simulation unit 107 with adjustable fans 107a, LED panels 107d, and speakers replicates environmental conditions, and the 3D holographic projection unit 109 displays virtual assistants and dynamic training targets, adapting based on user skill and stress levels.
[0052] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.
Claims:
1) An aim-based sports training assistive device, comprising:
i) a training platform 101 placed horizontally on a ground surface;
ii) a plurality of pressure sensors embedded with the platform 101 for monitoring a user’s stance, weight distribution, and balance;
iii) a pair of straps 102 provided with the platform 101 and worn around the user’s ankles, each embedded with IMU (Inertial Measurement Unit) sensors and gyroscopes to detect foot alignment and body tilt;
iv) a panel 103 mounted vertically in front of the platform 101 on a rod 104 with a rack-and-pinion assembly 105;
v) an AI (artificial intelligence) camera 106 integrated with a near-infrared (NIR) sensor and a plurality of infrared (IR) sensors provided on an upper surface of the panel 103 to track facial features, body posture, and gaze movement of the user while training;
vi) a multi-modal simulation unit 107 provided with the platform 101 to realistically replicate real-world conditions for enhanced user immersion and situational practice;
vii) a 3D (three-dimensional) holographic projection unit 109 mounted above the panel 103 to project virtual assistants, and dynamic training targets with varying distance and motion, adjusted based on the user’s skill and stress level; and
viii) a microcontroller configured within the device for actuating and controlling all integrated components and sensors, wherein the microcontroller receives and processes real-time sensor data to generate adaptive training routines based on comparison with historical performance.
2) The device as claimed in claim 1, wherein a user-interface is inbuilt in a computing unit accessed by the user to input personal and medical information, along with specifying a difficulty level for practicing aim-based sports.
3) The device as claimed in claim 1, wherein the straps 102 are further integrated with physiological sensors including heart rate variability (HRV) sensors, skin conductance sensors, and galvanic skin response (GSR) sensors to monitor user stress, anxiety, and confidence levels during training.
4) The device as claimed in claim 1, wherein the camera 106 employs a facial recognition module to authenticate the user upon stepping on the platform 101 and retrieve a user profile including historical performance data, posture maps, gaze deviation records, and environmental response metrics.
5) The device as claimed in claim 1, wherein the multi-modal simulation unit 107 comprises fans 107a mounted on L-shaped adjustable links 107b via ball-and-socket joints 107c for wind simulation, LED (Light Emitting Diode) display panels 107d for dynamic lighting simulation, and a speaker module 108 to emit ambient noises, each controlled based on real-time user performance and adaptability.
6) The device as claimed in claim 1, wherein the camera 106 is configured to verify the presence of essential safety gear, and display alerts if gear is missing.
7) The device as claimed in claim 1, wherein the microcontroller is configured to reduce training difficulty, introduce calming music, and provide guided breathing prompts upon detection of high physiological arousal or mental fatigue.
8) The device as claimed in claim 1, wherein the user-interface provides session-wise performance comparisons, highlighting key performance indicators including gaze stability, stress levels, accuracy, and posture consistency.
9) The device as claimed in claim 1, wherein an ultrasonic sensor is provided on the panel 103 to detect the height of the user, in accordance with which the microcontroller regulates actuation of the rack-and-pinion assembly 105 to adjust the height of the panel 103, so that the camera 106 aligns precisely with the user's eye level for accurate gaze tracking and posture analysis.
10) The device as claimed in claim 1, wherein the simulation unit 107 autonomously varies wind intensity, lighting conditions, and ambient noise levels to replicate varying environmental scenarios and progressively challenge the user.