Abstract: A clinical image analysis and monitoring device, comprising a platform 101 supporting primary and secondary X-Ray boards 102, 103 for receiving medical images, an artificial intelligence-based imaging unit 106 with a laser displacement sensor for analysing abnormalities and user height, a pair of extendable rods 107 with motorized ball and socket joints 108 for adjusting board 102, 103 height and viewing angle, a two-axis motorized gimbal with an infrared eye-tracking sensor for board 102, 103 orientation, a plurality of LEDs 110 on a motorized slider 111 for dynamic illumination, a laser projection unit 112 on a guiding rail 113 to outline abnormalities, a multi-sectioned chamber 114 storing reference images retrieved via motorized gripping means 115, motorized clippers 116 with a bar car hood assembly 117 to extend reference images, a holographic projection unit 118 for 3D visuals, and a voice-enabled speaker unit 121 and microphone 122 for audio feedback.
Description:
FIELD OF THE INVENTION
[0001] The present invention relates to a clinical image analysis and monitoring device capable of assisting in the examination, assessment, and interpretation of medical images in healthcare environments. The device also enables improved visualization, diagnostic support, and user interaction to facilitate efficient clinical decision-making.
BACKGROUND OF THE INVENTION
[0002] In clinical practice, the accurate analysis of medical images such as X-rays, CT scans, and MRI reports plays a vital role in diagnosing and managing a wide range of medical conditions. Radiologists and healthcare providers depend on clear visualization, precise measurements, and effective interpretation of these images to make informed decisions regarding patient care. However, current methods often lack intelligent support systems that assist in identifying anomalies or highlighting regions of interest. Challenges such as improper image alignment, poor lighting conditions, limited adaptability to different user requirements, and the absence of integrated diagnostic aids result in slower evaluations and increased chances of diagnostic oversight. As clinical environments grow more demanding, there is an urgent need for systems that not only support but enhance the diagnostic process through automation, intelligent analysis, and user-centric interactivity.
[0003] Traditionally, medical images are reviewed on static display panels, conventional light boxes, or basic digital viewers that offer limited functionality beyond simple visualization. These setups do not account for variations in user posture, height, or preference, nor do they provide dynamic image manipulation or guided diagnostic assistance. In many cases, the review process remains largely manual, relying on the clinician’s experience, spatial judgment, and physical annotations. This not only places a cognitive burden on practitioners but also limits the reproducibility and consistency of diagnoses across different users and facilities. Moreover, the absence of integrated reference comparison tools and real-time feedback mechanisms restricts the ability to verify findings efficiently or explain them clearly to patients or colleagues. These shortcomings underscore the need for an advanced system that automates key tasks, enhances image interactivity, and provides intelligent diagnostic guidance in a seamless and accessible manner.
[0004] US11610306B2 discloses a medical image analysis method which includes: reading an original medical image; performing image classification and object detection on the original medical image to generate a first classification result and a plurality of object detection results by a plurality of complementary artificial intelligence (AI) models; performing object feature integration and transformation on a first detection result and a second detection result among the object detection results to generate a transformation result by a features integration and transformation module; and performing machine learning on the first classification result and the transformation result to generate an image interpretation result by a machine learning module and display the image interpretation result.
[0005] US20240087725A1 discloses systems and methods for automatically marking locations within a radiograph of one or more dental pathologies, anatomies, anomalies or other conditions determined by automated image analysis of the radiograph by a number of different machine learning models. Image annotation data may be generated based at least in part on obtained results associated with output of the multiple machine learning models, where the image annotation data indicates at least one location in the radiograph and an associated dental pathology, restoration, anatomy or anomaly detected at the at least one location by at least one of the machine learning models. A number of different pathologies may be identified and their locations marked within a single radiograph image.
[0006] Conventionally, many devices are disclosed that aim to support the viewing and interpretation of medical images; however, most of these systems focus primarily on static image presentation with limited interactivity or diagnostic intelligence. Such devices often lack the ability to adjust dynamically to individual user ergonomics, provide real-time anomaly detection, or support guided image comparisons using stored references.
[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a device that not only facilitates accurate and efficient interpretation of medical images but also offers adaptive visualization, intelligent diagnostic assistance, and user-specific interaction capabilities. In addition, the developed device also needs to be capable of dynamically securing and adjusting medical images, analyzing them, and enhancing visibility of abnormalities.
OBJECTS OF THE INVENTION
[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.
[0009] An object of the present invention is to develop a device that facilitates accurate assessment and enhanced visualization of medical images for improved diagnostic outcomes.
[0010] Another object of the present invention is to develop a device that adapts to individual user ergonomics and preferences by dynamically adjusting image position, angle, and visibility based on real-time interaction.
[0011] Another object of the present invention is to enable automated detection and highlighting of abnormalities within medical images, thereby assisting medical professionals in early and accurate diagnosis.
[0012] Yet another object of the present invention is to improve clinical workflow efficiency by reducing manual effort, minimizing interpretation time, and allowing interactive feedback through audio, visual, and spatial interfaces.
[0013] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.
SUMMARY OF THE INVENTION
[0014] The present invention relates to a clinical image analysis and monitoring device that is capable of analysing, interpreting, and displaying medical images in an enhanced and user-responsive manner, facilitating accurate identification of abnormalities, assisting medical professionals in diagnosis, and enabling user interaction for real-time visualization and control.
[0015] According to an embodiment of the present invention, a clinical image analysis and monitoring device comprises a platform supporting a primary and a secondary X-Ray board configured to receive medical images, a sensing module including a proximity sensor and a LiDAR sensor installed on the platform for detecting user presence and scanning image dimensions, a microcontroller linked with the sensing module for activating a plurality of motorized toggle clamps with soft gripper pads positioned along the boards to secure the medical images, an artificial intelligence-based imaging unit coupled with a laser displacement sensor for detecting user height and analysing the images for abnormal patterns, a pair of extendable rods positioned between the platform and the boards through motorized ball and socket joints for adjusting board height and viewing angle, a two-axis motorized gimbal assembly attached to the boards and integrated with an infrared eye-tracking sensor for adjusting board orientation based on user gaze, and a plurality of LEDs mounted on a two-axis motorized slider at the rear of each board for illuminating specific regions of the image based on user focus or diagnostic highlights.
[0016] According to another embodiment of the present invention, the device further comprises a laser projection unit installed on each board through a motorized guiding rail for dynamically projecting outlines of abnormalities as analyzed by the imaging unit, a multi-sectioned chamber positioned above the boards having dedicated sections for storing various reference images, a motorized gripping means for retrieving reference images from the chamber and placing them on the vacant board, a pair of motorized clippers mounted laterally on the board integrated with a bar car hood assembly for gripping and extending reference medical images for comparison, a holographic projection unit mounted on the platform for projecting three-dimensional visuals of analyzed medical images with diagnostic highlights and treatment recommendations, a touch interactive display screen on an extendable pole for displaying anatomical models and AI-based suggestions, a speaker and a microphone integrated with a voice recognition module for receiving voice commands and providing audio feedback, a light sensor for glare detection enabling the microcontroller to regulate board orientation, and a display panel enhanced with voice narration and real-time diagnostics based on the interpreted image content.
[0017] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates an isometric view of a clinical image analysis and monitoring device.
DETAILED DESCRIPTION OF THE INVENTION
[0019] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.
[0020] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.
[0021] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0022] The present invention relates to a clinical image analysis and monitoring device that is capable of analyzing medical images, dynamically adjusting display orientation based on user interaction, and enhancing diagnostic accuracy through real-time visual aids, thereby improving clinical workflows, enabling personalized image interpretation, and supporting efficient medical decision-making.
[0023] Referring to Figure 1, an isometric view of a clinical image analysis and monitoring device is illustrated, comprising a platform 101 supporting a primary X-Ray board 102 and a secondary X-Ray board 103, a sensing module 104 installed on the platform 101, a plurality of motorized toggle clamps 105 with soft gripper pads arranged on upper and lower portions of each of the boards 102, 103, an artificial intelligence-based imaging unit 106 integrated on the platform 101, a pair of extendable rods 107 installed in between the platform 101 and each of the boards 102, 103 via motorized ball and socket joints 108, and a two-axis motorized gimbal assembly 109 coupled to the X-ray boards 102, 103.
[0024] Figure 1 further illustrates a plurality of LEDs (Light Emitting Diodes) 110 mounted on a two-axis motorized slider 111 on the rear side of each of the X-Ray boards 102, 103, a laser projection unit 112 installed on each of the boards 102, 103 via a motorized guiding rail 113, a multi-sectioned chamber 114 located above the platform 101, a gripping means 115 arranged with each section of the chamber 114, a pair of motorized clippers 116 mounted on lateral sides of the X-Ray boards 102, 103 and integrated with a bar car hood assembly 117, a holographic projection unit 118 mounted on the platform 101, a touch interactive display screen 119 mounted on an extendable pole 120, and a speaker unit 121 and a microphone 122 installed on the platform 101.
[0025] The device disclosed herein includes a platform 101 supporting a primary X-Ray board 102 and a secondary X-Ray board 103, where at least one of the X-Ray boards 102, 103 is configured to receive medical images for analysis. The platform 101 is internally structured using a lightweight aluminium alloy frame for durability and corrosion resistance, with polymer-composite panels to reduce weight and vibration. Internal compartments of the platform 101 secure the microcontroller, actuators, and wiring, while vibration-dampening mounts stabilize mounted components. The surface of the platform 101 is coated with anti-static, medical-grade epoxy resin to ensure hygiene.
[0026] To initiate the functionality of the device, a user manually presses a push button installed on the platform 101. The push button serves as the primary means for turning the device on and off and is typically made from polycarbonate. When the push button is pressed to switch on the device, it allows current to flow, sending a signal to the device's microcontroller and instructing it to activate the device. The microcontroller then powers up the components, enabling them to function.
[0027] On activation, the microcontroller further activates a sensing module 104, including a proximity sensor and a LiDAR (Light Detection and Ranging) sensor installed on the platform 101, for scanning images to determine their dimensions. The proximity sensor is internally configured to emit a focused infrared beam toward the surface of the medical image placed on the X-Ray board 102, 103. When the beam is reflected back, a photodiode within the sensor detects the change in reflection intensity, which varies with the image's presence and distance. This data is transmitted to the microcontroller, which processes it to confirm image placement. Meanwhile, the LiDAR sensor emits rapid laser pulses toward the surface of the medical image. These pulses reflect back to an internal photodetector, and the sensor calculates the time-of-flight (TOF), i.e., the time taken for each pulse to return. This data is processed to generate a precise 3D (three-dimensional) map of the image's surface and dimensions.
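As a minimal illustration of the time-of-flight calculation described above, the following Python sketch converts pulse return times into distances; the sensor readings and the crude edge-difference estimate are hypothetical examples, not values taken from the device.

```python
# Minimal sketch of the time-of-flight (TOF) distance calculation used by a
# LiDAR sensor; the return times below are hypothetical example values.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_to_distance(return_time_s: float) -> float:
    """Convert a round-trip pulse time into a one-way distance in metres."""
    return SPEED_OF_LIGHT * return_time_s / 2.0

# Example: pulses reflected from two points on the image surface.
left_edge_m = tof_to_distance(3.34e-9)   # ~0.50 m
right_edge_m = tof_to_distance(3.67e-9)  # ~0.55 m

# A crude depth-offset estimate between the two points; a real scan would
# accumulate many such samples into a full 3D point cloud of the image.
print(f"estimated depth offset between points: {abs(right_edge_m - left_edge_m):.3f} m")
```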
[0028] The microcontroller uses this information to determine the image's exact size and orientation, activate a plurality of motorized toggle clamps 105 with soft gripper pads arranged on upper and lower portions of each of the boards 102, 103 for securing the medical images on the boards 102, 103, and adjust the boards 102, 103 for optimal viewing and analysis. The motorized toggle clamps 105 consist of a compact electric motor linked to a toggle linkage housed within each clamp unit. When activated by the microcontroller, the motor drives the linkage into a locking position, pressing the soft gripper pads against the medical image to hold it firmly in place.
[0029] An artificial intelligence-based imaging unit 106 coupled with a laser displacement sensor is installed on the platform 101 for determining the height of the user present in close proximity to the platform 101. The imaging unit 106 includes a high-resolution camera module integrated with a dedicated artificial intelligence (AI) processor. Internally, the imaging unit 106 employs convolutional neural networks (CNNs) trained on large datasets of annotated medical images to detect abnormalities such as fractures, tissue damage, or structural deformities. The imaging unit 106 captures real-time images from the X-Ray boards 102, 103, preprocesses them through noise reduction and contrast enhancement, and then analyzes the data using multi-layered machine learning models to generate diagnostic insights and treatment suggestions, which are displayed or projected accordingly.
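By way of illustration only, the sketch below shows the general shape of such a preprocessing-plus-CNN pipeline in PyTorch; the tiny network, its untrained weights, and the min-max contrast stretch are placeholder assumptions, not the trained model used by the imaging unit 106.

```python
# Illustrative sketch (not the trained model): a tiny CNN classifying an
# X-ray patch as "normal" vs "abnormal" after simple contrast enhancement.
import torch
import torch.nn as nn

class TinyXRayCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)  # two outputs: [normal, abnormal]

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def enhance_contrast(img: torch.Tensor) -> torch.Tensor:
    """Simple min-max contrast stretch standing in for the preprocessing step."""
    return (img - img.min()) / (img.max() - img.min() + 1e-6)

model = TinyXRayCNN().eval()                        # weights are untrained here
patch = enhance_contrast(torch.rand(1, 1, 64, 64))  # placeholder image patch
with torch.no_grad():
    probs = torch.softmax(model(patch), dim=1)
print(f"p(abnormal) = {probs[0, 1]:.3f}")
```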
[0030] Furthermore, the imaging unit 106 employs multiple machine learning protocols to analyze medical images, identify abnormalities such as fractures or tissue irregularities, and generate a primary diagnostic report with treatment recommendations. To access the diagnostic reports, a user interface is installed in a computing unit wirelessly linked to the microcontroller, which enables users, doctors, and concerned individuals to access the reports and share medical images, and allows patients to view a simplified diagnostic report and results and to track progress.
[0031] Coupled with this imaging unit 106, the laser displacement sensor operates by emitting a focused laser beam toward the user standing near the device. The time it takes for the laser to reflect back is measured by an internal photodetector, allowing calculation of the user's exact height and position. This data is fed to the AI imaging unit 106, which forwards it to the microcontroller; the microcontroller then actuates a pair of extendable rods 107 installed in between the platform 101 and each of the boards 102, 103 via motorized ball and socket joints 108 for adjusting the height and viewing angle of the X-Ray boards 102, 103.
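A minimal sketch of how such a height reading could drive the rod adjustment is given below, assuming a downward-facing sensor at a known mounting height and a rough eye-level ratio; the mounting height, ratio, and function names are illustrative assumptions rather than disclosed parameters.

```python
# Hedged sketch: estimating the user's eye level from a laser displacement
# reading and computing how far the extendable rods should move the board.

SPEED_OF_LIGHT = 299_792_458.0   # m/s

def displacement_from_tof(return_time_s: float) -> float:
    """One-way distance from a round-trip laser pulse time."""
    return SPEED_OF_LIGHT * return_time_s / 2.0

SENSOR_MOUNT_HEIGHT_M = 2.10     # assumed height of a downward-facing sensor
EYE_LEVEL_RATIO = 0.93           # rough eye-height-to-stature ratio (assumption)

def board_offset(return_time_s: float, current_board_height_m: float) -> float:
    distance_to_head = displacement_from_tof(return_time_s)
    user_height = SENSOR_MOUNT_HEIGHT_M - distance_to_head
    target_height = user_height * EYE_LEVEL_RATIO
    return target_height - current_board_height_m   # positive => extend rods

# Example reading: ~0.40 m to the top of the user's head, board at 1.45 m.
print(f"extend rods by {board_offset(2.67e-9, 1.45):+.2f} m")
```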
[0032] The extendable rods 107 are operated by an internal pneumatic unit that uses compressed air to control extension and retraction. Each rod 107 is composed of telescopic segments guided within a sealed cylinder. When the imaging unit 106 and laser displacement sensor determine the required height or angle adjustment, the microcontroller signals a solenoid valve to release pressurized air into the cylinder. This air pressure pushes the internal piston, extending the rod 107 smoothly. For retraction, air is directed to the opposite chamber, drawing the rod 107 back. Integrated pressure sensors and flow regulators ensure precise, stable motion and maintain accurate positioning without jerks.
[0033] The motorized ball and socket joints 108 comprise a spherical ball mounted within a socket housing, allowing multi-directional rotation. Internally, each joint integrates two or more miniature servo motors aligned along orthogonal axes (typically X and Y) connected to the base of the ball. When the microcontroller receives input from the imaging unit 106, it sends precise signals to these motors, which rotate the ball incrementally to adjust the angle and orientation of the attached X-Ray board 102, 103. The laser displacement sensor and a capacitive position sensor work in conjunction with the imaging unit 106 to automatically adjust the extendable rods 107 and socket joints 108 to optimize the viewing angle and eliminate glare detected by an integrated light sensor.
[0034] The light sensor operates using a photodiode that changes its electrical resistance based on the intensity of ambient light. When light strikes the sensor, it generates a current proportional to the brightness level. This signal is sent to the microcontroller, which interprets the data to determine whether there is glare or insufficient lighting on the medical image. Based on this, the microcontroller regulates operation of the pair of gimbal assemblies 109 to ensure correct orientation with respect to the user.
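A simplified sketch of this glare-handling decision is shown below; the ADC resolution, thresholds, tilt step, and brightness step are assumed example values, not parameters disclosed for the device.

```python
# Minimal sketch of the glare check: a photodiode reading (via an assumed
# 10-bit ADC) is compared against thresholds and mapped to a corrective action.

GLARE_THRESHOLD = 850   # ADC counts above which glare is assumed (example value)
DIM_THRESHOLD = 200     # ADC counts below which lighting is insufficient
TILT_STEP_DEG = 2.0     # small corrective tilt of the board away from the light

def glare_correction(adc_reading: int) -> dict:
    """Return a corrective action for the gimbal and LEDs based on light level."""
    if adc_reading > GLARE_THRESHOLD:
        return {"tilt_board_deg": -TILT_STEP_DEG, "led_brightness_delta": 0}
    if adc_reading < DIM_THRESHOLD:
        return {"tilt_board_deg": 0.0, "led_brightness_delta": +10}
    return {"tilt_board_deg": 0.0, "led_brightness_delta": 0}

print(glare_correction(900))  # -> tilt the board slightly away from the light
```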
[0035] A pair of two-axis motorized gimbal assemblies 109 is coupled to the X-Ray boards 102, 103 via the extendable rods 107 and integrated with an infrared eye-tracking sensor configured to adjust each board's tilt and angle in response to the user's eye movements. Each two-axis motorized gimbal assembly 109 consists of dual servo motors aligned on horizontal and vertical axes, enabling precise rotational control of the X-Ray boards 102, 103. When activated by the microcontroller, the motors adjust the board's tilt and angle independently along each axis. Internal position sensors provide real-time feedback to the microcontroller, which uses it to maintain stability and accuracy during movement. This allows dynamic repositioning of the boards 102, 103 based on user interaction or diagnostic requirements, ensuring optimal ergonomic alignment.
[0036] Integrated into the gimbal assembly 109, the infrared eye-tracking sensor works by emitting invisible infrared light toward the user's eyes. A high-speed camera detects the reflected light from the cornea and calculates the user’s gaze direction using vector mapping protocols. This data is processed to determine where the user is looking on the image. The gimbal then adjusts the board’s angle to center the visual content in the user’s line of sight, enhancing comfort and diagnostic precision.
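The following sketch illustrates one way a normalised gaze point could be mapped to gimbal pan and tilt commands; the gains, mechanical limits, and coordinate convention are assumptions for demonstration only.

```python
# Illustrative sketch: mapping a normalised gaze point (0..1 across the board)
# to pan/tilt commands that re-centre the viewed region of the image.

MAX_PAN_DEG, MAX_TILT_DEG = 20.0, 15.0   # assumed mechanical range of the gimbal

def gaze_to_gimbal(gaze_x: float, gaze_y: float):
    """gaze_x/gaze_y in [0, 1]; (0.5, 0.5) means the user looks at the centre."""
    pan = (gaze_x - 0.5) * 2.0 * MAX_PAN_DEG
    tilt = (gaze_y - 0.5) * 2.0 * MAX_TILT_DEG
    # Clamp to the mechanical range before commanding the servos.
    pan = max(-MAX_PAN_DEG, min(MAX_PAN_DEG, pan))
    tilt = max(-MAX_TILT_DEG, min(MAX_TILT_DEG, tilt))
    return pan, tilt

print(gaze_to_gimbal(0.8, 0.4))   # user looking right of centre, slightly high
```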
[0037] Further, a plurality of LEDs (Light Emitting Diodes) 110 is mounted on a two-axis motorized slider 111 on the rear side of each of the X-Ray boards 102, 103 and configured to dynamically illuminate specific regions of the medical image. The LEDs 110 function by allowing electrical current to pass through a semiconductor material, typically gallium-based, which emits photons in response. These LEDs 110 are mounted behind the X-Ray boards 102, 103 and are configured to dynamically illuminate specific regions of the medical image. Controlled by the microcontroller, they adjust brightness and focus, ensuring optimal visibility. An integrated LDR (Light Dependent Resistor) senses ambient light levels, enabling the microcontroller to dynamically adjust LED intensity for consistent image clarity under varying lighting conditions.
[0038] These LEDs 110 are mounted on the two-axis motorized slider 111, which allows precise spatial positioning over the X-Ray board 102, 103 surface. The slider 111 consists of two perpendicular linear rails, each driven by stepper motors using lead screws. The LEDs 110 are fixed on a carriage that moves horizontally and vertically along these axes. Based on real-time input from the AI imaging unit 106 or the infrared eye-tracking sensor, the microcontroller activates the motors to reposition the LEDs 110 over regions of interest. Integrated position encoders ensure smooth and accurate movement, thereby linking targeted illumination directly to user interaction or diagnostic cues.
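As an illustration of the lead-screw positioning arithmetic, the sketch below converts a target carriage position in millimetres into stepper-motor steps; the screw pitch and steps-per-revolution figures are assumed typical values, not specified by the source.

```python
# Sketch of converting a target LED position (in mm over the board) into
# stepper-motor steps for each axis of the two-axis slider.

STEPS_PER_REV = 200        # typical 1.8-degree stepper motor (assumption)
LEAD_SCREW_PITCH_MM = 8.0  # carriage advance per screw revolution (assumption)

def mm_to_steps(target_mm: float, current_mm: float) -> int:
    """Number of signed steps needed to move the carriage to target_mm."""
    revolutions = (target_mm - current_mm) / LEAD_SCREW_PITCH_MM
    return round(revolutions * STEPS_PER_REV)

# Move the LED carriage from (40, 25) mm to a region of interest at (112, 60) mm.
dx_steps = mm_to_steps(112.0, 40.0)   # horizontal axis
dy_steps = mm_to_steps(60.0, 25.0)    # vertical axis
print(dx_steps, dy_steps)             # -> 1800, 875
```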
[0039] Simultaneously, once the medical image is analyzed by the imaging unit 106, the microcontroller enables a laser projection unit 112 installed on each of the board 102, 103 via a motorized guiding rail 113 to outline abnormalities onto the image, thus enhancing visibility. The laser projection unit 112 operates by emitting a concentrated laser beam through a precision lens assembly, controlled by an internal galvanometer which directs the beam to trace outlines and highlight abnormalities on the medical image. The imaging unit 106 analyzes the image and transmits coordinate data to the laser projection unit 112, which then projects these outlines with high accuracy. The intensity and focus of the laser are modulated for clarity, ensuring that projected markings are sharp, safe, and visually distinct.
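A hedged sketch of how abnormality-outline pixel coordinates might be converted into galvanometer deflection angles follows; the projection distance, pixel scale, image centre, and contour points are illustrative assumptions.

```python
# Hedged sketch: turning outline pixel coordinates (from the imaging unit)
# into small-angle galvanometer mirror deflections for the laser projector.
import math

PROJECTION_DISTANCE_MM = 300.0   # laser unit to board surface (assumption)
MM_PER_PIXEL = 0.25              # physical scale of the analysed image (assumption)

def pixel_to_galvo_angles(px: float, py: float, cx: float, cy: float):
    """Map an image pixel to (x, y) mirror angles in degrees."""
    dx_mm = (px - cx) * MM_PER_PIXEL
    dy_mm = (py - cy) * MM_PER_PIXEL
    return (math.degrees(math.atan2(dx_mm, PROJECTION_DISTANCE_MM)),
            math.degrees(math.atan2(dy_mm, PROJECTION_DISTANCE_MM)))

outline = [(520, 310), (540, 315), (555, 330)]   # example contour pixels
for px, py in outline:
    print(pixel_to_galvo_angles(px, py, cx=512, cy=512))
```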
[0040] Meanwhile, the motorized guiding rail 113 enables linear motion across the surface of the X-Ray board 102, 103. This guiding rail 113 includes a motor-driven carriage that moves the laser unit horizontally along a precision track, typically using lead screws. Stepper motors receive directional inputs from the microcontroller, which references the imaging unit's coordinates to align the projection unit 112 with specific target regions. Limit switches and position encoders are integrated to ensure accurate travel and repeatability. This combination allows the laser projection to dynamically move and mark various zones on the image with real-time precision.
[0041] A multi-sectioned chamber 114 is installed above the platform 101 to carry the medical images. Each section is dedicated to the storage of different reference images, which are retrieved by a gripping means 115 arranged with each section of the chamber 114 to position the reference image onto the vacant X-Ray board 102, 103. Each section is equipped with individual sliding trays and embedded RFID tags for image identification. When a specific reference is required, the microcontroller accesses the storage data and identifies the appropriate section. An internal sensor confirms the presence of the image, and the microcontroller triggers retrieval through the gripping means 115.
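The section lookup could, in principle, resemble the small sketch below; the RFID tags and image labels are hypothetical placeholders, and the print statement merely stands in for driving the gripper rail.

```python
# Sketch of the reference-image lookup: section RFIDs and image labels are
# hypothetical; dispatching the gripping means is represented by a print.

CHAMBER_INDEX = {
    "RFID-0001": "normal chest X-ray (adult)",
    "RFID-0002": "normal wrist X-ray (adult)",
    "RFID-0003": "healed femur fracture reference",
}

def find_section(reference_label: str):
    """Return the RFID tag of the section holding the requested reference image."""
    for tag, label in CHAMBER_INDEX.items():
        if reference_label.lower() in label.lower():
            return tag
    return None

section = find_section("wrist")
if section:
    print(f"dispatch gripping means to section {section}")   # would drive the rail
else:
    print("reference image not stocked in chamber")
```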
[0042] Linked to the chamber 114, the gripping means 115 includes gripper units with soft silicone pads and miniature linear actuators attached to a motorized track rail, which are configured to retrieve the reference image from the chamber 114. When activated by the microcontroller, the rail moves the gripping means 115 laterally to the selected chamber 114 section. The actuators extend to gently grasp the reference image using controlled pressure, preventing damage. Once secured, the gripping means 115 retracts and transports the image to the vacant X-Ray board 102, 103 for comparison. Sensors ensure precise placement, allowing seamless alignment with the primary image for enhanced diagnostic clarity.
[0043] A pair of motorized clippers 116 is mounted on lateral sides of the X-Ray boards 102, 103 and integrated with a bar car hood assembly 117 for gripping and extending the reference medical image for effective display, thus helping the user understand the abnormalities. The motorized clippers 116 consist of compact electric motors connected to pivoting clip arms with padded gripping surfaces. When a command is issued by the microcontroller, the motors rotate to open or close the clip arms, allowing secure attachment or release of reference medical images. The clippers 116 are guided by torque sensors to apply uniform pressure, preventing image damage. Position sensors confirm clip alignment, and the motors allow synchronized movement for gripping large or wide-format images, ensuring stable display throughout diagnostic procedures or comparisons.
[0044] The bar car hood assembly 117 functions as an extendable support to hold and stretch the reference medical image for clear visibility. Internally, it includes a retractable rod operated by miniature stepper motors and guided along linear rails. Upon receiving the signal from the microcontroller, the bar hood extends outward, pulling the attached reference image taut without creases. Tension sensors and guided rollers ensure smooth extension, maintaining proper alignment with the X-Ray board 102, 103. This assembly enhances the display of reference visuals, enabling side-by-side analysis with the primary medical image.
[0045] A holographic projection unit 118 is mounted on the platform 101 for projecting 3D visuals of the medical image related to the abnormalities and recommended treatments. The projection unit 118 comprises a laser light source, spatial light modulators (SLMs), and beam-splitting optics to create interference patterns that form volumetric holograms in mid-air or on a transparent display surface. The microcontroller processes 3D image data from the imaging unit 106 and converts it into holographic patterns. These patterns are dynamically projected, allowing users to view and interact with realistic, depth-enhanced models of abnormalities and treatment plans from multiple angles.
[0046] Additionally, a touch interactive display screen 119 is mounted on an extendable pole 120 and is configured to display interactive spatial anatomical models and diagnostic suggestions. The touch interactive display screen 119 operates using capacitive touch technology integrated with a high-resolution LED panel. When the user touches the screen, the conductive properties of the human finger alter the local electrostatic field. This change is detected by a grid of capacitive sensors layered beneath the display surface and processed by a touch controller chip. The microcontroller interprets these inputs to navigate menus, access diagnostic models, or manipulate anatomical visuals. The extendable pole 120 is operated by a pneumatic unit in a manner similar to the extendable rods 107 for ergonomic positioning, and the display screen 119 is capable of rendering real-time 2D/3D graphics and interactive medical content. The display screen 119 is also configured to display content based on the interpretation of the medical image and is further enhanced with voice narration via a speaker unit 121.
[0047] The speaker unit 121 and a microphone 122 are integrated with a voice recognition module and configured to provide audio feedback and accept voice commands for device interaction. The speaker unit 121 operates using an electromagnetic coil attached to a diaphragm. When an electrical audio signal passes through the coil, it creates a varying magnetic field that interacts with a permanent magnet, causing the diaphragm to move and generate sound waves. In this device, the speaker unit 121 is used to deliver voice feedback, diagnostic summaries, or treatment instructions. Linked with the microphone 122 and the voice recognition module, it forms a two-way voice interaction means, enabling hands-free operation and improving accessibility for patients and medical personnel.
[0048] The microphone 122 functions using MEMS (Micro-Electro-Mechanical Systems) technology. Sound waves from the user's voice cause a diaphragm within the microphone 122 to vibrate, altering the capacitance between the diaphragm and a back plate. This change is converted into an electrical signal, which is amplified and processed by the voice recognition module. The processed data is then relayed to the microcontroller to interpret voice commands for controlling various device functions.
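A minimal sketch of mapping recognised speech to device actions is shown below; the command phrases and action identifiers are hypothetical and serve only to illustrate the dispatch step that follows voice recognition.

```python
# Minimal sketch of how recognised voice text could be mapped to device
# actions; the phrases and action names are illustrative assumptions.

COMMANDS = {
    "show reference": "retrieve_reference_image",
    "highlight abnormality": "enable_laser_projection",
    "raise the board": "extend_rods",
    "read the report": "start_voice_narration",
}

def dispatch(recognised_text: str) -> str:
    """Return the action name matching the recognised phrase, if any."""
    text = recognised_text.lower().strip()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action
    return "no_action"

print(dispatch("Please highlight abnormality on the left lung"))
# -> enable_laser_projection
```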
[0049] The present invention works best in the following manner. The platform 101, as disclosed in the invention, supports the primary and secondary X-Ray boards 102, 103 for image mounting. Upon activation, the proximity sensor detects the user's presence, activating the LiDAR sensor to scan the dimensions of the inserted medical image. This data is processed by the microcontroller, which subsequently actuates the motorized toggle clamps 105 with soft gripper pads to secure the image onto the X-Ray board 102, 103. Simultaneously, the AI-based imaging unit 106 captures and analyzes the image with the help of the laser displacement sensor, while determining the user's height to initiate automatic adjustment of the extendable rods 107 connected to each board 102, 103 through the motorized ball and socket joints 108. This ensures the boards 102, 103 align with the user's eye level. The integrated infrared eye-tracking sensor mounted with the two-axis motorized gimbal assembly 109 allows real-time board angle adjustments based on eye movement.
[0050] In continuation, the LEDs 110 arranged on the motorized slider 111 behind the boards 102, 103 dynamically illuminate specific regions of the image based on AI analysis and user interaction. Following this, the laser projection unit 112 mounted on the motorized guiding rail 113 outlines abnormalities directly onto the medical image. If a comparison is needed, the multi-sectioned chamber 114 located above the platform 101 releases the reference image using the gripping means 115, equipped with soft silicone pads and linear actuators, to place it onto the vacant board 102, 103. The motorized clippers 116 on the board's sides grip the reference image and extend it using the bar car hood assembly 117 for side-by-side analysis. Additionally, the holographic projection unit 118 generates 3D visuals of abnormalities and potential treatments for enhanced understanding. Diagnostic results are displayed on the touch interactive display screen 119 mounted on the extendable pole 120, while the integrated speaker unit 121 and microphone 122 allow voice-controlled interaction and deliver audio feedback. The integrated light sensor ensures optimal lighting by adjusting the display orientation or LED brightness to eliminate glare.
[0051] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.
Claims:
1) A clinical image analysis and monitoring device, comprising:
a) a platform 101 supporting a primary X-Ray board 102 and a secondary X-Ray board 103, wherein at least one of the X-Ray boards 102, 103 is configured to receive medical images intended for analyzing;
b) a sensing module 104 including a proximity sensor and a LiDAR (Light Detection and Ranging) sensor installed on the platform 101 for scanning the images to determine dimensions, wherein a microcontroller is linked with the sensing module 104 for processing the dimensions to selectively activate a plurality of motorized toggle clamps 105 with soft gripper pads, arranged on upper and lower portions of each of the board 102, 103, for securing the medical images;
c) an artificial intelligence-based imaging unit 106 coupled with a laser displacement sensor is installed on the platform 101 for determining height of a user, present in close proximity to the platform 101, wherein a pair of extendable rods 107 is installed in between the platform 101 and each of the boards 102, 103, via motorized ball and socket joints 108, for adjusting height and viewing angle of the X-ray boards 102, 103;
d) a pair of two-axis motorized gimbal assemblies 109 coupled to the X-ray boards 102, 103 via the extendable rods 107, integrated with an infrared eye-tracking sensor, configured to adjust the board's tilt and angle in response to the user's eye movements, wherein a plurality of LEDs (Light Emitting Diodes) 110 is mounted on a two-axis motorized slider 111 on the rear side of each of the X-Ray boards 102, 103, configured to dynamically illuminate specific regions of the medical image;
e) a laser projection unit 112 installed on each of the board 102, 103 via a motorized guiding rail 113, wherein the medical image is analyzed by the imaging unit 106 for enabling the laser projection unit 112 to project outlines of abnormalities onto the image, thus enhancing visibility, wherein a multi-sectioned chamber 114 is located above the platform 101, each section dedicated towards storage of different reference images that are retrieved by a gripping means 115 arranged with each section of the chamber 114, to position the reference image onto vacant X-Ray board 102, 103; and
f) a pair of motorized clippers 116 mounted on lateral sides of the X-Ray board 102, 103, and integrated with a bar car hood assembly 117 for gripping and extending the reference medical image, for effective display, thus facilitating the user to understand abnormalities, wherein a holographic projection unit 118 is mounted on the platform 101 for projecting three-dimensional visuals of the medical image, relating to the abnormalities, and recommended treatments.
2) The device as claimed in claim 1, wherein the AI-based imaging unit 106 employs multiple machine learning protocols to analyse medical images, identify abnormalities such as fractures or tissue irregularities, and generate a primary diagnostic report with treatment recommendations.
3) The device as claimed in claim 1, wherein a user interface is installed in a computing unit wirelessly linked to the microcontroller, enabling users, doctors and concerned individuals to access diagnostic reports, share medical images, and consult remotely, and allowing patients to view simplified diagnostic report, results and track progress.
4) The device as claimed in claim 1, wherein the laser displacement sensor and capacitive position sensor work in conjunction with the imaging unit 106 to automatically adjust the extendable rods 107 and motorized ball and socket joints 108 to optimize the viewing angle and eliminate glare detected by an integrated light sensor.
5) The device as claimed in claim 1, wherein the gripping means 115 includes gripper units with soft silicone pads and miniature linear actuators, attached on a motorized track rail, configured to retrieve the reference image from the chamber 114.
6) The device as claimed in claim 1, wherein a touch interactive display screen 119 is mounted on an extendable pole 120 and configured to display interactive spatial anatomical models and diagnostic suggestions.
7) The device as claimed in claim 1, wherein a speaker unit 121 and a microphone 122 are integrated with a voice recognition module and configured to provide audio feedback and accept voice commands for device interaction.
8) The device as claimed in claim 7, wherein the display screen 119 is configured to display content based on the interpretation of the medical image and is further enhanced with a voice narration via the speaker unit 121.
9) The device as claimed in claim 1, wherein a light sensor is installed on the board 102, 103 for detecting glare on the medical image, based on which the microcontroller regulates operation of the gimbal assembly 109 to ensure correct orientation with respect to the user.
10) The device as claimed in claim 1, wherein the LEDs 110 are coupled with an LDR (Light Dependent Resistor) for maintaining ideal illumination.