
Medical Imaging Analysis System For Robotic Surgery

Abstract: The medical imaging analysis system (110) is named the “Chitrasa® system” and may be integrated with any robotic surgical system (100). Chitrasa is designed to aid surgeons/doctors in making crucial decisions while performing surgeries. A graphical processor (112) coupled to a user interface device (116) is provided in the surgeon console (106). The graphical processor (112) is operably connected to a server (114), which is configured to store a database containing information about patients and their relevant medical scans. The server (114) may be remote or local to the graphical processor (112). The graphical processor (112) is configured to extract relevant data (128) from the database (124) based on the received input and parse the extracted relevant data (128). A 3D model of the parsed data is rendered using the user interface device (116). Further, segmentation and manipulation of an organ is performed. Then, the position and orientation of the robotic surgical instruments are mapped onto the manipulated segmented organ.


Patent Information

Application #: 202311050594
Filing Date: 27 July 2023
Publication Number: 30/2024
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

SUDHIR SRIVASTAVA INNOVATIONS PRIVATE LIMITED
3rd Floor, 404-405, iLabs Info Technology Centre, Phase III, Udyog Vihar, Gurugram, Haryana, India – 122016

Inventors

1. SRIVASTAVA, Sudhir Prem
3rd Floor, 404-405, iLabs Info Technology Centre, Phase III, Udyog Vihar, Gurugram, Haryana, India – 122016
2. SRIVASTAVA, Vishwajyoti Pascual
3rd Floor, 404-405, iLabs Info Technology Centre, Phase III, Udyog Vihar, Gurugram, Haryana, India – 122016
3. DWIVEDI, Aviral
3rd Floor, 404-405, iLabs Info Technology Centre, Phase III, Udyog Vihar, Gurugram, Haryana, India – 122016
4. KUMAR, Naveen
3rd Floor, 404-405, iLabs Info Technology Centre, Phase III, Udyog Vihar, Gurugram, Haryana, India – 122016

Specification

DESC:TECHNICAL FIELD
[0001] The present disclosure generally relates to a field of immersive technology applications in medical devices, and more particularly, the disclosure relates to a system for medical imaging analysis for diagnosis of disease and surgical path planning in real-time robotic surgery applications.
BACKGROUND
[0002] This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described below. This disclosure is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
[0003] Robotically assisted surgical systems have been adopted worldwide to replace conventional surgical procedures and to reduce the amount of extraneous tissue that may be damaged during surgical or diagnostic procedures, thereby reducing patient recovery time, patient discomfort, prolonged hospital stays, and particularly deleterious side effects. In robotically assisted surgeries, the surgeon typically operates a hand controller/ master controller/ surgeon input device at a surgeon console to seamlessly capture and transfer complex actions performed by the surgeon, giving the perception that the surgeon is directly articulating surgical tools/ surgical instruments to perform the surgery. The surgeon operating the surgeon console may be located at a distance from the surgical site or may be located within the operating theatre where the patient is being operated on.
[0004] The robotic assisted surgical systems may comprise multiple robotic arms aiding in conducting robotic assisted surgeries. The robotic assisted surgical system utilizes a sterile adapter/ sterile barrier to separate a non-sterile section of the multiple robotic arms from the mandatorily sterile robotic surgical tool/ surgical instrument attached to one end of each robotic arm. The sterile adapter/ sterile barrier may include a sterile plastic drape that envelops the multiple robotic arms, while the sterile adapter/ sterile barrier operably engages with the sterile robotic surgical tool/ surgical instrument in the sterile field.
[0005] In robotic assisted surgeries, the surgeons use medical imaging analysis in making crucial decisions while operating. One of the main challenges is that the existing medical imaging systems are designed for radiologists and the generated reports are used in surgeries by surgeons/doctors who may not have deep knowledge of radiology. It becomes a tedious task for the surgeons/doctors to plan the treatment of the patient. Another challenge is that the traditional approaches do not support diagnosis and surgical path planning in virtual reality. Further, another challenge is that the traditionally available DICOM processors do not provide real-time 3D data.
[0006] In light of the aforementioned challenges, there is a need for a medical imaging system that solves the above-mentioned problems, so that surgeons/doctors can analyze the medical results and plan a surgical path without having deep knowledge of radiology.
SUMMARY OF THE DISCLOSURE
[0007] Some or all of the above-mentioned problems related to medical imaging analysis in a multi-arm robotic surgical system are proposed to be addressed by certain embodiments of the present disclosure.
[0008] In an aspect, an embodiment of the present disclosure provides a medical imaging analysis system for a multi-arm robotic surgical system comprising one or more robotic arms each coupled to a robotic surgical instrument at its distal end, whereby the one or more robotic arms are arranged along an operating table, the system comprising: a user interface device configured to receive an input from an operator and display a perspective projection of a 3D model; and a graphical processor coupled to the user interface device and configured to: extract a relevant data based on the received input from a database stored on a server operably connected to the graphical processor, wherein the server is configured to store a database including at least one of a diagnostic scan and patient details for one or more patients; parse the extracted relevant data; render a 3D model of the parsed data using the user interface device; perform segmentation of an organ from the rendered 3D model; manipulate the segmented organ based on another input received from the operator and render the manipulated organ on the user interface device; receive the actual position and orientation of the robotic surgical instruments from a master controller; and map the received position and orientation of the robotic surgical instruments on the manipulated segmented organ; wherein the organ segmentation enables the surgeon to pick a particular organ from the rendered 3D model.
[0009] Optionally, the user interface device is a graphical user interface.
[00010] Optionally, the graphical processor can be located anywhere in the operating room or remotely.
[00011] Optionally, the relevant data may comprise any diagnostic scan out of the available scans related to a particular patient.
[00012] Optionally, the parsing comprises the steps of: acquiring data by collecting DICOM files from various sources like hospital PACS servers, CDs, pen drives, or local storage; extracting metadata from each DICOM file; organizing the DICOM files into slices of the same anatomical region based on the metadata; sorting the slices in a correct volumetric order to ensure proper representation of the anatomy; extracting relevant metadata for each slice from the DICOM files; validating the integrity of the metadata; and storing and managing the data.
[00013] Optionally, the metadata for each slice comprises the information of patient demographics, imaging modality, acquisition parameters, and image orientation.
[00014] Optionally, the storing of the data is done as per the established volume standards.
[00015] Optionally, the volume standards in medical imaging refer to the established specifications and guidelines for representing and storing volumetric data like CT, MRI etc.
[00016] Optionally, the volume standards comprise the steps of: selecting a suitable data format for storing volumetric data; representing the volumetric data; collecting metadata related to the volumetric data; storing the compressed volumetric data; defining protocols for transferring the volumetric data between different systems and software platforms; validating the volumetric data; checking interoperability among different medical imaging devices, software, and healthcare institutions; and updating the volume standards.
[00017] Optionally, the rendering of a 3D model can be done by ray marching based volumetric rendering.
[00018] Optionally, the ray marching based volumetric rendering comprises the steps of: preparing volume data to acquire scan data; generating a ray from a viewpoint through the 3D volume; sampling voxel values at regular intervals along each ray; mapping intensity values of voxels to color and opacity values for visualization; combining the color and opacity values from the sampled voxels using a compositing operation; enhancing the 3D visualization; and rendering a 3D output model.
[00019] Optionally, the organ segmentation comprises the steps of: preprocessing scan images; selecting a region of interest; thresholding by setting a threshold value; selecting a seed region to initiate the segmentation; expanding the segmented region by region growing; detecting an edge in the scan images; and refining the organ segmentation by adjusting the shape and size of the segmented regions.
[00020] Optionally, the rendering of the 3D output model is based on deep learning techniques.
[00021] Optionally, the thresholding utilizes advanced machine learning algorithms.
[00022] Optionally, the system (110) offers a comprehensive suite of tools to be used by the surgeon; the tools include measurement tools for cuts, angles, path planning, port placement, assistance, and comparison, and automatic segmentation tools for identifying malignant parts within an image.
[00023] Other embodiments, systems, methods, apparatus aspects, and features of the invention will become apparent to those skilled in the art from the following detailed description, the accompanying drawings, and the appended claims. It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[00024] The summary above, as well as the following detailed description of the disclosure, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
Figure 1 illustrates an example implementation of a multi arm teleoperated robotic surgical system which can be used with one or more features in accordance with an embodiment of the disclosure;
Figure 2 illustrates a perspective view of a robotic arm with a tool interface, in accordance with an embodiment of the invention;
Figure 3 illustrates a multi arm teleoperated robotic surgical system with an articulating tip of the robotic surgical instrument being inserted into the patient body lying on the operating table, in accordance with an embodiment of the invention;
Figure 4(a) illustrates a DICOM series parsing performed using information of patient scans, slicing the scans, and 3D reconstruction, in accordance with an embodiment of the invention;
Figure 4(b) illustrates a flow chart of DICOM series parsing to support multi-modality image files, in accordance with an embodiment of the invention;
Figure 5(a) illustrates 3D reconstruction for volume rendering using either Sagittal, Axial, or Coronal techniques, in accordance with an embodiment of the invention;
Figure 5(b) illustrates a flow chart of 3D reconstruction for volume rendering in accordance with an embodiment of the invention;
Figure 6 illustrates a flowchart of ray marching-based volume rendering for CT scans in accordance with an embodiment of the invention;
Figure 7 illustrates a ray marching-based volume rendering for CT scans using the four sampling techniques, in accordance with an embodiment of the invention;
Figure 8 illustrates a flow chart of CT organ segmentation, in accordance with an embodiment of the invention; and
Figure 9 illustrates a direct volume rendering, in accordance with an embodiment of the invention.
DETAILED DESCRIPTION OF THE DISCLOSURE
[00025] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended, such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as illustrated therein being contemplated as would normally occur to one skilled in the art to which the disclosure relates.
[00026] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the disclosure and are not intended to be restrictive thereof. Throughout the patent specification, a convention employed is that in the appended drawings, like numerals denote like components.
[00027] Reference throughout this specification to “an embodiment”, “another embodiment”, “an implementation”, “another implementation” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrase “in an embodiment”, “in another embodiment”, “in one implementation”, “in another implementation”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[00028] The terms “comprises”, “comprising”, or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures preceded by “comprises... a” does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or additional devices or additional sub-systems or additional elements or additional structures.
[00029] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The device, system, and examples provided herein are illustrative only and not intended to be limiting.
[00030] The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Further, the term sterile barrier and sterile adapter denotes the same meaning and may be used interchangeably throughout the description.
[00031] Embodiments of the disclosure will be described below in detail with reference to the accompanying drawings.
[00032] Figure 1 illustrates an example implementation of a multi arm teleoperated robotic surgical system which can be used with one or more features in accordance with an embodiment of the disclosure. Specifically, figure 1 illustrates the multi arm teleoperated robotic surgical system (100) having four robotic arms (102a), (102b), (102c), (102d) mounted on four robotic arm carts around an operating table (104). The four robotic arms (102a), (102b), (102c), (102d) as depicted in figure 1 are for illustration purposes, and the number of robotic arms may vary depending upon the type of surgery. The four robotic arms (102a), (102b), (102c), (102d) are arranged along the operating table (104), although other arrangements are also possible. The robotic arms (102a), (102b), (102c), (102d) may be separately mounted on the four robotic arm carts, may be mechanically and/ or electronically connected with each other, or may be connected to a central body (not shown) such that the robotic arms (102a), (102b), (102c), (102d) branch out of the central body. Further, the multi arm teleoperated robotic surgical system (100) may include a surgeon console system (106), a vision cart (108), a medical imaging analysis system (110) comprising a graphical processor (112), a server (114), a user interface device (116), and an input device (118) like a mouse, pen drive, etc., and a robotic surgical instrument accessory table (122). The server (114) is configured to store a database (124) including at least one of a diagnostic scan and patient details for one or more patients. The graphical processor (112) is configured to extract relevant data (128) from the database (124) based on the received input. The surgeon console system (106) comprises a master controller (126). Further, the robotic surgical system (100) may include other suitable equipment for supporting the functionality of the robotic components.
[00033] Also, the surgeon/operator/user may be based at a remote location. In that case, the surgeon console system (106) may be located in any room other than the robotic surgery environment, or the surgeon console system (106) may be operated from a remote location. The communication between the surgeon console system (106) and the robotic surgical system (100) may be either wired or wireless.
[00034] The medical imaging analysis system (110) is named the “Chitrasa® system” and is integrated with the SSI Mantra robotic surgical system (100). The word "Chitrasa" is derived from the Sanskrit words "Chitra", meaning image, and "Rasa", meaning essence. Chitrasa is designed to provide the essence of medical images for in-depth analysis and surgical path planning to the doctors. Thus, Chitrasa assists the surgeons/doctors in making crucial decisions at the time of performing surgeries. As shown in figure 1, the graphical processor (112) is provided in the surgeon console (106). The graphical processor (112) is operably connected to the server (114), which is configured to store a database containing information about patients and their relevant medical scans. The connection between the server (114) and the graphical processor (112) may be either wired or wireless. The server (114) may be remote or local to the graphical processor (112). The Chitrasa system (110) operates along with the robotic surgical system (100) to show the position of a robotic surgical instrument in patient space through real-time mapping.
[00035] Figure 2 illustrates a perspective view of a robotic arm with a tool interface assembly in accordance with an embodiment of the invention. The tool interface assembly (200) is the main component for performing the robotic surgery on a patient. The robotic arm (202) as shown in Figure 2 is for the illustration purpose only and other robotic arms with different configurations, degree of freedom (DOF) and shapes may be used.
[00036] The tool interface assembly (200) comprises an actuator assembly (206) mounted on a guiding mechanism and capable of linearly moving along the guiding mechanism. The guiding mechanism depicted is a guide rail (208). The movement of the actuator assembly (206) along the guide rail (208) is controlled by the surgeon with the help of controllers on the surgeon console system (106) as shown in figure 1. A sterile adapter assembly (210) is releasably mounted on the actuator assembly (206) to separate a non-sterile part of the robotic arm from a sterile robotic surgical tool assembly (212). A locking mechanism (not shown) is provided to releasably lock and unlock the sterile adapter assembly (210) with the actuator assembly (206). The sterile adapter assembly (210) detachably engages with the actuator assembly (206), which drives and controls the sterile robotic surgical instrument in a sterile field. In another embodiment, the robotic surgical tool assembly (212) may also releasably lock/ unlock or engage/ disengage with the sterile adapter assembly (210) by means of a push button (214).
[00037] The robotic surgical tool assembly (212) includes a shaft (216) and end effector (120). The end effector (120) may comprise of a robotic surgical instrument or may be configured to attach a robotic surgical instrument. Further, the end effector (120) may include a functional mechanical degree of freedom, such as jaws that open or close, or a knife that translates along a path. The robotic surgical tool assembly (212) may also contain stored (e.g., on a semiconductor memory inside the instrument) information that may be permanent or may be updatable by the robotic surgical system (100).
[00038] A cannula gripper (218) is provided on the tool interface assembly (200) and is configured to grip a cannula (220) which receives the shaft (216) through an opening (not shown). The cannula gripper (218) is detachably attached to one end of the tool interface assembly (200). Alternatively, the cannula gripper (218) may have a circular body for receiving the cannula (220) and comprise grooves (not shown) to grip the cannula (220) at a stationary position. The cannula gripper (218) may be affixed at a mount (222) of the tool interface assembly (200) and may be configured to grip or secure the cannula (220) such that the cannula (220) is stable while performing robotic surgical operations.
[00039] The end effector (120) may be a robotic surgical instrument like a forceps, a needle, etc., associated with one or more surgical tasks or an endoscope/ultrasound probe and the like. Some robotic surgical instruments further provide an articulated support (sometimes referred to as a "wrist") for the robotic surgical tool assembly (212) such that the position and orientation of the robotic surgical tool assembly (212) may be manipulated with one or more mechanical degrees of freedom in relation to the shaft (216). The robotic surgical instrument may have an articulating tip.
[00040] Figure 3 illustrates a multi arm teleoperated robotic surgical system (100) with an articulating tip of the robotic surgical instrument (120) being inserted into the patient body lying on the operating table (104). The position of the articulating tip of the robotic surgical instrument (120) is superimposed in the Chitrasa system and fused with the CT medical imaging map. The position of the articulating tip of the robotic surgical instrument (120) is determined based on the port position and the input sent from the surgeon console (106). The final position of the robotic surgical instrument (120) may be derived and superimposed to find the position of the articulating tip of the robotic surgical instrument (120) in patient space. This allows the surgeon to monitor the movement of the articulating tip of the robotic surgical instrument (120) inside the patient body.
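The disclosure does not set out the superimposition mathematically. As a non-limiting sketch, assuming the registration between the robot base frame and the CT volume frame is available as a 4x4 homogeneous transform (the transform, function name, and sample values below are illustrative assumptions, not part of the disclosure), the derived tip position could be carried into patient/image space as follows:

```python
import numpy as np

def map_tip_to_image_space(tip_robot_mm, T_robot_to_image):
    """Map a tool-tip position from the robot base frame into the CT volume
    frame using a homogeneous transform (assumed known from registration)."""
    p = np.append(tip_robot_mm, 1.0)       # homogeneous coordinates
    return (T_robot_to_image @ p)[:3]      # drop the homogeneous component

# Illustrative registration: a pure 100 mm translation along x.
T = np.eye(4)
T[0, 3] = 100.0
print(map_tip_to_image_space(np.array([10.0, 20.0, 30.0]), T))  # [110. 20. 30.]
```

In practice the transform would be refreshed as the arm moves, so the rendered tip tracks the instrument in real time.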
[00041] The medical sector relies heavily on diagnostic scans not limited to computerized tomography (CT) and magnetic resonance imaging (MRI) scans for diagnosis. The CT and MRI scans allow the doctors to analyze and study the internal parts of the body. The doctors and surgeons rely upon CT and MRI scans to help diagnose tumors and internal bleeding or check for internal damage. The CT and MRI scans are extremely important during surgical procedures as well. The CT scans show bones and organs, as well as detailed anatomy of glands and blood vessels. The CT scans are taken shortly before surgery to confirm the location of a tumor and establish the location of the internal organs. The CT and MRI scans are essentially a two-dimensional (2D) medium of information.
[00042] Medical imaging analysis is a field that involves the application of advanced techniques and algorithms to extract valuable information from medical images like CT, MRI, Ultrasound, PET etc. It plays a crucial role in healthcare by aiding in disease diagnosis, treatment planning, monitoring, and robotic surgical path planning. Through image processing, computer vision, and artificial intelligence, medical imaging analysis enables the accurate identification of abnormalities, quantitative measurements of important features, and automated analysis. It has revolutionized the field by providing healthcare professionals with powerful tools for comprehensive image interpretation, leading to improved patient outcomes and more effective medical interventions.
[00043] The medical imaging techniques produce the reports which can be better understood by a radiologist. However, the doctors/surgeons who provide treatment/surgery may not have deep knowledge of radiology. Also, during surgery, when the robotic surgical tool has to traverse inside the patient body, a proper path or trajectory planning is required.
[00044] The three main types of simulated/digital realities are virtual reality, augmented reality, and mixed reality. The virtual reality is a simulated environment that is independent of the actual surroundings around the user. The user may wear the virtual reality headset that provides the user with a completely immersive experience. A simulated world is projected in virtual reality lenses which is substantially cut off and independent from the real world and environment. The advantage of having a virtual reality simulation is that an extended reality user has control over all the aspects of the environment. The surroundings, holographic projections, and the interactions the user can have with these holographic projections can be determined and controlled by the extended reality user. The virtual reality is an immersive experience which may give the user a feeling as if he/she is present in a simulated environment.
[00045] Another type of immersive technology is the augmented reality. In the augmented reality, holographic projections are placed while keeping the surroundings the same as the actual one. Yet another type of immersive technology is the mixed reality. The mixed reality is the merging of real and virtual worlds to produce new environments and visualizations, where physical and digital objects co-exist and interact in real time. The holographic projections interact with the surroundings and the object in them. For example, in mixed reality, a holographic object can be placed on a table as an actual object. It will recognize the table as a solid body and will not pass through it. In mixed reality, the holographic projections and the surroundings are interdependent. It makes holograms interactive that co-exist with the surroundings.
[00046] The graphical processor (112) of the medical imaging analysis system (110) receives patient scans/images from a database stored in the server (114). The patient scans/images are in the “.DICOM” format. DICOM stands for Digital Imaging and Communications in Medicine. DICOM is a widely used standard in the medical imaging field. It defines a set of rules and protocols for the secure and efficient exchange of medical images and related information between different healthcare systems and devices. DICOM enables seamless communication between various medical imaging devices such as CT scanners, MRI machines, ultrasound machines, and picture archiving and communication systems (PACS). This standard ensures interoperability and consistency in medical imaging, allowing healthcare professionals to access, share, and analyze patient images efficiently, leading to improved diagnosis and better patient care.
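Although the disclosure does not name any particular toolkit, the metadata fields discussed here can be read with the open-source pydicom library. A minimal sketch, with a hypothetical file name:

```python
import pydicom

# Read one DICOM file and inspect the metadata a parser relies on.
ds = pydicom.dcmread("slice_0001.dcm")  # hypothetical file name

print(ds.PatientName)               # patient demographics
print(ds.Modality)                  # imaging modality, e.g. "CT"
print(ds.SeriesInstanceUID)         # groups slices into a series
print(ds.ImageOrientationPatient)   # row/column direction cosines
print(ds.ImagePositionPatient)      # x, y, z of the first voxel, in mm

pixels = ds.pixel_array             # the 2D slice as a NumPy array
```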
[00047] The DICOM parsing pipeline for radiology data processing involves data collection from hospital PACS servers. This data includes DICOM (Digital Imaging and Communications in Medicine) files containing medical images. In addition to data from hospital servers, the system also supports loading DICOM files from external sources like CDs, pen drives, or local storage on the system. The collected DICOM files are processed to extract and organize the relevant imaging data. The files are arranged in the correct volumetric order to ensure accurate and coherent representation of the patient's anatomy. From the input axial slices, the pipeline generates sagittal and coronal slices to provide a comprehensive view of the patient's anatomy in different planes. The processed volumetric data is used to create a 3D volume rendering using advanced techniques like Direct Volume Rendering (DVR). This technique enables a realistic visualization of the patient's anatomy in 3D.
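Generating sagittal and coronal slices from the input axial slices amounts to reslicing the stacked volume along the other two axes. A minimal sketch, assuming the axial slices have already been stacked into a NumPy array (and ignoring anisotropic voxel spacing, which a production pipeline would correct for by rescaling with the slice thickness and pixel spacing):

```python
import numpy as np

# Assume `volume` is a stack of axial slices with shape (z, y, x),
# e.g. built by stacking sorted pydicom pixel arrays; values are dummies.
volume = np.random.randint(0, 2000, size=(120, 512, 512), dtype=np.int16)

axial    = volume[60, :, :]   # one axial slice    (y, x plane)
coronal  = volume[:, 256, :]  # one coronal slice  (z, x plane)
sagittal = volume[:, :, 256]  # one sagittal slice (z, y plane)
```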
[00048] The medical imaging analysis system (110) includes a volume rendering transfer function editor that allows users to manipulate the visualization parameters at runtime. The transfer function can be adjusted to highlight specific anatomical structures such as bones, soft tissues, lungs, liver, etc., in different segmented ways. The medical imaging analysis system (110) enables users to perform segmentation of specific anatomical structures, such as vascular structures, and export them as separate images for further analysis and documentation. The pipeline employs an advanced Chitra Rachna Rendering algorithm that enables the 3D reconstruction of medical images in Virtual Reality (VR) and Augmented Reality (AR) environments. Additionally, the 3D visualization can be displayed on specialized 3D monitors for detailed examination and diagnosis.
[00049] Figure 4(a) illustrates a DICOM series parsing performed using information of patient scans, slicing the scans, and again 3D construction, in accordance with an embodiment of the invention. Figure 4(b) illustrates a DICOM series parsing flow to support multi-modality image files, in accordance with an embodiment of the invention. DICOM Series Parsing is the process of extracting and organizing medical imaging data from DICOM files into coherent series for further analysis and visualization. DICOM (Digital Imaging and Communications in Medicine) is the standard format used in the healthcare industry to store and transmit medical images and associated information. The DICOM Series Parsing involves the following steps. In step (402) data acquisition is performed by collecting DICOM files from various sources, such as hospital PACS servers, CDs, Pen drives, or local storage. These files may contain various types of medical images, such as CT scans, MRI scans, X-rays, etc.
[00050] In step (404), the DICOM file is identified. Each DICOM file contains metadata, including information about the patient, study, series, and instance. The parsing process begins by identifying and extracting this metadata from each DICOM file. Step (406) involves series grouping. The DICOM files are organized into series based on their study and series-specific metadata. A series is a collection of related DICOM files representing different slices or views of the same anatomical region. For example, a CT scan of the head may have multiple series for axial, sagittal, and coronal views.
[00051] The step (408) involves sorting of slices obtained in step (406). Within each series, the DICOM slices are sorted in the correct volumetric order to ensure proper representation of the anatomy. DICOM files often include metadata that can be used to determine the spatial position of each slice relative to others. Metadata is extracted in step (410). Relevant metadata from the DICOM files is extracted for each series, such as patient demographics, imaging modality, acquisition parameters, and image orientation. This information is crucial for accurate interpretation and analysis of the medical images. Error handling and validation is performed in step (412). During the parsing process, error handling and validation are important to ensure data integrity and consistency. This may involve checking for missing or corrupted DICOM files, validating the integrity of metadata, and handling any issues that may arise during parsing.
[00052] The step (414) is about data storage and management. The parsed DICOM series and associated metadata are typically stored in a structured database or file system for easy retrieval and efficient management. Proper indexing and organization of the data facilitates quick access to specific studies or series when needed. DICOM Series Parsing is a crucial step in medical image processing and analysis. Once the series is properly parsed and organized, it can be used for various applications, such as 3D visualization, image analysis, diagnosis, treatment planning, and research. Overall, the DICOM parsing pipeline provides a comprehensive and powerful tool for processing, analyzing, and visualizing radiology data, facilitating accurate diagnoses and treatment planning for patients.
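As a hedged illustration of steps (402) to (408), the following sketch (again using pydicom, which the disclosure does not mandate) groups a folder of DICOM files by SeriesInstanceUID and sorts each series into volumetric order along the slice normal; the folder layout and function name are assumptions:

```python
from collections import defaultdict
from pathlib import Path

import numpy as np
import pydicom

def load_sorted_series(folder):
    """Group DICOM files by SeriesInstanceUID and sort each series into
    volumetric order along the slice normal."""
    series = defaultdict(list)
    for path in Path(folder).glob("*.dcm"):       # step (402): acquisition
        ds = pydicom.dcmread(path)                # step (404): identification
        series[ds.SeriesInstanceUID].append(ds)   # step (406): grouping

    sorted_series = {}
    for uid, slices in series.items():            # step (408): sorting
        # Slice normal = cross product of the row and column direction cosines.
        iop = np.array(slices[0].ImageOrientationPatient, dtype=float)
        normal = np.cross(iop[:3], iop[3:])
        # Order slices by projecting each slice position onto the normal.
        slices.sort(key=lambda s: float(
            np.dot(normal, np.array(s.ImagePositionPatient, dtype=float))))
        sorted_series[uid] = slices
    return sorted_series
```

Sorting on the projection of ImagePositionPatient onto the slice normal, rather than on InstanceNumber, is the conventional way to guarantee correct volumetric order even when instance numbering is unreliable.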
[00053] Figure 5(a) illustrates 3D reconstruction for volume rendering using either Sagittal, Axial, or Coronal techniques, in accordance with an embodiment of the invention. Figure 5(b) illustrates a flow chart of 3D reconstruction for volume rendering in accordance with an embodiment of the invention. Volume standards in medical imaging refer to the established specifications and guidelines for representing and storing volumetric data, such as 3D medical images acquired through CT (Computed Tomography), MRI (Magnetic Resonance Imaging), or other imaging modalities. These standards ensure interoperability, data consistency, and accurate interpretation across different medical imaging systems and software.
[00054] The steps involved in volume standards are as follows. In step (502), a suitable format for data is selected for storing volumetric data. Common volume formats in medical imaging include DICOM (Digital Imaging and Communications in Medicine) and Neuroimaging Informatics Technology Initiative (NIfTI). DICOM is widely used in the medical industry, providing comprehensive metadata along with image data, making it suitable for archiving and communication. NIfTI is specifically designed for neuroimaging studies and is popular in neuroscience research. The step (504) involves representation of volumetric data. The volumetric data consists of a series of 2D slices that are acquired along different axes (axial, sagittal, and coronal). Volume standards define the spatial orientation, voxel dimensions, and origin of the data to ensure consistent representation and accurate rendering of the 3D volume.
[00055] The step (506) involves collection of metadata and attributes. Volume standards include specific metadata and attributes that describe various aspects of the volumetric data, such as patient demographics, imaging parameters, acquisition technique, modality, and study details. This information is essential for proper interpretation and clinical decision-making. Volumetric reconstruction is carried out in the step (508). If the series represents volumetric data, such as CT or MRI scans, the parsing process involves reconstructing the 3D volume by aligning and stacking the sorted slices properly. This results in a coherent 3D representation of the patient's anatomy. The images are compressed and stored in step (510). Volume standards define protocols for transferring volumetric data between different systems and software platforms in step (512). This enables seamless exchange of medical images and ensures that the data is accurately interpreted and displayed regardless of the receiving system.
[00056] Quality assurance and validation are performed in step (514). Volume standards undergo rigorous testing and validation to ensure data integrity, accuracy, and compatibility with different imaging systems and software. Compliance with these standards is critical for maintaining high-quality medical imaging practices. In step (516), interoperability is checked. One of the primary goals of volume standards is to promote interoperability among different medical imaging devices, software, and healthcare institutions. Interoperable volume data allows seamless sharing and collaboration between healthcare professionals and researchers. Volume standards are continuously updated and revised in step (518) to keep pace with advancements in medical imaging technology and evolving clinical needs. New versions of standards may address emerging challenges and incorporate improved data representation techniques. Adhering to volume standards is essential for ensuring the accuracy, consistency, and efficiency of medical imaging practices. These standards facilitate effective communication and exchange of medical images, ultimately contributing to improved patient care and medical research.
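As a concrete but non-authoritative example of steps (502) to (510), a sorted slice stack could be written to the NIfTI format mentioned above using the open-source nibabel library; the spacing values, array contents, and file name below are placeholders:

```python
import nibabel as nib
import numpy as np

# Assume `volume` is the sorted slice stack, shape (x, y, z), with voxel
# spacing read from PixelSpacing / SliceThickness (placeholder values).
volume = np.zeros((512, 512, 120), dtype=np.int16)
spacing = (0.7, 0.7, 1.25)  # mm per voxel along x, y, z

# A diagonal affine encodes voxel dimensions (and, in general, origin and
# orientation), as the volume standards require for consistent rendering.
affine = np.diag([spacing[0], spacing[1], spacing[2], 1.0])
nib.save(nib.Nifti1Image(volume, affine), "ct_volume.nii.gz")
```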
[00057] Direct Volume Rendering (DVR) is extensively used in medical imaging analysis to visualize the DICOM Data and explore complex volumetric datasets obtained from modalities like CT, MRI, and PET. DVR enables accurate and real-time 3D visualization, allowing healthcare professionals to study internal structures, detect anomalies, and plan treatments more effectively. This technique enhances the understanding of anatomical structures, aids in tumor detection and characterization, assists in cardiovascular evaluation, and supports preoperative planning. By providing interactive and detailed representations of medical data, DVR improves diagnostic accuracy, facilitates surgical navigation, and enhances medical training and research, leading to better patient outcomes and advancements in healthcare.
[00058] DVR can be done using the following techniques:
1. Splatting: Splatting involves projecting 3D data onto a 2D image plane by blending the contributions of individual volume elements (voxels) onto the pixel. This technique is computationally efficient and allows for real-time rendering of volumetric data.
2. Shear-Warp: Shear-Warp is a DVR technique that decomposes the 3D volume into a set of 2D parallel slices. These slices are then sheared and warped to align with the viewing plane, and then blended to generate the final 3D image.
3. Texture-based volume rendering: This technique involves mapping the 3D volume data to a 3D texture, which is then rendered onto 2D polygons. The texture is manipulated based on the viewing angle and lighting conditions to generate the final 3D visualization.
4. Ray casting: Ray casting involves tracing rays from the viewer's perspective into the volume and accumulating the contributions of voxels along the ray path to generate the final image. This technique allows for accurate rendering of volume data but can be computationally intensive.
5. Depth peeling: Depth peeling is a technique used to render transparent or semi-transparent structures in the volume. It involves rendering the volume multiple times, peeling away layers at different depths to create a more detailed and realistic visualization.
6. Ray marching: Ray marching is a sophisticated DVR technique that enables accurate and flexible visualization of volumetric data. Unlike traditional ray casting, which samples voxels at fixed intervals, ray marching steps through the volume along the ray direction until certain conditions are met. This method allows for adaptive sampling, meaning it adjusts the step size based on the varying density of the volume, resulting in improved rendering efficiency and higher image quality.
GPU-based volume rendering can be used along with the ray marching technique. With the advancements in graphics processing units (GPUs), modern DVR techniques leverage the parallel processing power of GPUs to accelerate the rendering process, enabling real-time and interactive visualization of large volumetric datasets. These techniques offer various advantages and trade-offs, and the choice of the DVR technique depends on the specific application and requirements of the visualization task. Ray marching is particularly suitable for complex volumetric datasets with irregular structures, as it efficiently captures fine details and provides realistic renderings with smooth shading and lighting effects.
[00059] Figure 6 illustrates a flowchart of ray marching-based volume rendering for CT scans, in accordance with an embodiment of the invention. Ray marching-based volume rendering is a technique used to create 3D visualizations of volumetric data, such as CT (Computed Tomography) scans. It involves marching along rays through the 3D volume data and accumulating the color and opacity values of the voxels encountered along the rays. This process is repeated for each pixel in the output image to generate a final 3D rendering of the CT scan. Ray marching-based volume rendering for CT scans is performed as per the following steps. In step (602), preparation of volume data is performed to acquire the CT scan data, which consists of a stack of 2D slices representing the cross-sectional images of the patient's anatomy. The data needs to be converted into a 3D volume representation, where each voxel (3D pixel) corresponds to a specific location in space and has an associated intensity value.
[00060] The step (604) is about ray generation. For each pixel in the output image, a ray is generated from the viewpoint through the 3D volume. The rays are cast from the camera (viewpoint) through the image plane and into the 3D volume data. Ray marching is performed in step (606). The ray marching process involves stepping along each ray and sampling the voxel values at regular intervals (step size). At each sample point, the color and opacity (alpha) values of the voxel are accumulated. In step (608), a transfer function is used to map the intensity values of the voxels to color and opacity values for visualization. The transfer function defines how different tissue types in the CT scan are represented with varying colors and opacities. In step (610), compositing is performed. As the ray marches through the volume, the color and opacity values from the sampled voxels are combined using a compositing operation, such as the alpha blending technique. This compositing process calculates the final color and opacity for each pixel based on the accumulated values along the ray.
[00061] Shading and lighting are applied in step (612) to enhance the 3D visualization. Shading helps to add depth and realism to the image, while lighting effects can highlight specific structures in the volume. In step (614), the output is rendered. After ray marching through all the rays and compositing the voxel values, the final 3D rendering of the CT scan is produced. The output image represents a 2D projection of the 3D volume, showing the internal structures and tissues within the patient's anatomy. Ray marching-based volume rendering is a computationally intensive process, but it provides detailed and realistic 3D visualizations of CT scans, allowing healthcare professionals to better understand and analyze the patient's anatomy for diagnosis, treatment planning, and research purposes. Figure 7 illustrates a ray marching-based volume rendering for CT scans using the four sampling techniques as explained in figure 6.
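Steps (604) to (610) can be condensed into a few lines of code. The sketch below marches a single ray with front-to-back alpha compositing; the toy transfer function, nearest-neighbour sampling, and all parameter values are illustrative assumptions (a production renderer would run one such loop per pixel on the GPU, with trilinear interpolation and the shading of step (612)):

```python
import numpy as np

def transfer_function(intensity):
    """Map a CT intensity to (rgb, alpha). A toy ramp: denser tissue is
    rendered brighter and more opaque."""
    a = np.clip((intensity - 300.0) / 1200.0, 0.0, 1.0)
    return np.array([a, a, a]), a * 0.1

def march_ray(volume, origin, direction, step=1.0, n_steps=512):
    """Front-to-back compositing along one ray through the volume."""
    color = np.zeros(3)
    alpha = 0.0
    pos = origin.astype(float)
    for _ in range(n_steps):
        idx = np.round(pos).astype(int)      # nearest-neighbour sampling
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break                            # ray has left the volume
        c, a = transfer_function(volume[tuple(idx)])
        color += (1.0 - alpha) * a * c       # accumulate colour
        alpha += (1.0 - alpha) * a           # accumulate opacity
        if alpha > 0.99:                     # early ray termination
            break
        pos += step * direction
    return color, alpha
```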
[00062] Figure 8 illustrates a flowchart of CT organ segmentation, in accordance with an embodiment of the invention. The CT organ segmentation is a process of identifying and delineating specific organs or anatomical structures from a CT scan. It is an essential step in medical image analysis and plays a crucial role in medical diagnosis, treatment planning, and research. Organ segmentation involves several steps to accurately identify and outline the regions of interest. The steps involved in CT organ segmentation are as follows. The step (802) involves image preprocessing. The first step is to preprocess the CT scan images to improve their quality and enhance the contrast between different tissues. Preprocessing techniques may include noise reduction, intensity normalization, and filtering to remove artifacts and improve image clarity.
[00063] Step (804) involves selection of region of interest. The radiologist or medical professional selects the region of interest in the CT scan that contains the organ to be segmented. This step helps to focus the segmentation process on the specific area of interest, reducing computational complexity. Thresholding is done in step (806). Thresholding is a fundamental technique used in CT organ segmentation. It involves setting a threshold value to distinguish the voxels representing the organ of interest from surrounding tissues based on their intensity values. Voxels above the threshold are considered part of the organ, while those below are ignored.
[00064] Thresholding can be done using any one or a combination of the four techniques explained below. In the first technique, a seed point or seed region may be manually or automatically selected in step (808) to initiate the segmentation process. The algorithm then grows the region based on the similarity of neighboring voxels. In the second technique, region growing algorithms expand the segmented region from the seed point by iteratively adding neighboring voxels that meet certain intensity or texture criteria in step (810). This process continues until no more voxels can be added to the region. In the third technique of step (812), an edge is detected. In cases where organ boundaries are not well-defined in the CT scan, edge detection techniques may be employed to identify sharp transitions in voxel intensities, indicating organ edges. In the fourth technique of step (814), morphological operations, such as erosion and dilation, can be applied to refine the organ segmentation by adjusting the shape and size of the segmented regions. Any one or a combination of the techniques shown in (808), (810), (812), and (814) can be used for thresholding.
[00065] After this, post-processing and refinement are performed in step (816). After the initial segmentation, post-processing steps are applied to refine the results and correct any inaccuracies. These steps may include removing isolated voxels, filling gaps in the segmentation, and smoothing the organ contours. CT organ segmentation is a complex task that may involve manual interaction and automatic algorithms. Accuracy and robustness are essential factors in achieving reliable segmentations, as they directly impact clinical decisions and treatment planning. Advances in machine learning and deep learning techniques have also contributed to improving the accuracy and efficiency of CT organ segmentation.
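A compact, non-authoritative sketch of steps (806) to (816) follows, combining thresholding, a crude stand-in for region growing (keeping the connected component that contains the seed), and morphological refinement with SciPy; the threshold, seed, and function name are assumptions:

```python
import numpy as np
from scipy import ndimage

def segment_organ(volume, threshold, seed):
    """Threshold the volume, keep the connected component containing the
    seed voxel, then refine with morphological closing.

    `seed` is a (z, y, x) tuple assumed to lie inside the organ of interest
    and above the threshold; otherwise the background would be returned."""
    mask = volume > threshold                   # step (806): thresholding
    labels, _ = ndimage.label(mask)             # connected components
    region = labels == labels[seed]             # steps (808)/(810), crudely
    region = ndimage.binary_closing(region, iterations=2)  # step (814)/(816)
    return region
```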
[00066] According to an embodiment, the rendering of the 3D output model is based on deep learning techniques. Figure 9 illustrates a direct volume rendering, in accordance with an embodiment of the invention. Further, the thresholding utilizes advanced machine learning algorithms. Also, the system (110) offers a comprehensive suite of tools to be used by the surgeon; the tools include measurement tools for cuts, angles, path planning, port placement, assistance, and comparison, and automatic segmentation tools for identifying malignant parts within an image.
[00067] The present disclosure provides many advantages. It provides multimodality image compatibility and a ChitraRachna volume rendering algorithm. ChitraRachna helps to stack 2D slices into volumes and render them using direct volume rendering principles. This algorithm incorporates ambient color and global illumination-based lighting, resulting in vivid and realistic 3D reconstructions of medical images. Chitrasa effortlessly reads and accepts DICOM images from PACS servers, as well as images from CDs/DVDs and flash drives, making it highly versatile and compatible with various medical imaging systems.
[00068] Chitrasa is specifically designed with surgeons in mind; it eliminates complex steps typically required by traditional DICOM viewers. The intuitive interface streamlines the workflow, making it easy and efficient for doctors to navigate and utilize the features of Chitrasa. It provides virtual reality and augmented reality compatibility. The Chitrasa® ChitraRachna algorithm enables fast rendering of images, allowing for viewing on virtual reality headsets and superimposing on augmented reality displays. This immersive experience provides surgeons with a deeper understanding of patient anatomy during surgical planning.
[00069] Chitrasa provides advanced annotation tools. It offers a comprehensive suite of advanced annotation tools, including marking, measuring, shape manipulation (addition, subtraction, merging), blending, cutting, duplication, and precise distance, volume, and angle measurements. These tools facilitate seamless annotation and presurgical planning, empowering surgeons with efficient and accurate data analysis. Chitrasa provides a real-time instrument mapping technique. It seamlessly integrates with the SSI Mantra surgical robotics system, providing real-time mapping of instrument positions within the patient's anatomy. This integration enables surgeons to plan better and analyze trajectories and pathways around critical vessels and arteries, enhancing surgical precision and safety.
[00070] Chitrasa combines advanced medical imaging capabilities with surgical planning tools, bringing new possibilities to the operating room. With its user-friendly interface, cutting-edge visualization, and integration with surgical robotics, Chitrasa empowers surgeons to make informed decisions and deliver superior patient care. It follows Volume Standards to support 3D volume reconstruction as well as processing of DICOM data. It is designed to communicate to surgeons directly about the patient's physical condition by enhancing medical imaging using AI and automation technology. It also provides virtual reality analysis and augmentation superimposition analysis, which helps doctors to better understand the case complexity and craft the perfect approach. Chitrasa can help surgeons during the surgery. It can superimpose an AR volumetric image on top of the patient, which can exponentially enhance the surgeon's visual awareness of the real-time surgical situation.
[00071] The mixed reality DICOM viewer is capable of loading DICOM files from CDs, pen drives, local disks, and PACS servers, and supports CT, CT Angio, MRI, Ultrasound, PET, and other popular modalities. DICOM files can be visualized in real time using 3D volume rendering technology with its UNITY 3D graphic rendering engine and an NVIDIA RTX 2060 GPU for parallel graphics processing. Advanced computer vision algorithms stored in the memory of the graphical processor allow segmentation of major organs and vesicular structures. A CNN (convolutional neural network) model is used to segment and differentiate many types of cancer cells. The mixed reality DICOM viewer provides the ability to export 3D printable formats like “.STL” for physical study. The mixed reality DICOM viewer is designed for real-time surgical planning and is capable of showing SSI Mantra instrument positions in real time. The DICOM viewer may have advanced annotation and surgical planning tools for supporting visualization and superimposition of patient scans on a physical patient or a room and analyzing the scans.
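The export of a segmented organ to a 3D-printable “.STL” mesh can be illustrated with the marching cubes algorithm. The sketch below uses scikit-image (not named in the disclosure) and a hand-rolled ASCII STL writer; the function name and spacing parameter are assumptions:

```python
import numpy as np
from skimage import measure

def export_stl(mask, path, spacing=(1.0, 1.0, 1.0)):
    """Mesh a binary organ mask with marching cubes and write an ASCII STL."""
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    with open(path, "w") as f:
        f.write("solid organ\n")
        for tri in faces:
            # Face normal from the triangle's edge vectors.
            n = np.cross(verts[tri[1]] - verts[tri[0]],
                         verts[tri[2]] - verts[tri[0]])
            n /= (np.linalg.norm(n) + 1e-12)
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in verts[tri]:
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid organ\n")
```

Passing the voxel spacing to marching cubes keeps the exported mesh in millimetres, so a printed model matches the patient's anatomy at scale.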
[00072] The foregoing description of exemplary embodiments of the present disclosure has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiment was chosen and described in order to best explain the principles of the disclosure and its practical application, to thereby enable others skilled in the art to best utilize the disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, and the description is intended to cover the application or implementation without departing from the spirit or scope of the claims of the present disclosure.
[00073] Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.
[00074] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person in the art, various working modifications may be made to the apparatus in order to implement the inventive concept as taught herein.

List of reference numerals:
Sr. No. Component Reference Numeral(s)
1 Multi-arm robotic surgical system 100
2 Robotic arms 102a, 102b, 102c, 102d
3 Operating table 104
4 Surgeon console system 106
5 Vision cart 108
6 Medical imaging analysis system 110
7 Graphical processor 112
8 Server 114
9 User interface device 116
10 Input device 118
11 Surgical instrument accessory table 122
12 Master controller 126


CLAIMS:
1. A medical imaging analysis system (110) for a multi-arm robotic surgical system (100) comprising one or more robotic arms (102a), (102b), (102c), (102d) each coupled to a robotic surgical instrument at its distal end, whereby the one or more robotic arms (102a), (102b), (102c), (102d) are arranged along an operating table (104), the system (110) comprising:
a user interface device (116) configured to receive an input from a surgeon and display a perspective projection of a 3D model; and
a graphical processor (112) coupled to the user interface device (116) and configured to:
extract a relevant data (128) based on the received input, from a database (124) stored on a server (114) operably connected to the graphical processor (112), wherein the server (114) is configured to store a database (124) including at least one of a diagnostic scan and patient details for one or more patients;
parse the extracted relevant data (128);
render a 3D model of the parsed data using the user interface device (116);
perform segmentation of an organ from the rendered 3D model;
manipulate the segmented organ based on another input received from the surgeon and render the manipulated organ on the user interface device (116);
receive the actual position and orientation of the robotic surgical instruments from a master controller (126); and
map the received position and orientation of the robotic surgical instruments on the manipulated segmented organ;
wherein the organ segmentation enables the surgeon to pick a particular organ from the rendered 3D model.
2. The medical imaging analysis system (110) as claimed in claim 1, wherein the user interface device (116) is a graphical user interface.
3. The medical imaging analysis system (110) as claimed in claim 1, wherein the graphical processor (112) can be located anywhere in the operating room or remote.
4. The medical imaging analysis system (110) as claimed in claim 1, wherein the relevant data (128) may comprise of any diagnostic scan out of the available scans related to a particular patient.
5. The medical imaging analysis system (110) as claimed in claim 1, wherein the parsing comprises the steps of:
acquiring data by collecting DICOM files from various sources like hospital PACS servers, CDs, Pen drives, or local storage;
extracting metadata from each DICOM file;
organizing the DICOM files into slices of the same anatomical region based on the metadata;
sorting the slices in a correct volumetric order to ensure proper representation of the anatomy;
extracting of relevant metadata for each slice from the DICOM files;
validating the integrity of the metadata; and
storing and managing the data.
6. The medical imaging analysis system (110) as claimed in claim 1, wherein the metadata for each slice comprises the information of patient demographics, imaging modality, acquisition parameters, and image orientation.
7. The medical imaging analysis system (110) as claimed in claim 1, wherein the storing of the data is done as per the established volume standards.
8. The medical imaging analysis system (110) as claimed in claim 1, wherein the volume standards in medical imaging refer to the established specifications and guidelines for representing and storing volumetric data like CT, MRI, etc.
9. The medical imaging analysis system (110) as claimed in claim 1, wherein the volume standards comprise the steps of:
selecting suitable format of data for storing volumetric data;
representing volumetric data;
collecting metadata related to the volumetric data;
storing the compressed volumetric data;
defining protocols for transferring the volumetric data between different systems and software platforms;
validating the volumetric data;
checking interoperability among different medical imaging devices, software, and healthcare institutions; and
updating the volume standards.
10. The medical imaging analysis system (110) as claimed in claim 1, wherein the rendering of a 3D model can be done by ray marching based volumetric rendering.
11. The medical imaging analysis system (110) as claimed in claim 1, wherein the ray marching based volumetric rendering comprises the steps of:
preparing volume data to acquire scan data;
generating a ray from a viewpoint through the 3D volume;
sampling voxel values at regular intervals at each step;
mapping intensity values of voxels to color and opacity values for visualization;
combining the color and opacity values from the sampled voxels using a compositing operation;
enhancing the 3D visualization; and
rendering of a 3D output model.
12. The medical imaging analysis system (110) as claimed in claim 1, wherein the organ segmentation comprises the steps of:
preprocessing scan images;
selecting a region of interest;
thresholding by setting a threshold value;
selecting a seed region to initiate the segmentation;
expanding the segmented region by region growing;
detecting an edge in the scan images; and
refining the organ segmentation by adjusting the shape and size of the segmented regions.
13. The medical imaging analysis system (110) as claimed in claim 11, wherein the rendering of the 3D output model is based on deep learning techniques.
14. The medical imaging analysis system (110) as claimed in claim 12, wherein the thresholding utilizes advanced machine learning algorithms.
15. The medical imaging analysis system (110) as claimed in claim 11, wherein the system (110) offers a comprehensive suite of tools to be used by the surgeon, the tools including measurement tools for cuts, angles, path planning, port placement, assistance, and comparison, and automatic segmentation tools for identifying malignant parts within an image.

Documents

Application Documents

# Name Date
1 202311050594-STATEMENT OF UNDERTAKING (FORM 3) [27-07-2023(online)].pdf 2023-07-27
2 202311050594-PROVISIONAL SPECIFICATION [27-07-2023(online)].pdf 2023-07-27
3 202311050594-POWER OF AUTHORITY [27-07-2023(online)].pdf 2023-07-27
4 202311050594-FORM 1 [27-07-2023(online)].pdf 2023-07-27
5 202311050594-FIGURE OF ABSTRACT [27-07-2023(online)].pdf 2023-07-27
6 202311050594-DRAWINGS [27-07-2023(online)].pdf 2023-07-27
7 202311050594-DECLARATION OF INVENTORSHIP (FORM 5) [27-07-2023(online)].pdf 2023-07-27
8 202311050594-RELEVANT DOCUMENTS [01-08-2023(online)].pdf 2023-08-01
9 202311050594-MARKED COPIES OF AMENDEMENTS [01-08-2023(online)].pdf 2023-08-01
10 202311050594-FORM 13 [01-08-2023(online)].pdf 2023-08-01
11 202311050594-AMENDED DOCUMENTS [01-08-2023(online)].pdf 2023-08-01
12 202311050594-Proof of Right [03-08-2023(online)].pdf 2023-08-03
13 202311050594-Others-110823.pdf 2023-10-03
14 202311050594-GPA-110823.pdf 2023-10-03
15 202311050594-Correspondence-110823.pdf 2023-10-03
16 202311050594-PA [12-05-2024(online)].pdf 2024-05-12
17 202311050594-FORM28 [12-05-2024(online)].pdf 2024-05-12
18 202311050594-FORM FOR SMALL ENTITY [12-05-2024(online)].pdf 2024-05-12
19 202311050594-EVIDENCE FOR REGISTRATION UNDER SSI [12-05-2024(online)].pdf 2024-05-12
20 202311050594-ASSIGNMENT DOCUMENTS [12-05-2024(online)].pdf 2024-05-12
21 202311050594-8(i)-Substitution-Change Of Applicant - Form 6 [12-05-2024(online)].pdf 2024-05-12
22 202311050594-ENDORSEMENT BY INVENTORS [26-06-2024(online)].pdf 2024-06-26
23 202311050594-DRAWING [26-06-2024(online)].pdf 2024-06-26
24 202311050594-COMPLETE SPECIFICATION [26-06-2024(online)].pdf 2024-06-26
25 202311050594-Others-100724.pdf 2024-07-12
26 202311050594-GPA-100724.pdf 2024-07-12
27 202311050594-Correspondence-100724.pdf 2024-07-12
28 202311050594-MSME CERTIFICATE [15-07-2024(online)].pdf 2024-07-15
29 202311050594-FORM28 [15-07-2024(online)].pdf 2024-07-15
30 202311050594-FORM-9 [15-07-2024(online)].pdf 2024-07-15
31 202311050594-FORM 18A [15-07-2024(online)].pdf 2024-07-15
32 202311050594-Request Letter-Correspondence [06-08-2024(online)].pdf 2024-08-06
33 202311050594-Power of Attorney [06-08-2024(online)].pdf 2024-08-06
34 202311050594-FORM28 [06-08-2024(online)].pdf 2024-08-06
35 202311050594-Form 1 (Submitted on date of filing) [06-08-2024(online)].pdf 2024-08-06
36 202311050594-Covering Letter [06-08-2024(online)].pdf 2024-08-06
37 202311050594-FER.pdf 2024-10-16
38 202311050594-MARKED COPY [25-10-2024(online)].pdf 2024-10-25
39 202311050594-Information under section 8(2) [25-10-2024(online)].pdf 2024-10-25
40 202311050594-FORM 3 [25-10-2024(online)].pdf 2024-10-25
41 202311050594-CORRECTED PAGES [25-10-2024(online)].pdf 2024-10-25
42 202311050594-POA [12-02-2025(online)].pdf 2025-02-12
43 202311050594-MARKED COPIES OF AMENDEMENTS [12-02-2025(online)].pdf 2025-02-12
44 202311050594-FORM 13 [12-02-2025(online)].pdf 2025-02-12
45 202311050594-GPA-120325.pdf 2025-03-17
46 202311050594-Correspondence-120325.pdf 2025-03-17
47 202311050594-FER_SER_REPLY [28-03-2025(online)].pdf 2025-03-28
48 202311050594-DRAWING [28-03-2025(online)].pdf 2025-03-28
49 202311050594-CORRESPONDENCE [28-03-2025(online)].pdf 2025-03-28
50 202311050594-COMPLETE SPECIFICATION [28-03-2025(online)].pdf 2025-03-28
51 202311050594-CLAIMS [28-03-2025(online)].pdf 2025-03-28
52 202311050594-ABSTRACT [28-03-2025(online)].pdf 2025-03-28

Search Strategy

1 SearchHistory_202311050594E_04-09-2024.pdf