Abstract: A system for performing a registration for cranial navigation is disclosed. The system includes first tracker(s) and second tracker(s). The system also includes a processing subsystem which includes an input module (80) which receives visual media and predefined scan(s). The processing subsystem also includes an input processing module (110) which identifies a predefined pattern associated with at least one of the first tracker(s) and the second tracker(s), obtains at least one of first tracker coordinate(s) and second tracker coordinate(s) of at least one of the first tracker(s) and the second tracker(s) respectively in a first predefined coordinate system, obtains first fiducial coordinate(s) of the fiducial point(s) in the first predefined coordinate system, obtains second fiducial coordinate(s) of the fiducial point(s) in a second predefined coordinate system, obtains a correlation between the first predefined coordinate system and the second predefined coordinate system, and performs the registration for the cranial navigation. FIG. 1
Claims:1. A system (10) for performing a registration for cranial navigation, wherein the system (10) comprises:
one or more first trackers (20) fastened to a denture (30) attached to an upper jaw of a patient undergoing a surgery of at least one part of a head of the patient based on the cranial navigation, wherein the denture (30) comprises one or more fiducial points (40) located at a predefined region on the denture (30);
one or more second trackers (50) fastened to the head of the patient; and
a processing subsystem (60) hosted on a server (70), and configured to execute on a network to control bidirectional communications among a plurality of modules comprising:
an input module (80) configured to:
receive one or more visual media captured in real-time via a visual media capturing device (90), wherein the one or more visual media comprises a first illustration of at least one of the patient, the one or more first trackers (20), the one or more second trackers (50), and the one or more fiducial points (40); and
receive one or more predefined scans captured via a predefined scanning device (100), wherein the one or more predefined scans comprises a second illustration of at least one of the patient and the one or more fiducial points (40),
wherein the visual media capturing device (90) and the predefined scanning device (100) are operatively coupled to the processing subsystem (60); and
an input processing module (110) operatively coupled to the input module (80), wherein the input processing module (110) is configured to:
identify a predefined pattern associated with at least one of the one or more first trackers (20) and the one or more second trackers (50) using a predefined identification technique, upon receiving the one or more visual media;
obtain at least one of one or more first tracker coordinates and one or more second tracker coordinates of at least one of the one or more first trackers (20) and the one or more second trackers (50) respectively in a first predefined coordinate system upon identifying the respective predefined pattern in the corresponding one or more visual media;
obtain one or more first fiducial coordinates of the one or more fiducial points (40) in the first predefined coordinate system based on predefined criteria associated with the one or more first trackers (20);
obtain one or more second fiducial coordinates of the one or more fiducial points (40) in a second predefined coordinate system based on the one or more predefined scans;
obtain a correlation between the first predefined coordinate system and the second predefined coordinate system upon performing a first predefined computation on the one or more first fiducial coordinates and the one or more second fiducial coordinates of the corresponding fiducial points (40); and
perform the registration for the cranial navigation upon obtaining location information relative to the at least one part of the head of the patient based on the correlation obtained.
2. The system (10) as claimed in claim 1, wherein the one or more first trackers (20) are fastened to the denture (30) via a first predefined fastening mechanism, wherein the one or more second trackers (50) are fastened to the head via a second predefined fastening mechanism.
3. The system (10) as claimed in claim 1, wherein the one or more fiducial points (40) comprises at least one of one or more three-dimensional ballpoints and one or more holes.
4. The system (10) as claimed in claim 1, wherein the predefined identification technique comprises an optical pattern recognition technique.
5. The system (10) as claimed in claim 1, wherein the predefined criteria comprises a constant distance between the one or more first trackers (20) and the one or more fiducial points (40).
6. The system (10) as claimed in claim 1, wherein the processing subsystem (60) comprises a registration transfer module (170) operatively coupled to the input processing module (110), wherein the registration transfer module (170) is configured to:
obtain a first transformation relation between the one or more first tracker coordinates and the one or more second tracker coordinates by performing a second predefined computation on the one or more first tracker coordinates and the one or more second tracker coordinates upon performing the registration for the cranial navigation;
obtain a second transformation relation among the one or more first tracker coordinates with each other by performing a third predefined computation on the one or more first tracker coordinates upon obtaining the first transformation relation or performing the registration; and
share the location information obtained based on the registration, from the one or more first trackers (20) to the one or more second trackers (50), and among the one or more first trackers (20) from each other using the first transformation relation, and the second transformation relation respectively, during the surgery when the patient along with the one or more first trackers (20) are covered.
7. The system (10) as claimed in claim 6, wherein the processing subsystem (60) comprises an accuracy recovering module (180) operatively coupled to the registration transfer module (170), wherein the accuracy recovering module (180) is configured to recover an accuracy of the registration upon implementing a predefined accuracy recovering technique on one or more recovery fiducial points of the one or more fiducial points (40) when the one or more second trackers (50) are dislocated relative to a predefined location of the patient.
8. A method (230) for performing a registration for cranial navigation, wherein the method (230) comprises:
fastening one or more first trackers to a denture attached to an upper jaw of a patient undergoing a surgery of at least one part of a head of the patient based on the cranial navigation, wherein the denture comprises one or more fiducial points located at a predefined region on the denture; (240)
fastening one or more second trackers to the head of the patient; (250)
receiving, by an input module (80), one or more visual media captured in real-time via a visual media capturing device, wherein the one or more visual media comprises a first illustration of at least one of a patient, the one or more first trackers, the one or more second trackers, and the one or more fiducial points; (260)
receiving, by the input module (80), one or more predefined scans captured via a predefined scanning device, wherein the one or more predefined scans comprises a second illustration of at least one of the patient and the one or more fiducial points; (270)
identifying, by an input processing module (110), a predefined pattern associated with at least one of the one or more first trackers and the one or more second trackers using a predefined identification technique, upon receiving the one or more visual media; (280)
obtaining, by the input processing module (110), at least one of one or more first tracker coordinates and one or more second tracker coordinates of at least one of the one or more first trackers and the one or more second trackers respectively in a first predefined coordinate system upon identifying the respective predefined pattern in the corresponding one or more visual media; (290)
obtaining, by the input processing module (110), one or more first fiducial coordinates of the one or more fiducial points in the first predefined coordinate system based on predefined criteria associated with the one or more first trackers; (300)
obtaining, by the input processing module (110), one or more second fiducial coordinates of the one or more fiducial points in a second predefined coordinate system based on the one or more predefined scans; (310)
obtaining, by the input processing module (110), a correlation between the first predefined coordinate system and the second predefined coordinate system upon performing a first predefined computation on the one or more first fiducial coordinates and the one or more second fiducial coordinates of the corresponding fiducial points; and (320)
performing, by the input processing module (110), the registration for the cranial navigation upon obtaining location information relative to the at least one part of the head of the patient based on the correlation obtained (330).
9. The method (230) as claimed in claim 8, comprises:
obtaining, by a registration transfer module (170), a first transformation relation between the one or more first tracker coordinates and the one or more second tracker coordinates by performing a second predefined computation on the one or more first tracker coordinates and the one or more second tracker coordinates upon performing the registration for the cranial navigation;
obtaining, by the registration transfer module (170), a second transformation relation among the one or more first tracker coordinates with each other by performing a third predefined computation on the one or more first tracker coordinates upon obtaining the first transformation relation or performing the registration; and
sharing, by the registration transfer module (170), the location information obtained based on the registration, from the one or more first trackers to the one or more second trackers, and among the one or more first trackers from each other using the first transformation relation, and the second transformation relation respectively, during the surgery when the patient along with the one or more first trackers are covered.
10. The method (230) as claimed in claim 8, comprises recovering, by an accuracy recovering module (180), an accuracy of the registration upon implementing a predefined accuracy recovering technique on one or more recovery fiducial points of the one or more fiducial points when the one or more second trackers are dislocated relative to a predefined location of the patient.
Dated this 17th day of March 2022
Signature
Jinsu Abraham
Patent Agent (IN/PA-3267)
Agent for the Applicant

Description

FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to cranial navigation, and more particularly to a system and method for performing a registration for the cranial navigation.
BACKGROUND
[0002] Cranial navigation refers to a procedure of attempting to localize or determine a position of a target inside one or more parts of a head of a patient undergoing surgery. Image-guided surgery is used for performing cranial navigation. Further, registration refers to matching the preoperative image data to the current patient position, so that the surgeon virtually sees both the current situation and the imaging datasets overlapped and may then navigate on both. There are multiple approaches implemented to support registration for cranial navigation.
[0003] However, such multiple approaches involve manual intervention, as some of them use a probe for locating fiducial points in a camera coordinate system as a part of the registration process. Also, such multiple approaches are prone to human error and lack precision and accuracy, as a user may not place the probe exactly on the fiducial points which have already been detected in the radiographic scans. Also, there is a possibility that during the surgery the patient might accidentally slip, and the resulting movement of the patient may dislocate a tracker or the fiducial points, thereby hampering the accuracy of the registration. This leads to repeating the entire process of the registration from the beginning so that the cranial navigation can support the surgery, thereby making such multiple approaches less reliable and less efficient.
[0004] Hence, there is a need for an improved system and method for performing a registration for cranial navigation which addresses the aforementioned issues.
BRIEF DESCRIPTION
[0005] In accordance with one embodiment of the disclosure, a system for performing a registration for cranial navigation is provided. The system includes one or more first trackers fastened to a denture attached to an upper jaw of a patient undergoing a surgery of at least one part of a head of the patient based on the cranial navigation. The denture includes one or more fiducial points located at a predefined region on the denture. The system also includes one or more second trackers fastened to the head of the patient. Further, the system also includes a processing subsystem hosted on a server. The processing subsystem is configured to execute on a network to control bidirectional communications among a plurality of modules. The processing subsystem includes an input module. The input module is configured to receive one or more visual media captured in real-time via a visual media capturing device. The one or more visual media includes a first illustration of at least one of the patient, the one or more first trackers, the one or more second trackers, and the one or more fiducial points. The input module is also configured to receive one or more predefined scans captured via a predefined scanning device. The one or more predefined scans include a second illustration of at least one of the patient and the one or more fiducial points. The visual media capturing device and the predefined scanning device are operatively coupled to the processing subsystem. The processing subsystem also includes an input processing module operatively coupled to the input module. The input processing module is configured to identify a predefined pattern associated with at least one of the one or more first trackers and the one or more second trackers using a predefined identification technique, upon receiving the one or more visual media. The input processing module is also configured to obtain at least one of one or more first tracker coordinates and one or more second tracker coordinates of at least one of the one or more first trackers and the one or more second trackers respectively in a first predefined coordinate system upon identifying the respective predefined pattern in the corresponding one or more visual media. Further, the input processing module is also configured to obtain one or more first fiducial coordinates of the one or more fiducial points in the first predefined coordinate system based on predefined criteria associated with the one or more first trackers. Furthermore, the input processing module is also configured to obtain one or more second fiducial coordinates of the one or more fiducial points in a second predefined coordinate system based on the one or more predefined scans. Furthermore, the input processing module is also configured to obtain a correlation between the first predefined coordinate system and the second predefined coordinate system upon performing a first predefined computation on the one or more first fiducial coordinates and the one or more second fiducial coordinates of the corresponding fiducial points. Furthermore, the input processing module is also configured to perform the registration for the cranial navigation upon obtaining location information relative to the at least one part of the head of the patient based on the correlation obtained.
[0006] In accordance with another embodiment, a method for performing a registration for cranial navigation is provided. The method includes fastening one or more first trackers to a denture attached to an upper jaw of a patient undergoing a surgery of at least one part of a head of the patient based on the cranial navigation, wherein the denture includes one or more fiducial points located at a predefined region on the denture. The method also includes fastening one or more second trackers to the head of the patient. Further, the method also includes receiving one or more visual media captured in real-time via a visual media capturing device, wherein the one or more visual media includes a first illustration of at least one of a patient, the one or more first trackers, the one or more second trackers, and the one or more fiducial points. Furthermore, the method also includes receiving one or more predefined scans captured via a predefined scanning device, wherein the one or more predefined scans include a second illustration of at least one of the patient and the one or more fiducial points. Furthermore, the method also includes identifying a predefined pattern associated with at least one of the one or more first trackers and the one or more second trackers using a predefined identification technique, upon receiving the one or more visual media. Furthermore, the method also includes obtaining at least one of one or more first tracker coordinates and one or more second tracker coordinates of at least one of the one or more first trackers and the one or more second trackers respectively in a first predefined coordinate system upon identifying the respective predefined pattern in the corresponding one or more visual media. Furthermore, the method also includes obtaining one or more first fiducial coordinates of the one or more fiducial points in the first predefined coordinate system based on predefined criteria associated with the one or more first trackers. Furthermore, the method also includes obtaining one or more second fiducial coordinates of the one or more fiducial points in a second predefined coordinate system based on the one or more predefined scans. Furthermore, the method also includes obtaining a correlation between the first predefined coordinate system and the second predefined coordinate system upon performing a first predefined computation on the one or more first fiducial coordinates and the one or more second fiducial coordinates of the corresponding fiducial points. Furthermore, the method also includes performing the registration for the cranial navigation upon obtaining location information relative to the at least one part of the head of the patient based on the correlation obtained.
[0007] To further clarify the advantages and features of the present disclosure, a more particular description of the disclosure will follow by reference to specific embodiments thereof, which are illustrated in the appended figures. It is to be appreciated that these figures depict only typical embodiments of the disclosure and are therefore not to be considered limiting in scope. The disclosure will be described and explained with additional specificity and detail with the appended figures.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure will be described and explained with additional specificity and detail with the accompanying figures in which:
[0008] FIG. 1 is a schematic representation of a system for performing a registration for cranial navigation in accordance with an embodiment of the present disclosure;
[0009] FIG. 2 is a block diagram representation of an exemplary embodiment of a processing subsystem of the system of FIG. 1 in accordance with an embodiment of the present disclosure;
[0010] FIG. 3 is a schematic representation of an exemplary embodiment of the system for performing the registration for the cranial navigation of FIG. 1 in accordance with an embodiment of the present disclosure;
[0011] FIG. 4 is a block diagram of a registration computer or a registration server in accordance with an embodiment of the present disclosure; and
[0012] FIG. 5 is a flow chart representing steps involved in a method for performing a registration for cranial navigation in accordance with an embodiment of the present disclosure.
[0013] Further, those skilled in the art will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present disclosure so as not to obscure the figures with details that will be readily apparent to those skilled in the art having the benefit of the description herein.
DETAILED DESCRIPTION
[0014] For the purpose of promoting an understanding of the principles of the disclosure, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the disclosure is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the disclosure as would normally occur to those skilled in the art are to be construed as being within the scope of the present disclosure.
[0015] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices, sub-systems, elements, structures, components, additional devices, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrases "in an embodiment", "in another embodiment", and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0016] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this disclosure belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.
[0017] In the following specification and the claims, reference will be made to a number of terms, which shall be defined to have the following meanings. The singular forms “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[0018] Embodiments of the present disclosure relate to a system for performing a registration for cranial navigation. As used herein, the term “cranial navigation” refers to a procedure of attempting to localize or determine a position of a target inside one or more parts of a head of a patient undergoing surgery. In one embodiment, the target may include at least one of a tumor, an implant, a puncture, and the like. Basically, cranial navigation falls under image-guided surgery. As used herein, the term “image-guided surgery” is defined as any surgical procedure where the surgeon uses tracked surgical instruments in conjunction with preoperative or intraoperative images in order to directly or indirectly guide the procedure. Also, as used herein, the term “registration” refers to matching the preoperative image data to the current patient position, so that the surgeon virtually sees both the current situation and the imaging datasets overlapped and may then navigate on both. The system described hereafter in FIG. 1 is the system for performing the registration for the cranial navigation.
[0019] FIG. 1 is a schematic representation of a system (10) for performing a registration for cranial navigation in accordance with an embodiment of the present disclosure. Basically, in an embodiment, the registration for the cranial navigation may have to be done in order to obtain location information relative to at least one part of a head of a patient undergoing surgery of the corresponding at least one part, thereby helping a surgeon to perform the surgery easily. Thus, in order to do so, the system (10) includes one or more first trackers (20) fastened to a denture (30) attached to an upper jaw of the patient undergoing the surgery of at least one part of the head of the patient based on the cranial navigation. As used herein, the term “denture” refers to a removable plate or frame holding one or more artificial teeth. In one embodiment, the at least one part of the head may include a brain, an eye, an ear, a nose, a throat, a skull, a face, or the like. Also, the denture (30) includes one or more fiducial points (40) located at a predefined region on the denture (30). As used herein, the term “fiducial points” refers to markers positioned non-invasively on a part of the body of the patient, which get detected in a computed tomography (CT) scan and a magnetic resonance imaging (MRI) scan, wherein the corresponding part is undergoing surgery. In one exemplary embodiment, the one or more fiducial points (40) may include at least one of one or more three-dimensional (3-D) ballpoints, one or more holes, and the like.
[0020] In one embodiment, the one or more first trackers (20) may be fastened to the denture (30) via a first predefined fastening mechanism. As used herein, the term “tracker” refers to an element that possesses a pattern that can be tracked or detected by a pattern-recognizing device. In one embodiment, the first predefined fastening mechanism may include a mechanism that uses one or more tools such as, but not limited to, nuts, bolts, springs, clamps, adhesives, a mouth guard, and the like for fastening one or more parts together. In one embodiment, the one or more tools may be composed of a predefined material such as, but not limited to, resin. The system (10) also includes one or more second trackers (50) fastened to the head of the patient. In one embodiment, the one or more second trackers (50) may be fastened to the head via a second predefined fastening mechanism. In one exemplary embodiment, the second predefined fastening mechanism may include fastening the one or more second trackers (50) to the head of the patient via at least one head frame (55). In such embodiment, the one or more second trackers (50) may be fastened to the corresponding at least one head frame (55) using the first predefined fastening mechanism. In one exemplary embodiment, the at least one head frame (55) may include a SUGITA® head frame, a MAYFIELD® head frame, or the like. Further, upon fastening the one or more first trackers (20) and the one or more second trackers (50), the system (10) may have to obtain the location information with the help of the corresponding one or more first trackers (20) and the one or more second trackers (50). Thus, the system (10) also includes a processing subsystem (60) hosted on a server (70) (as shown in FIG. 2).
[0021] FIG. 2 is a block diagram representation of an exemplary embodiment of the processing subsystem (60) of the system (10) of FIG. 1 in accordance with an embodiment of the present disclosure. As the processing subsystem (60) is hosted on the server (70), in one embodiment, the server (70) may include a cloud server. In another embodiment, the server (70) may include a local server. The processing subsystem (60) is configured to execute on a network (not shown in FIG. 1) to control bidirectional communications among a plurality of modules. In one embodiment, the network may include a wired network such as a local area network (LAN). In another embodiment, the network may include a wireless network such as Wi-Fi, Bluetooth, Zigbee, near field communication (NFC), infrared (IR) communication, radio frequency identification (RFID), or the like. The processing subsystem (60) includes an input module (80). The input module (80) is configured to receive one or more visual media captured in real-time via a visual media capturing device (90). The one or more visual media includes a first illustration of at least one of the patient, the one or more first trackers (20), the one or more second trackers (50), and the one or more fiducial points (40). In one embodiment, the one or more visual media may include at least one of an image, a video, and the like. Moreover, as used herein, the term “first illustration” refers to a stereo view, a 3-D picture view, a view needed for range imaging, or the like of an external view of the item whose image is being captured. In one embodiment, the visual media capturing device (90) may include a stereoscopic vision camera. As used herein, the term “stereoscopic vision camera” is defined as a type of camera with two or more lenses with a separate image sensor or film frame for each lens.
[0022] The input module (80) is also configured to receive one or more predefined scans captured via a predefined scanning device (100). The one or more predefined scans include a second illustration of at least one of the patient and the one or more fiducial points (40). In one embodiment, the one or more predefined scans may include at least one of the CT scan and the MRI scan. Moreover, as used herein, the term “second illustration” refers to the internal view of an item whose image is being captured. In an embodiment, when the item corresponds to a human body, then the internal view may correspond to a view of a brain, a skull, bones, and the like. In one embodiment, the predefined scanning device (100) may include at least one of a CT scanner and an MRI scanner. The visual media capturing device (90) and the predefined scanning device (100) are operatively coupled to the processing subsystem (60). In one exemplary embodiment, the one or more predefined scans may be captured by the predefined scanning device (100) and may be stored in one or more storage devices such as, but not limited to, a digital versatile disc (DVD), a pen drive, or the like. Then, the corresponding one or more storage devices may be used to provide the one or more predefined scans to the corresponding processing subsystem (60).
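By way of a non-limiting illustration of how fiducial locations read from such a predefined scan map into the scanned image coordinate system discussed later, the following Python sketch (assuming a CT volume converted to NIfTI and the nibabel library; the file name and voxel indices are hypothetical) converts a detected fiducial's voxel index into scanner/world coordinates via the image affine:

```python
import numpy as np
import nibabel as nib  # common library for reading NIfTI-converted CT/MRI volumes

# Hypothetical CT volume, converted from DICOM to NIfTI beforehand.
img = nib.load("head_ct.nii.gz")

# Hypothetical voxel index (i, j, k) at which a fiducial ballpoint was
# detected, written as a homogeneous coordinate.
ijk = np.array([124.0, 87.0, 42.0, 1.0])

# The 4x4 affine maps voxel indices into the scanner (world) coordinate
# system, i.e. a concrete example of the "second predefined coordinate system".
world = img.affine @ ijk
print("Fiducial in scan coordinates (mm):", world[:3])
```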
[0023] The processing subsystem (60) also includes an input processing module (110) operatively coupled to the input module (80). The input processing module (110) is configured to identify a predefined pattern associated with at least one of the one or more first trackers (20) and the one or more second trackers (50) using a predefined identification technique, upon receiving the one or more visual media. In one embodiment, the predefined pattern may include an arrangement of black and white colored patterns in a first predefined order, one or more colors other than black and white arranged in a second predefined order, or the like. In one embodiment, the predefined identification technique may include an optical pattern recognition technique. As used herein, the term “optical pattern recognition technique” is defined as a technique that uses the stereoscopic vision camera to detect and track specially marked objects. Basically, in an embodiment, the optical pattern recognition technique may use an image processing technique. In one embodiment, the optical pattern recognition technique may include an infrared (IR)-based tracking technique. In another embodiment, the optical pattern recognition technique may include a video-based tracking technique. In yet another embodiment, the optical pattern recognition technique may include a reflective marker ball-based tracking technique, a light emitting diode (LED)-based tracking technique, or the like.
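The disclosure does not mandate a particular recognition algorithm. As one hedged example of such an optical pattern recognition step, the sketch below (Python with OpenCV; the file names and the 0.8 threshold are hypothetical, and template matching is merely one of the video-based techniques mentioned above) locates a stored black-and-white tracker pattern in a captured frame by normalized cross-correlation:

```python
import cv2

# Hypothetical inputs: a captured camera frame and a stored image of a
# tracker's predefined black-and-white pattern.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
pattern = cv2.imread("tracker_pattern.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation: the response map peaks where the pattern matches.
response = cv2.matchTemplate(frame, pattern, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

if max_val > 0.8:  # hypothetical confidence threshold
    print("Tracker pattern found at pixel", max_loc, "with score", max_val)
else:
    print("Tracker pattern not detected")
```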
[0024] Upon identifying the predefined pattern, a location of the corresponding predefined pattern needs to be obtained. Thus, the input processing module (110) is also configured to obtain at least one of one or more first tracker coordinates and one or more second tracker coordinates of at least one of the one or more first trackers (20) and the one or more second trackers (50) respectively in a first predefined coordinate system upon identifying the respective predefined pattern in the corresponding one or more visual media. In one embodiment, the one or more first tracker coordinates and the one or more second tracker coordinates may be one or more 3-D coordinates with values for the x, y, and z axes in a predefined 3-D space. In one specific embodiment, the first predefined coordinate system may include a camera coordinate system in a camera space. As used herein, the term “camera coordinate system” is defined as a coordinate system in the camera space.
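A stereoscopic camera recovers such 3-D coordinates by triangulating matched pixel observations from its two lenses. The following self-contained sketch (pure NumPy; the projection matrices use identity intrinsics for brevity, and all numeric values are hypothetical placeholders rather than calibration data from the disclosure) applies the standard direct linear transform:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Intersect two viewing rays by the direct linear transform (DLT).

    P1, P2 : 3x4 projection matrices of the stereo pair.
    uv1, uv2 : matched pixel coordinates (u, v) of the same tracker feature.
    Returns the 3-D point in the camera (first predefined) coordinate system.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)  # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Hypothetical stereo pair: left camera at the origin, right camera offset
# along x (identity intrinsics for brevity).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
point = triangulate(P1, P2, np.array([320.0, 240.0]), np.array([300.0, 240.0]))
print("Tracker feature in camera coordinates:", point)
```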
[0025] Similarly, the input processing module (110) is also configured to obtain one or more first fiducial coordinates of the one or more fiducial points (40) in the first predefined coordinate system based on predefined criteria associated with the one or more first trackers (20). In one embodiment, the one or more first fiducial coordinates may include one or more 3-D coordinates. Moreover, in an embodiment, the predefined criteria may include a constant distance between the one or more first trackers (20) and the one or more fiducial points (40). Thus, as the distance between the one or more first trackers (20) and the one or more fiducial points (40) is constant and known, upon obtaining the location of the at least one of the one or more first trackers (20) and the one or more second trackers (50), a location of the one or more fiducial points (40) may also be obtained.
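Because the denture geometry fixes the tracker-to-fiducial offset, a first fiducial coordinate follows directly from the tracked pose of the first tracker. A minimal NumPy sketch under that assumption (all numeric values are hypothetical):

```python
import numpy as np

# Hypothetical tracked pose of a first tracker in the camera coordinate
# system: rotation R_t (3x3) and position t_t (3,) recovered from its pattern.
R_t = np.eye(3)
t_t = np.array([0.10, 0.02, 0.45])  # metres

# Constant, pre-measured offset of a fiducial ballpoint in the tracker's own
# frame; this embodies the "predefined criteria" of a constant distance.
fiducial_in_tracker = np.array([0.015, -0.030, 0.000])

# First fiducial coordinate in the first predefined (camera) coordinate system.
fiducial_cam = R_t @ fiducial_in_tracker + t_t
print("Fiducial in camera coordinates:", fiducial_cam)
```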
[0026] Subsequently, the input processing module (110) is also configured to obtain one or more second fiducial coordinates of the one or more fiducial points (40) in a second predefined coordinate system based on the one or more predefined scans. In one embodiment, the one or more second fiducial coordinates may include one or more 3-D coordinates. Also, in a specific embodiment, the second predefined coordinate system may include a scanned image coordinate system. As used herein, the term “scanned image coordinate system” is defined as a coordinate system in a predefined space used for obtaining a location of objects in the predefined scanning device space. Upon obtaining a location of the one or more fiducial points (40) in the first predefined coordinate system and the second predefined coordinate system, a relation between the first predefined coordinate system and the second predefined coordinate system may be obtained. Thus, the input processing module (110) is also configured to obtain a correlation between the first predefined coordinate system and the second predefined coordinate system upon performing a first predefined computation on the one or more first fiducial coordinates and the one or more second fiducial coordinates of the corresponding fiducial points (40). In one embodiment, the first predefined computation may include performing one or more arithmetic operations, one or more algebraic operations, one or more trigonometric operations, one or more transformation operations, or the like.
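One widely used instance of such a first predefined computation (offered here only as an illustration, not as the claimed computation) is the Kabsch/SVD solution of the rigid alignment problem, which yields the rotation and translation that best map the camera-space fiducials onto the scan-space fiducials. A self-contained NumPy sketch with hypothetical coordinates:

```python
import numpy as np

def rigid_registration(cam_pts, scan_pts):
    """Least-squares rigid transform (Kabsch): scan ~ R @ cam + t.

    cam_pts, scan_pts : (N, 3) arrays of the SAME fiducial points in the
    first (camera) and second (scan) predefined coordinate systems, N >= 3.
    """
    c_cam, c_scan = cam_pts.mean(axis=0), scan_pts.mean(axis=0)
    H = (cam_pts - c_cam).T @ (scan_pts - c_scan)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_scan - R @ c_cam
    return R, t

# Hypothetical matched coordinates of three fiducial ballpoints.
cam = np.array([[0.00, 0.00, 0.00], [0.05, 0.00, 0.00], [0.00, 0.04, 0.00]])
rot_z = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
scan = cam @ rot_z.T + np.array([0.1, 0.2, 0.3])
R, t = rigid_registration(cam, scan)   # recovers rot_z and the translation
```

The recovered pair (R, t) plays the role of the "correlation": it converts any point expressed in the first predefined coordinate system into the second.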
[0027] Upon obtaining the correlation between the first predefined coordinate system and the second predefined coordinate system, the registration for the cranial navigation may be performed. Thus, the input processing module (110) is also configured to perform the registration for the cranial navigation upon obtaining location information relative to the at least one part of the head of the patient based on the correlation obtained. In one embodiment, the location information relative to the at least one part of the head of the patient may include proper positioning of the at least one part of the head of the patient so that the surgeon can easily navigate to a location within the at least one part of the head of the patient which is supposed to undergo surgery. Thus, in an embodiment, when the surgeon is trying to navigate to the corresponding location, the surgeon may use a probe having the predefined pattern. The predefined pattern may be tracked via the visual media capturing device (90) with the one or more first tracker coordinates in the first predefined coordinate system as a reference. Further, the correlation obtained by the input processing module (110) may be used to obtain a location of the probe in the second predefined coordinate system when the one or more predefined scans captured via the predefined scanning device (100) are displayed on a screen. As the one or more predefined scans show the internal view, when the surgeon moves the probe, a corresponding movement is observed in the one or more predefined scans and a corresponding internal view to which the probe is pointing is also observed. Thus, the surgeon can easily navigate to the location within the at least one part of the head of the patient which is supposed to undergo surgery via the probe.
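Continuing the hypothetical (R, t) from the sketch above, mapping the tracked probe tip into the scan coordinate system reduces to a single matrix-vector operation:

```python
import numpy as np

# Hypothetical correlation obtained from the registration step.
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t = np.array([0.1, 0.2, 0.3])

# Tracked probe tip in the camera coordinate system (hypothetical value).
probe_cam = np.array([0.02, 0.01, 0.40])

# The same point in the scan coordinate system, ready to be overlaid on the
# displayed CT/MRI slice the surgeon is navigating on.
probe_scan = R @ probe_cam + t
print("Probe tip in scan coordinates:", probe_scan)
```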
[0028] Basically, upon registration, the next step is to initiate the surgery. However, during the surgery, the head of the patient is covered, and along with the head, the one or more first trackers (20) may also get covered. Now, because of this, the registration may get hampered. Thus, in order to overcome this, as the one or more second trackers (50) are still visible to the visual media capturing device (90), in one exemplary embodiment, the processing subsystem (60) may also include a registration transfer module (170) (as shown in FIG. 3) operatively coupled to the input processing module (110). The registration transfer module may be configured to obtain a first transformation relation between the one or more first tracker coordinates and the one or more second tracker coordinates by performing a second predefined computation on the one or more first tracker coordinates and the one or more second tracker coordinates upon performing the registration for the cranial navigation. The registration transfer module may also be configured to obtain a second transformation relation among the one or more first tracker coordinates with each other by performing a third predefined computation on the one or more first tracker coordinates upon obtaining the first transformation relation or performing the registration. Further, the registration transfer module may also be configured to share the location information obtained based on the registration, from the one or more first trackers (20) to the one or more second trackers (50), and among the one or more first trackers (20) from each other using the first transformation relation, and the second transformation relation respectively, during the surgery when the patient along with the one or more first trackers (20) are covered. In one embodiment, the second predefined computation may include performing one or more arithmetic operations, one or more algebraic operations, one or more trigonometric operations, one or more transformation operations, or the like. Also, in one embodiment, the third predefined computation may include performing one or more arithmetic operations, one or more algebraic operations, one or more trigonometric operations, one or more transformation operations, or the like. Now, with the help of the one or more second trackers (50), or the one or more first trackers (20) which are uncovered, the surgeon may perform the surgery based on the cranial navigation.
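Conceptually, the transfer is a composition of rigid transforms: if the scan-from-first-tracker relation is known from the registration, and the first-from-second-tracker relation is measured while both trackers are still visible, their product is a scan-from-second-tracker relation that survives the draping of the first trackers. A sketch using 4x4 homogeneous matrices (NumPy; all values hypothetical):

```python
import numpy as np

def make_T(R, t):
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical relations, both determined before the patient is draped:
# scan <- first tracker (from the registration), and
# first tracker <- second tracker (the "first transformation relation").
T_scan_first = make_T(np.eye(3), np.array([0.1, 0.2, 0.3]))
T_first_second = make_T(np.eye(3), np.array([0.0, -0.05, 0.02]))

# Composition: scan <- second tracker. Once the first trackers are covered,
# the still-visible second trackers alone carry the registration.
T_scan_second = T_scan_first @ T_first_second
print(T_scan_second)
```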
[0029] Suppose that during the surgery the patient accidentally slips and hence the one or more second trackers (50) get displaced, thereby hampering the registration and reducing its accuracy. Thus, in order to overcome this, the processing subsystem (60) may also include an accuracy recovering module (180) (as shown in FIG. 3) operatively coupled to the registration transfer module. The accuracy recovering module may be configured to recover an accuracy of the registration upon implementing a predefined accuracy recovering technique on one or more recovery fiducial points of the one or more fiducial points (40) when the one or more second trackers (50) are dislocated relative to a predefined location of the patient. In one embodiment, the predefined accuracy recovering technique may include placing the probe on top of the one or more recovery fiducial points of the one or more fiducial points (40), wherein the one or more recovery fiducial points of the one or more fiducial points (40) can be sensed by the probe even when they are under the cover used to cover the head of the patient. When the probe is placed on top of the one or more recovery fiducial points of the one or more fiducial points (40), a location of the probe is tracked via the visual media capturing device (90). Further, the correlation is obtained between the first predefined coordinate system and the second predefined coordinate system, and hence the registration may be established again with respect to the one or more recovery fiducial points of the one or more fiducial points (40), thereby recovering the accuracy of the registration.
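After re-establishing the correlation from the recovery fiducials, a residual check, commonly called the fiducial registration error (FRE), indicates whether accuracy was in fact recovered. A minimal sketch (NumPy; the coordinates and the 2 mm tolerance are illustrative assumptions, not values from the disclosure):

```python
import numpy as np

# Hypothetical recovered correlation and matched recovery-fiducial coordinates.
R = np.eye(3)
t = np.array([0.001, 0.000, -0.002])
cam = np.array([[0.00, 0.00, 0.00], [0.05, 0.00, 0.00], [0.00, 0.04, 0.00]])
scan = cam + np.array([0.001, 0.0005, -0.002])

# Root-mean-square distance between mapped camera points and scan points.
residuals = (cam @ R.T + t) - scan
fre = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(f"Fiducial registration error: {fre * 1000:.2f} mm")
if fre < 0.002:  # illustrative 2 mm tolerance
    print("Registration accuracy recovered")
```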
[0030] FIG. 3 is a schematic representation of an exemplary embodiment of the system (10) for performing the registration for the cranial navigation of FIG. 1 in accordance with an embodiment of the present disclosure. Suppose a patient ‘A’ (120) has a tumor in the brain and hence is admitted to a hospital ‘B’ (130) to undergo surgery for the same. So, initially, a CT scan of the head (135) of the patient ‘A’ (120) is obtained upon passing the patient ‘A’ (120) through a CT scanner (140). Thus, in the CT scan, the tumor is visible and now a location of the tumor in the CT scan is known. However, before taking the CT scan, the denture (30), carrying the one or more first trackers (20) and the one or more 3-D ballpoints (150), is attached to the upper jaw of the patient ‘A’ (120). Now, upon taking the CT scan, the one or more 3-D ballpoints (150) are also visible in the corresponding CT scan. Later, the patient ‘A’ (120) is taken to an operation theatre (OT) where the surgery needs to be initiated. Here, along with the one or more first trackers (20), the one or more second trackers (50) are fastened to the patient ‘A’ (120) at the head (135) of the patient ‘A’ (120) via the at least one head frame (55). Later, one or more images are captured via the stereoscopic vision camera (160).
[0031] Suppose the hospital ‘B’ (130) is using the system (10) proposed in the present disclosure for registration for the cranial navigation. The system (10) includes the processing subsystem (60) hosted on the server (70). Then, the one or more images and the CT scan are provided to the system (10) via the input module (80). Later, upon identifying the predefined pattern on the one or more first trackers (20) and the one or more second trackers (50), the one or more first tracker coordinates and the one or more second tracker coordinates in the camera coordinate system are obtained via the input processing module (110). Now, a distance between the one or more first trackers (20) and the one or more 3-D ballpoints (150) is constant. Hence, the one or more first fiducial coordinates of the one or more 3-D ballpoints (150) in the camera coordinate system are obtained based on the one or more first tracker coordinates. Then, from the CT scan, the one or more second fiducial coordinates of the one or more 3-D ballpoints (150) are obtained via the input processing module (110) in the scanned image coordinate system. Further, for the registration purpose, the correlation between the camera coordinate system and the scanned image coordinate system is obtained upon performing the first predefined computation on the one or more first fiducial coordinates and the one or more second fiducial coordinates via the input processing module (110). Later, the registration is transferred from the one or more first trackers (20) to the one or more second trackers (50) via the registration transfer module (170) because during the surgery, the head (135) of the patient ‘A’ (120) along with the one or more first trackers (20) may be covered and only the one or more second trackers (50) are visible. Moreover, during the surgery, the head (135) of the patient ‘A’ (120) slipped and the accuracy of the registration was lost. Then, the accuracy of the registration is recovered via the accuracy recovering module (180) upon tracking the one or more recovery fiducial points of the one or more fiducial points (40) using the probe.
[0032] FIG. 4 is a block diagram of a registration computer or a registration server (190) in accordance with an embodiment of the present disclosure. The registration server (190) includes processor(s) (200), and memory (210) operatively coupled to a bus (220). The processor(s) (200), as used herein, means any type of computational circuit, such as, but not limited to, a microprocessor, a microcontroller, a complex instruction set computing microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, an explicitly parallel instruction computing microprocessor, a digital signal processor, or any other type of processing circuit, or a combination thereof.
[0033] Computer memory elements may include any suitable memory device(s) for storing data and executable programs, such as read only memory, random access memory, erasable programmable read only memory, electrically erasable programmable read only memory, hard drive, removable media drive for handling memory cards, and the like. Embodiments of the present subject matter may be implemented in conjunction with program modules, including functions, procedures, data structures, and application programs, for performing tasks, or defining abstract data types or low-level hardware contexts. Executable programs stored on any of the above-mentioned storage media may be executed by the processor(s) (200).
[0034] The memory (210) includes a plurality of subsystems stored in the form of executable programs which instruct the processor(s) (200) to perform the method steps illustrated in FIG. 5. The memory (210) includes the processing subsystem (60) of FIG. 1. The processing subsystem (60) further has the following modules: an input module (80) and an input processing module (110).
[0035] The input module (80) is configured to receive one or more visual media captured in real-time via a visual media capturing device (90), wherein the one or more visual media include a first illustration of at least one of the patient, the one or more first trackers (20), the one or more second trackers (50), and the one or more fiducial points (40). The input module (80) is also configured to receive one or more predefined scans captured via a predefined scanning device (100), wherein the one or more predefined scans include a second illustration of at least one of the patient and the one or more fiducial points (40). The visual media capturing device (90) and the predefined scanning device (100) are operatively coupled to the processing subsystem (60).
[0036] The input processing module (110) is configured to identify a predefined pattern associated with at least one of the one or more first trackers (20) and the one or more second trackers (50) using a predefined identification technique, upon receiving the one or more visual media. The input processing module (110) is also configured to obtain at least one of one or more first tracker coordinates and one or more second tracker coordinates of at least one of the one or more first trackers (20) and the one or more second trackers (50) respectively in a first predefined coordinate system upon identifying the respective predefined pattern in the corresponding one or more visual media. The input processing module (110) is also configured to obtain one or more first fiducial coordinates of the one or more fiducial points (40) in the first predefined coordinate system based on predefined criteria associated with the one or more first trackers (20). The input processing module (110) is also configured to obtain one or more second fiducial coordinates of the one or more fiducial points (40) in a second predefined coordinate system based on the one or more predefined scans.
[0037] The input processing module (110) is also configured to obtain a correlation between the first predefined coordinate system and the second predefined coordinate system upon performing a first predefined computation on the one or more first fiducial coordinates and the one or more second fiducial coordinates of the corresponding fiducial points (40). The input processing module (110) is also configured to perform the registration for the cranial navigation upon obtaining location information relative to the at least one part of the head of the patient based on the correlation obtained.
[0038] FIG. 5 is a flow chart representing steps involved in a method (230) for performing a registration for cranial navigation in accordance with an embodiment of the present disclosure. The method (230) includes fastening one or more first trackers to a denture attached to an upper jaw of a patient undergoing a surgery of at least one part of a head of the patient based on the cranial navigation, wherein the denture includes one or more fiducial points located at a predefined region on the denture in step 240.
[0039] The method (230) also includes fastening one or more second trackers to the head of the patient in step 250.
[0040] Furthermore, the method (230) includes receiving one or more visual media captured in real-time via a visual media capturing device, wherein the one or more visual media includes a first illustration of at least one of a patient, the one or more first trackers, the one or more second trackers, and the one or more fiducial points in step 260. In one embodiment, receiving the one or more visual media may include receiving the one or more visual media by an input module (80).
[0041] Furthermore, the method (230) also includes receiving one or more predefined scans captured via a predefined scanning device, wherein the one or more predefined scans include a second illustration of at least one of the patient and the one or more fiducial points in step 270. In one embodiment, receiving the one or more predefined scans may include receiving the one or more predefined scans by the input module (80).
[0042] Furthermore, the method (230) also includes identifying a predefined pattern associated with at least one of the one or more first trackers and the one or more second trackers using a predefined identification technique, upon receiving the one or more visual media in step 280. In one embodiment, identifying the predefined pattern may include identifying the predefined pattern by an input processing module (110).
[0043] Furthermore, the method (230) also includes obtaining at least one of one or more first tracker coordinates and one or more second tracker coordinates of at least one of the one or more first trackers and the one or more second trackers respectively in a first predefined coordinate system upon identifying the respective predefined pattern in the corresponding one or more visual media in step 290. In one embodiment, obtaining at least one of the one or more first tracker coordinates and the one or more second tracker coordinates in the first predefined coordinate system may include obtaining at least one of the one or more first tracker coordinates and the one or more second tracker coordinates in the first predefined coordinate system by the input processing module (110).
[0044] Furthermore, the method (230) also includes obtaining one or more first fiducial coordinates of the one or more fiducial points in the first predefined coordinate system based on predefined criteria associated with the one or more first trackers in step 300. In one embodiment, obtaining the one or more first fiducial coordinates in the first predefined coordinate system may include obtaining the one or more first fiducial coordinates in the first predefined coordinate system by the input processing module (110).
[0045] Furthermore, the method (230) also includes obtaining one or more second fiducial coordinates of the one or more fiducial points in a second predefined coordinate system based on the one or more predefined scans in step 310. In one embodiment, obtaining the one or more second fiducial coordinates in the second predefined coordinate system may include obtaining the one or more second fiducial coordinates in the second predefined coordinate system by the input processing module (110).
[0046] Furthermore, the method (230) also includes obtaining a correlation between the first predefined coordinate system and the second predefined coordinate system upon performing a first predefined computation on the one or more first fiducial coordinates and the one or more second fiducial coordinates of the corresponding fiducial points in step 320. In one embodiment, obtaining the correlation between the first predefined coordinate system and the second predefined coordinate system may include obtaining the correlation between the first predefined coordinate system and the second predefined coordinate system by the input processing module (110).
[0047] Furthermore, the method (230) also includes performing the registration for the cranial navigation upon obtaining location information relative to the at least one part of the head of the patient based on the correlation obtained in step 330. In one embodiment, performing the registration may include performing the registration by the input processing module (110).
[0048] In one exemplary embodiment, the method (230) may also include obtaining a first transformation relation between the one or more first tracker coordinates and the one or more second tracker coordinates by performing a second predefined computation on the one or more first tracker coordinates and the one or more second tracker coordinates upon performing the registration for the cranial navigation. In such embodiment, obtaining the first transformation relation may include obtaining the first transformation relation by a registration transfer module (170).
[0049] Further, in an embodiment, the method (230) may also include obtaining a second transformation relation among the one or more first tracker coordinates with each other by performing a third predefined computation on the one or more first tracker coordinates upon obtaining the first transformation relation or performing the registration. In such embodiment, obtaining the second transformation relation may include obtaining the second transformation relation by the registration transfer module (170).
[0050] Furthermore, in one embodiment, the method (230) may also include sharing the location information obtained based on the registration, from the one or more first trackers to the one or more second trackers, and among the one or more first trackers from each other using the first transformation relation, and the second transformation relation respectively, during the surgery when the patient along with the one or more first trackers are covered. In such embodiment, sharing the location information may include sharing the location information by the registration transfer module (170).
[0051] In one exemplary embodiment, the method (230) may further include recovering an accuracy of the registration upon implementing a predefined accuracy recovering technique on one or more recovery fiducial points of the one or more fiducial points when the one or more second trackers are dislocated relative to a predefined location of the patient. In such embodiment, recovering the accuracy may include recovering the accuracy by an accuracy recovering module (180).
[0052] Further, from a technical effect point of view, the implementation time required by the one or more processors of the system to perform the method steps of the present disclosure is minimal, so the system maintains low operational latency and has minimal processing requirements.
[0053] Various embodiments of the present disclosure enable the registration for the cranial navigation without any human intervention, thereby making the process quick and automatic. Also, the presence of more than a single tracker makes the system more efficient and more reliable, as the registration performed with respect to a first tracker can be transferred to a second tracker. Further, the transferred registration can be used when the first tracker is covered during the surgery and only the second tracker is visible. Also, the system enables recovery of the accuracy of the registration that might otherwise be lost when a tracker is dislocated during the surgery, thereby making the system more efficient.
[0054] While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0055] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, order of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts need to be necessarily performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.