
Systems And Methods For Point Cloud Registration Using Surface Matching

Abstract: The present invention relates generally to semi-automatic methods for registration of point clouds to physical objects, and particularly, to a system and method of using an augmented reality interface for achieving robust registration. In one embodiment, the method comprises: acquiring at least one reference surface representation of a physical object, wherein the reference surface representation is acquired by importing imaging data of the physical object 110; rendering a virtual apparition of the at least one reference surface representation alongside the physical object through an augmented reality display to a user 120; manipulating the virtual apparition of the at least one reference surface representation through an interactive hand-held probe to visually align it with the real-time physical object; and obtaining a transform between the reference surface representation and the physical object 130. Figure 3 (for publication)


Patent Information

Application #
201821030991
Filing Date
18 August 2018
Publication Number
08/2020
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
info@krishnaandsaurastri.com
Parent Application

Applicants

Cartosense Private Limited
Flat No.2, Vijay-Kiran Apt, Plot No.187, Shama Prasad Mukharji Rd, Tidke Colony, Nashik - 422002

Inventors

1. Hem Rampal
D-29, Dayanand Block, Shakarpur, Delhi – 110092
2. Nikhil Chandwadkar
4, Vrindavan, Shastri nagar soc., Indira nagar, Nashik - 422009
3. Bhargava Chintalapati
12-2-823/A/34/1, Plot No 34, Santosh Nagar Colony, Mehdipatnam, Hyderabad, Telangana - 500028

Specification

Field of the invention

The present invention relates generally to semi-automatic methods for registration of point clouds to physical objects, and particularly, to a system and method of using an augmented reality interface for achieving robust registration.

Background of the invention

Image guided neurosurgery involves aligning pre-operative CT/MRI image volumes with patient anatomy and tracking the position and orientation of surgical instruments with respect to the patient to provide real-time intraoperative localization. Image guided surgery helps the surgeon to make smaller incisions, avoid critical anatomical structures during surgery and reduce trauma and thereby improve patient outcomes.
Alignment of volumetric pre-operative CT/MRI data with the patient anatomy is called registration and is a prerequisite for image guidance. The efficacy of image guidance depends entirely on the accuracy of registration.
The most commonly used method for registration in image guided neurosurgery, which is referred to here as surface trace matching, involves collecting a measurement cloud of points from the actual patient’s skin surface, and then aligning it to a triangle mesh or point cloud of the skin surface extracted from CT/MRI imaging data. Registration methods generally rely on a two-step process: initialization and refinement.
The user first performs pairwise selection of predetermined landmark points from the virtual head surface (extracted from imaging data) and the respective corresponding landmark points on the actual patient's head surface. However, since the virtual head surface is displayed on a 2D monitor kept at a distance from the operating field, the surgeon must rely completely on lighting-based cues to give the impression of depth. This makes the process of picking landmarks error-prone. The surgeon also needs to frequently shift his/her gaze between the patient's head and the virtual model while selecting the landmarks. This too increases the error in the landmarks picked. Thus, registration performed using landmarks serves only to give an approximate initial estimate of the actual transform, which is refined in the next step.
The user next collects a cloud of points by tracing a tracked physical probe tip on the patient's skin surface. The acquired point cloud (also called the probe trajectory) and the point cloud of the skin surface extracted from the imaging data are aligned using local search techniques such as the Iterative Closest Point (ICP) algorithm. The ICP algorithm, for example, uses the initial transform obtained in the last step to roughly align the probe trajectory with the extracted skin surface and then iterates over the following two steps: first, find correspondences by assuming that each point in the probe trajectory corresponds to the point on the extracted skin surface closest to it; second, find the transformation which minimizes the distance between the corresponding points.
Local search techniques like the ICP algorithm have the drawback that their results are affected by inaccuracy in the initial transform. This becomes a problem especially when registering imaging data with the patient in the prone operating position. Because the patient is positioned face-down on the operating table, the surgeon can only access the back of the patient's head. Since the back of the head does not have any prominent landmarks, reliably determining a good initial transform is difficult. Also, because the back of the head is largely symmetric and generally lacks features, the solutions to the iterative alignment problem are not unique, and it becomes difficult to find a globally optimal solution by local search techniques.
Global search techniques for aligning point clouds, like Super4PCS, genetic algorithms and branch-and-bound methods, are preferable over local search techniques like ICP and its variants because they can locate the globally optimal solution even in the presence of multiple local minima. However, global techniques are rendered computationally impractical if the search space is too large, which is generally the case without the availability of a good initial transform. For these reasons, existing methods suffer from surface trace matching registration that is unreliable and inaccurate, especially when the patient is placed in the prone or lateral operating positions.
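For concreteness, the following Python sketch shows the classic two-step ICP loop described above. It is a minimal illustration, assuming numpy/scipy are available and that `trajectory` and `surface` are N×3 arrays; the names and structure are assumptions for exposition, not a reproduction of any particular navigation system's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points onto dst points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(trajectory, surface, R0, t0, iters=50):
    """Align the sparse probe trajectory to the dense extracted surface,
    starting from the initial transform (R0, t0)."""
    tree = cKDTree(surface)           # for fast closest-point queries
    R, t = R0, t0
    for _ in range(iters):
        moved = trajectory @ R.T + t
        _, idx = tree.query(moved)    # step 1: closest-point correspondences
        R_d, t_d = best_rigid_transform(moved, surface[idx])  # step 2: minimize distance
        R, t = R_d @ R, R_d @ t + t_d # compose the incremental update
    return R, t
```

As the background notes, this loop converges to whichever local minimum is nearest the initial transform, which is exactly why a good initialization matters.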
Therefore, there is a need in the art for a system and method that uses an augmented reality interface to achieve robust registration and solve the above-mentioned limitations.

Summary of the Invention

An aspect of the present invention is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below.
Accordingly, one aspect of the present invention relates to a method of registration of a reference surface representation to a physical object, the method comprising: acquiring at least one reference surface representation of a physical object 110; rendering a virtual apparition of the at least one reference surface representation alongside the physical object through an augmented reality display to a user 120; and manipulating the virtual apparition of the at least one reference surface representation in order to visually align it with the physical object, thereby obtaining a first transform between the reference surface representation and the physical object 130.
Another aspect of the present invention relates to a system for registering a reference surface representation to a physical object, the system comprising: an augmented reality display (4) to acquire and render a virtual apparition (2) of at least one reference surface representation of a physical object (7) to a user; at least one hand-held probe (3) coupled and configured to allow the user to interact with the virtual apparition (2) of the at least one reference surface representation of the physical object rendered to the user via the augmented reality display (4) and to visually align the virtual apparition with the physical object (7); a tracking system (1) coupled and configured to track the orientation and position of the augmented reality display (4) and the at least one hand-held probe; a processor coupled and configured to receive signals from the at least one hand-held probe (3), process them and provide a transform between the reference surface representation and the physical object (7); and a memory coupled to the processor and configured to store the reference surface representation of the physical object.
Other aspects, advantages, and salient features of the invention will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses exemplary embodiments of the invention.


Brief description of the drawings

The above and other aspects, features, and advantages of certain exemplary embodiments of the present invention will be more apparent from the following description taken in conjunction with the accompanying drawings in which:
Figure 1 illustrates a system with the user wearing an optical see-through head mounted display and seeing a virtual apparition of the patient’s head anchored in physical space according to one embodiment of the present invention.
Figures 2A and 2B illustrate a system with the user holding a mobile tablet for seeing the virtual apparition of the patient's head anchored in physical space according to one embodiment of the present invention.
Figure 3 illustrates a method for registration according to one embodiment of the present invention.
Persons skilled in the art will appreciate that elements in the figures are illustrated for simplicity and clarity and may not have been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of various exemplary embodiments of the present disclosure.
Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.


Detailed description of the invention

The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments of the invention as defined by the claims and their equivalents. It includes various specific details to assist in that understanding but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the bibliographical meanings, but, are merely used by the inventor to enable a clear and consistent understanding of the invention. Accordingly, it should be apparent to those skilled in the art that the following description of exemplary embodiments of the present invention are provided for illustration purpose only and not for the purpose of limiting the invention as defined by the appended claims and their equivalents.
It is to be understood that the singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to “a component surface” includes reference to one or more of such surfaces.
By the term “substantially” it is meant that the recited characteristic, parameter, or value need not be achieved exactly, but that deviations or variations, including for example, tolerances, measurement error, measurement accuracy limitations and other factors known to those of skill in the art, may occur in amounts that do not preclude the effect the characteristic was intended to provide.
Figs. 1 through 3, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way that would limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged communications system. The terms used to describe various embodiments are exemplary. It should be understood that these are provided to merely aid the understanding of the description, and that their use and definitions, in no way limit the scope of the invention. Terms first, second, and the like are used to differentiate between objects having the same terminology and are in no way intended to represent a chronological order, unless where explicitly stated otherwise. A set is defined as a non-empty set including at least one element.

While the present invention focuses primarily on image guidance as applied to neurosurgery, the systems and methods described herein are sufficiently general to be applied wherever there is a need for alignment of volumetric data, point clouds or CAD data to their physical counterparts, including but not limited to image guidance in other fields of surgery or dimensional inspection of components.

The present invention describes systems and methods for reliable and accurate intraoperative registration of pre-operatively acquired CT/MRI image data to the patient using the surface trace matching technique. The surface trace matching technique involves aligning a sparse set of points measured across the patient's scalp with the dense point cloud of the scalp surface extracted from the patient's CT/MRI scan. Although the focus here is on neurosurgical applications, the systems and methods described herein are sufficiently general to solve problems in application areas where there is a need for aligning digital representations of real objects with their physical counterparts, including but not limited to image guidance in other fields of surgery, such as ENT and maxillofacial reconstruction, and industrial applications where CAD drawings or surface reconstructions of components need to be aligned to their physical samples.
Figure 1 illustrates a system with the user wearing an augmented reality display (4) and seeing a virtual model of the patient's head anchored in physical space. The augmented reality display (4) is a three-dimensional display device, which could be a stereoscopic optical or video see-through head-mounted display, a head-mounted virtual reality display, or any other three-dimensional display device such as a light-field or holographic display, not necessarily head-mounted. The system comprises a tracking system (1), a virtual apparition of the head surface (2), at least one hand-held tool (probe) (3), an augmented reality display (4), i.e. an optical see-through head-mounted display (OST-HMD), markers (5) attached rigidly to the display (4) and a real physical object, i.e. the patient's head. In an embodiment, the tracking system (1) is an optical tracking system or an electromagnetic tracking system. The user wears the augmented reality display (4). The orientation and position of the augmented reality display (4) are tracked by the tracking system (1) using the markers (5) attached rigidly to the display (4). Alternatively, the augmented reality display (4) might also have one or more onboard video cameras and/or depth cameras to track features in the environment to increase the allowed range of motion and enhance the stability of tracking.
The augmented reality display (4) (OST-HMD) may have a computing system integrated into it or may be connected to a computing system through a wired or wireless link. The position of the user's eyes is assumed to be fixed and is computed from the assumed distance of the eyes from the screen and the user's inter-pupillary distance (IPD). Alternatively, the position of the user's eye projection points may be determined either statically through a manual calibration procedure such as SPAAM or dynamically through real-time eye tracking, which makes it possible to exactly superimpose the perceived virtual objects onto their real counterparts.
The computing system receives a stream of the position and orientation of the OST-HMD (4) with respect to the world coordinate system from the tracking system (1), as well as the user's eye projection point positions in the OST-HMD's frame of reference. The viewing parameters of the OST-HMD (4) are also known to the computing system. Using this information, the computing system generates renderings of the virtual objects from the point of view of the user's eyes and relays them to the OST-HMD (4). The OST-HMD (4) can thus display virtual apparitions of the reference surface representation to the user that are perceived in 3D and anchored at fixed locations within the world frame.
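As an illustration of this transform chain, the following sketch (with 4×4 homogeneous-matrix conventions and all names assumed for exposition) maps a world-anchored point into an eye frame, given the tracked pose of the HMD in world coordinates and the eye projection point expressed in the HMD's frame.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 world-from-frame matrix from a rotation R and origin t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def world_to_eye(world_from_hmd, hmd_from_eye, p_world):
    """Map a world-anchored point into the eye frame by inverting the chain
    world <- HMD <- eye; the result feeds the per-eye rendering."""
    eye_from_world = np.linalg.inv(world_from_hmd @ hmd_from_eye)
    return (eye_from_world @ np.append(p_world, 1.0))[:3]
```

Because the tracker updates `world_from_hmd` every frame, a virtual apparition given a fixed world-space pose stays anchored in place as the user moves.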
In one embodiment, the present invention relates to a system for registering a reference surface representation to a physical object, the system comprising: an augmented reality display (4) to acquire and render a virtual apparition (2) of at least one reference surface representation of a physical object (7) to a user; at least one hand-held probe (3) coupled and configured to allow the user to interact with the virtual apparition (2) of the at least one reference surface representation of the physical object rendered to the user via the augmented reality display (4) and to visually align the virtual apparition with the physical object (7); a tracking system (1) coupled and configured to track the orientation and position of the augmented reality display (4) and the at least one hand-held probe; a processor coupled and configured to receive signals from the at least one hand-held probe (3), process them and provide a transform between the reference surface representation and the physical object (7); and a memory coupled to the processor and configured to store the reference surface representation of the physical object.
The system comprises one or more processors and one or more computer-readable storage media. The one or more processors are coupled and configured with the components of the system, that is, the optical see-through head-mounted display, the tracking system and the hand-held tool (probe), for aligning a virtual apparition of the reference surface representation to a physical object. The methods and algorithms corresponding to the system may be implemented in a computer-readable storage medium appropriately programmed for general purpose computers and computing devices. Typically, the processor, e.g. one or more microprocessors, receives instructions from a memory or like device and executes those instructions, thereby performing one or more processes defined by those instructions. Further, programs that implement such methods and algorithms may be stored and transmitted using a variety of media, e.g. computer-readable storage media, in a number of manners. A “processor” means any one or more microprocessors, Central Processing Unit (CPU) devices, computing devices, microcontrollers, digital signal processors or like devices.
The term “computer-readable storage medium” refers to any medium that participates in providing data, for example instructions, that may be read by a computer, a processor or a like device. Such a medium may take many forms, including but not limited to non-volatile media and volatile media. Non-volatile media include, for example, optical or magnetic disks and other persistent memory; volatile media include Dynamic Random Access Memory (DRAM), which typically constitutes the main memory. Transmission media include coaxial cables, copper wire and fiber optics, including the wires that comprise a system bus coupled to the processor and the computer-readable storage media for providing the data. Common forms of computer-readable storage media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a Compact Disc-Read Only Memory (CD-ROM), Digital Versatile Disc (DVD), any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a Random Access Memory (RAM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read. In general, the computer-readable programs may be implemented in any programming language. Some examples of languages that can be used include C, C++, C#, or JAVA. The programs may use various security, encryption and compression techniques to enhance the overall user experience. The software programs may be stored on or in one or more media as object code. A computer program product comprising computer-executable instructions embodied in a computer-readable medium comprises computer-parsable codes for the implementation of the processes of various embodiments.
The method and the system disclosed herein can be configured to work in a network environment comprising one or more computers that are in communication with one or more devices via a network. In an embodiment, the computers communicate with the devices directly or indirectly, via a wired medium or a wireless medium such as the Internet, a local area network (LAN), a wide area network (WAN) or the Ethernet or via any appropriate communications mediums or combination of communications mediums. Each of the devices comprises processors, examples of which are disclosed above, that are adapted to communicate with the computers. In an embodiment, each of the computers is equipped with a network communication device, for example, a network interface card, a modem, or other network connection device suitable for connecting to a network. Each of the computers and the devices executes an operating system, examples of which are disclosed above. While the operating system may differ depending on the type of computer, the operating system provides the appropriate communications protocols to establish communication links with the network. Any number and type of machines may be in communication with the computers.
In the present invention, the user holds a probe (3) which is tracked by the tracking system (1). The probe may have buttons (6) or any other conceivable input devices to allow the user to interact with the virtual objects being shown to the user. Alternatively, there may be a plurality of video cameras and/or depth cameras either mounted on the augmented reality display (4) or separately to sense the movements of the user’s hands to enable them to interact with the virtual objects.
The reference surface representation, i.e. the head surface, is perceived in 3D as a virtual apparition (2) floating in front of the user. Using the probe (3), the user may rotate and translate the virtual model (2) to align it with the real physical object, i.e. the patient's head (7). Alternatively, gesture input derived from tracking the user's hand may also be used for manipulating the virtual model. The manipulation may be accomplished through two UI features: arc-ball and/or drag-transform. The arc-ball feature allows the user to rotate the model by specifying an arc on the surface of a sphere centered about the model through a click-and-drag interaction; the rotation corresponding to the arc is applied to the model. The arc-ball feature also allows the user to translate the model by clicking a button and dragging the probe, where the translation seen by the probe tip during the drag is applied to the model.
The drag-transform feature allows the user to click, upon which the relative orientation and position of the model with respect to the probe are locked; the user then moves the probe to the desired orientation and position, manoeuvring the model as if it were rigidly attached to the probe.
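The two interactions can be sketched as follows. This is an illustrative reading of the UI features using standard arc-ball mathematics (Rodrigues' rotation formula) and rigid-pose composition; function names and conventions are assumptions, not the patent's exact implementation.

```python
import numpy as np

def arcball_rotation(p0, p1, center):
    """Rotation taking the click point p0 to the drag point p1, with both
    projected onto a sphere centered on the model (a common arc-ball scheme)."""
    a = (p0 - center) / np.linalg.norm(p0 - center)
    b = (p1 - center) / np.linalg.norm(p1 - center)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)
    if s < 1e-9:
        return np.eye(3)                  # no drag, no rotation
    axis /= s
    angle = np.arctan2(s, np.dot(a, b))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues' formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def drag_transform(model_pose, probe_pose_at_click, probe_pose_now):
    """Lock the model to the probe at click time, then follow the probe:
    model' = probe_now @ inv(probe_click) @ model (4x4 homogeneous poses)."""
    return probe_pose_now @ np.linalg.inv(probe_pose_at_click) @ model_pose
```

The drag-transform composition is what makes the model behave as if rigidly attached to the probe: whatever rigid motion the probe undergoes after the click is applied verbatim to the model.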
The augmented reality interface essentially creates an identity mapping between the perceived virtual world and the perceived physical world, which means that positions, orientations and transformations in the physical world have identical virtual counterparts. This enables the user to intuitively visualize and manipulate the arrangement of digital objects through physical motions. For instance, if the user aligns a virtual model of the patient's head with its physical counterpart, the transformation required to do so is trivially available due to this identity mapping between the perceived virtual and physical worlds. Current state-of-the-art visualizations, which are mostly based on displaying user-viewpoint-independent 2D perspective projections of the digital world, cannot achieve an identity mapping between the perceived virtual world and the perceived physical world. Thus, the user is forced to manually record locations of landmarks on both the virtual model and the physical patient to establish the mapping between the virtual and physical worlds. This process is both time consuming and error prone if sufficient features are not available to be used as landmarks and/or if the identified landmarks cannot be localized accurately.
The user, with at least one hand-held probe (3), selects a landmark on the reference surface representation of the physical object (7) and records the physical location of the landmark position on the reference surface representation of the physical object (7) using the at least one hand-held probe (3) as a point measurement tool; and acquires a sparse set of point measurements from the surface of the physical object (7) using the at least one probe (3) as a point measurement tool. The hand-held probe (3) has a tip which is physically traced over the surface of the physical object (7) to acquire the sparse set of point measurements at discrete intervals. In an alternative embodiment, the at least one hand-held probe (3) is a non-contact probe.
The processor estimates a final transform between the reference surface representation and the real-time physical object by iteratively minimizing the distance between the acquired set of point measurements of the physical object and the reference surface representation, using as a starting point the initial transform that is trivially available from the previous step due to the identity mapping between the perceived virtual world and the perceived physical world.
In an embodiment, the user first selects a landmark on the virtual model through a virtual probe which mimics the motion of the hand-held real probe. Display characteristics, such as colour or transparency, of a small region on the virtual model closest to the virtual probe tip are changed dynamically as the user moves the probe to aid perception. When the user has touched the virtual landmark position with the virtual probe tip, a button click records the location of the virtual landmark. Once the location is recorded, an indication, such as a sphere appearing at the selected location, is given to the user. The user next touches the corresponding real landmark with the real probe tip and clicks the button to record its location. Stereoscopic depth cues are more effective than lighting-based depth cues at enabling accurate localization of surface landmarks since they make it significantly easier to perceive and hence recognize the local surface profile. With each pairwise input of real and virtual landmark points, the transform which minimizes the error between the real-virtual correspondences is computed and applied to the virtual model online. Cross-sectional MRI slices of the patient's head at the probe tip location are also made available to the surgeon to enhance confidence in the correctness of the alignment. The user can also tweak the relative weights of the correspondences or delete bad correspondences to fine-tune the alignment.
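The transform that minimizes the error between pairwise real-virtual landmark correspondences is a classic weighted least-squares rigid alignment; a minimal sketch in the Kabsch/Umeyama form follows, with the optional weights standing in for the user-adjustable correspondence weights mentioned above (names assumed for illustration).

```python
import numpy as np

def weighted_landmark_transform(virtual_pts, real_pts, w=None):
    """Weighted least-squares rigid transform (R, t) mapping virtual landmarks
    onto their real counterparts; down-weighted or deleted correspondences
    simply get smaller or zero weights."""
    virtual_pts = np.asarray(virtual_pts, float)
    real_pts = np.asarray(real_pts, float)
    n = len(virtual_pts)
    w = np.ones(n) if w is None else np.asarray(w, float)
    w = w / w.sum()
    c_v = w @ virtual_pts                 # weighted centroids
    c_r = w @ real_pts
    H = (virtual_pts - c_v).T @ np.diag(w) @ (real_pts - c_r)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_r - R @ c_v
    return R, t                           # applied to the virtual model online
```

Recomputing (R, t) after each new landmark pair is cheap (one 3×3 SVD), which is what allows the alignment to update online as the surgeon works.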

Figures 2A and 2B illustrate a system with the user holding a mobile tablet as the augmented reality display for seeing the virtual model of the patient's head anchored in physical space according to one embodiment of the present invention.
The system comprises a display device 1, markers 2, a device camera 3, a device handle 4, a probe 5, a tracking system 6 and a display 7. The tracking system 6 is an optical tracking system or an electromagnetic tracking system. The display is a handheld display comprising at least one camera to capture a video stream of the environment.
Another embodiment for accomplishing the registration is depicted. The user holds a display device 1 in his/her hand. The hand-held device 1 has a display screen 7, which can be used to display virtual objects overlaid onto the environment, and a camera 3 to capture colour or monochrome images of the environment. The device 1 may also have a touchscreen interface. The device 1 also has markers 2 attached rigidly to it, which enable the device camera 3 to be tracked in real time with respect to the environment by the tracking system 6. Alternatively, the camera 3 and/or one or more attached video cameras or depth cameras might also be used to generate positioning input by tracking features in the environment to expand the workspace and/or enhance the stability of tracking.
The device 1 may have a computing system integrated into it or may be connected to a computing system through a wired or wireless link. The computing system receives a stream of the position and orientation of the device camera 3 with respect to the world coordinate system from the tracking system 6. The camera's intrinsic geometry and its Euclidean transformation with respect to the markers 2 are also known to the computing system. Using this information, the computing system generates renderings of the virtual objects from the point of view of the camera 3. The computing system also receives the images of the environment captured through the device camera 3, which it blends with the renderings of the virtual objects to generate an output stream of images sent to the display 7.
This allows the user to view the virtual object against the environment from different perspectives. It also allows the user to superimpose virtual imagery onto objects whose locations are known in the tracking system’s coordinate system. The user holds a probe 5 in his/her hand, the orientation and position of which is measured by the tracking system 6. The user can use the probe to interact with virtual objects while looking at the display 7. If the device 1 has a touchscreen, the interaction can happen through touch input as well.
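A minimal sketch of the overlay computation described above, assuming a standard pinhole camera model (the function and variable names are illustrative): a world-space point is mapped through the tracked camera pose and the intrinsic matrix K to the pixel at which the virtual rendering is blended with the captured video frame.

```python
import numpy as np

def project_to_image(K, cam_from_world, p_world):
    """Project a world-space point into pixel coordinates: x ~ K [R|t] p.
    K is the 3x3 intrinsic matrix; cam_from_world is the 4x4 camera pose
    derived from the tracker and the marker-to-camera calibration."""
    p_cam = (cam_from_world @ np.append(p_world, 1.0))[:3]
    u, v, z = K @ p_cam
    return np.array([u / z, v / z])   # pixel at which to blend the virtual rendering
```

Because the tracker supplies `cam_from_world` per frame, the projected virtual imagery stays registered to the environment as the tablet is moved to new viewpoints.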
After importing the patient imaging data, a representation of the patient’s head surface is available to be visualized as a virtual model. While looking at the display 7, the user may rotate and translate the virtual model to manually visually align it roughly with the patient’s head.
The manipulation may be accomplished through two UI features: arc-ball and/or drag-transform. The arc-ball feature allows the user to rotate the model using the probe 5 or touch input by specifying an arc on the surface of a sphere centered about the model through a click-and-drag interaction; the rotation corresponding to the arc is applied to the virtual model. The arc-ball feature also allows the user to translate the model by clicking a button and dragging the probe, where the translation seen by the probe tip during the drag is applied to the model. The drag-transform feature allows the user to click, upon which the relative orientation and position of the model with respect to the probe are locked; the user then moves the probe to the desired orientation and position, manoeuvring the virtual model as if it were rigidly attached to the probe.
In another embodiment, the reference surface representation i.e. virtual model is positioned by selecting landmark locations on the virtual model and matching them with their real counterparts. The current embodiment of the mobile augmented reality display interface creates an identity mapping between the virtual world and the physical world as perceived through the device camera 3. Both the virtual and the physical worlds are perceived through the means of a perspective projection on a 2D display. Once this alignment is achieved, the user can intuitively visualize and manipulate the arrangement of digital objects through physical motions. The user first selects a landmark on the virtual model through a virtual probe which mimics the motion of the real probe. When the user has touched the virtual landmark position with the virtual probe tip, a button click on the probe records the location of the virtual landmark. The corresponding real landmark is touched next with the real probe tip and a button click records the location of the corresponding real landmark.
The selection of landmarks is done by a virtual probe which appears superimposed on the real probe. Mutual occlusion between the virtual probe tip and the model is used by the user as the depth cue for selecting the landmark. When the user has touched the virtual landmark position with the probe tip, a button click on the probe or touch input is used to record the location of the virtual landmark. The virtual landmark may alternatively be selected by projecting a virtual laser pointer from the point of view of the device camera and using the intersection point of the virtual laser pointer with the virtual model as the selected location. After the landmark has been selected, the user may view the selection from multiple viewpoints by moving the device 1 around to check its correctness. The corresponding real landmark is probed next with the real probe tip and a button click on the probe/device or touch input is used to record the location of the corresponding real landmark. With each pairwise input of real and virtual landmark points, the transform which minimizes the error between the real-virtual correspondences is computed and applied to the virtual model online. The user can assess the accuracy of alignment by moving the device 1 around and observing it from multiple viewpoints. The user can also probe various points on the head surface and view the cross-sectional MRI slices of the patient’s head at the probe tip location to gain confidence in the correctness of the alignment. The user can also tweak the relative weights of the correspondences to fine-tune the initial alignment.
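The virtual-laser-pointer selection can be realized with a standard ray-mesh intersection. The following sketch uses the Möller–Trumbore ray/triangle test and picks the closest hit on the virtual model's triangle mesh; it is one possible implementation, with names and structure assumed.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns the distance along
    the ray to the hit, or None if the triangle is missed."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                       # ray parallel to the triangle plane
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def pick_landmark(origin, direction, triangles):
    """Cast the virtual laser from the device camera and return the closest
    intersection point on the virtual model's mesh, if any."""
    hits = [t for tri in triangles
            if (t := ray_triangle(origin, direction, *tri)) is not None]
    return origin + min(hits) * direction if hits else None
```

Here `origin` would be the device camera's optical center and `direction` its viewing ray, so the selected point is exactly where the virtual laser pointer meets the virtual model.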
Figure 3 illustrates a method for registration using surface matching according to one embodiment of the present invention.
The figure shows the procedure of registration. Alignment of volumetric pre-operative CT/MRI data with the patient anatomy is called registration and is a prerequisite for image guidance. The efficacy of image guidance depends entirely on the accuracy of registration.
In step 1, patient imaging data is imported from local or network storage into system memory. The head surface may be computed from the imaging data using a manually selected threshold followed by the extraction of a reference surface representation. The reference surface representation may be a triangulated mesh, a point set, a volumetric distance field or an analytic surface. The reference surface representation may alternatively be imported from a pre-computed file stored in memory or retrieved from a remote server over a wired or wireless link.
In step 2, the extracted head surface is presented to the user as a virtual model. The virtual model may be rendered as a virtual apparition either on an augmented reality display, which could be a stereoscopic optical or video see-through head-mounted display, a head-mounted virtual reality display, or any other three-dimensional display device such as a light-field or holographic display (not necessarily head-mounted), or on a mobile tablet display.
In step 3, the user visually aligns the reference surface representation, i.e. the virtual head surface, with the real physical object, i.e. the physical head surface, using at least one hand-held probe while looking through the display. The output of this step is the initial transform.
In an optional step 4, the user fine-tunes the initial transform by picking landmarks on the virtual head surface and recording the locations of their physical counterparts. Steps 2 to 4 are accomplished through two different embodiments, which are described in Figures 1 and 2.
In step 5, the user acquires a sparse set of point measurements of the physical object, i.e. the physical patient's head surface. This can be achieved, for example, by physically tracing the tip of the probe over the patient's head, or by using a separate non-contact optical probe, while collecting surface measurements at discrete intervals.
In step 6, the final alignment transform is computed by iteratively minimizing the distance between the collected trace points and the head surface representation. Step 6 uses the initial transform estimated in the previous step and a predefined translation and rotation uncertainty measure to calculate a restricted search space of translations and rotations. Alternatively, the translation and rotation uncertainty measure may not be predefined but provided by the user through interactive adjustments of the virtual model. A global search technique is triggered within the restricted search space to align the sparse measurements with the extracted skin surface, and the output solution is accepted as the final registration transform. In an alternative embodiment, the accurate initial transform is used directly as the starting point for a local search technique, and the output solution is accepted as the final registration transform to the physical object, i.e. the physical patient's head surface.
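As a concrete reading of step 6, the sketch below restricts a brute-force global search to the translation and rotation uncertainty around the initial transform and scores candidates by the mean distance of the trace points to the surface. It is a simple stand-in for the global techniques named earlier (Super4PCS, genetic algorithms, branch-and-bound), with the grid scheme and names assumed; the winning candidate would typically be refined with a local technique such as ICP.

```python
import itertools
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def restricted_global_search(trace, surface, R0, t0,
                             rot_bound_deg, trans_bound, steps=5):
    """Exhaustively sample rotations and translations within the uncertainty
    bounds around the initial transform (R0, t0) and return the candidate
    with the lowest mean trace-to-surface distance."""
    tree = cKDTree(surface)
    angles = np.linspace(-rot_bound_deg, rot_bound_deg, steps)
    shifts = np.linspace(-trans_bound, trans_bound, steps)
    best, best_cost = (R0, t0), np.inf
    for rx, ry, rz in itertools.product(angles, repeat=3):
        dR = Rotation.from_euler('xyz', [rx, ry, rz], degrees=True).as_matrix()
        R = dR @ R0                       # perturb rotation within the bound
        for dx, dy, dz in itertools.product(shifts, repeat=3):
            t = t0 + np.array([dx, dy, dz])
            d, _ = tree.query(trace @ R.T + t)
            cost = d.mean()               # mean distance of trace points to surface
            if cost < best_cost:
                best, best_cost = (R, t), cost
    return best
```

The key point the step illustrates: a tight uncertainty bound from the interactive alignment shrinks the candidate grid enough to make an otherwise impractical global search tractable.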
In one embodiment, the present invention relates to a method of registration of a reference surface representation to a physical object, the method comprising: acquiring at least one reference surface representation of a physical object, wherein the reference surface representation is acquired by importing imaging data of the physical object 110; rendering a virtual apparition of the at least one reference surface representation alongside the physical object through an augmented reality display to a user 120; and manipulating the virtual apparition of the at least one reference surface representation in order to visually align it with the physical object, thereby obtaining a first transform between the reference surface representation and the physical object 130.
In an embodiment, rendering is one of providing and/or displaying a virtual apparition of the at least one reference surface representation. The reference surface representation of a physical object is obtained by computing a point cloud of the physical object from imaging data. The augmented reality interface may be either a head-mounted stereoscopic display or a mobile tablet with an object-facing imaging device tracked in the same coordinate system as the collected sparse point set.
In the present invention, the method comprises manipulating the reference surface representation through at least one interactive probe to align it with the real-time physical object, wherein manipulating the reference surface representation is accomplished through the arc-ball and/or drag-transform features of the said at least one probe, and aligning the reference surface representation with the physical object in order to obtain an initial transform between the reference surface representation and the physical object.
The method of registration further comprises: selecting a plurality of landmarks on the reference surface representation of the physical object; recording point measurements of each of the selected landmark positions on the reference surface representation by clicking a button; and aligning the plurality of landmarks on the reference surface representation with the corresponding point measurements to modify the first transform. The method further comprises: recording a sparse set of point measurements on the surface of the physical object; and modifying the first transform between the reference surface representation and the physical object to obtain a refined transform by minimizing the distance between the sparse set of point measurements on the surface of the physical object and the reference surface representation.
The distance between the sparse set of point measurements on the surface of the physical object and the reference surface representation may be minimized by an iterative local search technique, using the first transform as an initial estimate, or by a global search technique, using the first transform to limit the search space. The final alignment transform is computed by minimizing the distance between the acquired set of point measurements of the physical object and the reference surface representation.
In another embodiment, the step of selecting landmarks on the reference surface representation of the physical object can be accomplished by selecting the landmarks on the virtual apparition of the reference surface representation by a virtual probe rendered to the user through the augmented reality display which mimics the position and orientation of a real hand-held probe.
The method and the system of aligning a reference surface representation to a physical object disclosed herein are not limited to a particular computer system platform, processor, operating system, or network. The method and the system disclosed herein are not limited to be executable on any particular system or group of systems, and are not limited to any particular distributed architecture, network, or communication protocol.
In an embodiment, the computer programs that implement the methods and algorithms disclosed herein are stored and transmitted using a variety of media, for example, the computer readable media in a number of manners. In an embodiment, hard-wired circuitry or custom hardware is used in place of, or in combination with, software instructions for implementing the processes of various embodiments. Therefore, the embodiments are not limited to any specific combination of hardware and software. The computer program codes comprising computer executable instructions can be implemented in any programming language. Examples of programming languages that can be used comprise C, C++, C#, Java®, JavaScript®, Fortran, Ruby, Perl®, Python®, Visual Basic®, hypertext preprocessor (PHP), Microsoft® .NET, Objective-C®, etc. Other object-oriented, functional, scripting, and/or logical programming languages can also be used. In an embodiment, the computer program codes, or software programs are stored on or in one or more mediums as object code. In another embodiment, various aspects of the method and the system of aligning disclosed herein are implemented in a non-programmed environment comprising documents created, for example, in a hypertext markup language (HTML), an extensible markup language (XML), or other format that render aspects of a graphical user interface (GUI) or perform other functions, when viewed in a visual area or a window of a browser program. In another embodiment, various aspects of the method and the system disclosed herein are implemented as programmed elements, or non-programmed elements, or any suitable combination thereof.
The foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting the method and the system disclosed herein. While the method and the system have been described with reference to various embodiments, it is understood that the words which have been used herein are words of description and illustration, rather than words of limitation. Furthermore, although the method and the system have been described herein with reference to particular means, materials, and embodiments, they are not intended to be limited to the particulars disclosed herein; rather, the method and the system extend to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. While multiple embodiments are disclosed, it will be understood by those skilled in the art, having the benefit of the teachings of this specification, that the method and the system disclosed herein are capable of modifications, and other embodiments may be effected and changes may be made thereto, without departing from the scope and spirit of the method and the system disclosed herein.
Those skilled in the art can make various alterations and modifications without departing from the scope and spirit of the invention. Therefore, the scope of the invention shall be defined and protected by the following claims and their equivalents.
Figs. 1-3 are merely representational and are not drawn to scale. Certain portions thereof may be exaggerated, while others may be minimized. Figs. 1-3 illustrate various embodiments of the invention that can be understood and appropriately carried out by those of ordinary skill in the art.
In the foregoing detailed description of embodiments of the invention, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description of embodiments of the invention, with each claim standing on its own as a separate embodiment.
It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein,” respectively.

CLAIMS:

1. A method of registration of a reference surface representation to a physical object, the method comprising:
acquiring at least one reference surface representation of a physical object 110;
rendering a virtual apparition of the at least one reference surface representation alongside the physical object through an augmented reality display to a user 120; and
manipulating the virtual apparition of the at least one reference surface representation in order to visually align it with the physical object, thereby obtaining a first transform between the reference surface representation and the physical object 130.

2. The method as claimed in claim 1, wherein the method of registration further comprises:
selecting a plurality of landmarks on the reference surface representation of the physical object;
recording point measurements of each of the selected landmark positions on the reference surface representation; and
aligning the plurality of landmarks on the reference surface representation with the corresponding point measurements to modify the first transform.

3. The method of registration as claimed in claim 1, wherein the method further comprises:
recording a sparse set of point measurements on the surface of the physical object; and
modifying the first transform between the reference surface representation and the physical object to obtain a refined transform by minimizing the distance between the sparse set of point measurements on the surface of the physical object and the reference surface representation.

4. The method as claimed in claim 3, wherein the distance between the sparse set of point measurements on the surface of the physical object and the reference surface representation is minimized by an iterative local search technique, using the first transform as an initial estimate.

5. The method as claimed in claim 3, wherein the distance between the sparse set of point measurements on the surface of the physical object and the reference surface representation is minimized by a global search technique, using the first transform to limit the search space.

6. The method as claimed in claim 1, wherein the reference surface representation of a physical object is obtained by computing a point cloud of the physical object from imaging data.

7. The method as claimed in claim 1, wherein manipulating the virtual apparition of the reference surface representation is accomplished through arc-ball and/or drag transform features.

8. A system for registering a reference surface representation to a physical object, the system comprising:
an augmented reality display (4) to acquire and render a virtual apparition (2) of at least one reference surface representation of a physical object (7) to a user;
at least one hand-held probe (3) coupled and configured to allow the user to interact with the virtual apparition (2) of the at least one reference surface representation of the physical object rendered to the user via the augmented reality display (4) and to visually align the virtual apparition with the physical object (7);
a tracking system (1) coupled and configured to track the orientation and position of the augmented reality display (4) and the at least one hand-held probe;
a processor coupled and configured to receive signals from the at least one hand-held probe (3), process them and provide a transform between the reference surface representation and the physical object (7); and
a memory coupled to the processor and configured to store the reference surface representation of the physical object.

9. The system as claimed in claim 8, wherein the augmented reality display (4) is a head mounted display.

10. The system as claimed in claim 8, wherein the augmented reality display is a handheld display comprising at least one camera to capture a video stream of the environment.

11. The system as claimed in claim 8, wherein the augmented reality display (4) further comprises at least one camera to capture the user’s hand movements and enables the user to manipulate the virtual apparition with hand gestures.

12. The system as claimed in claim 8, wherein the user, with at least one hand-held probe (3),
selects a landmark on the reference surface representation of the physical object (7) and records a physical location of the landmark position on the reference surface representation of the physical object (7) using the at least one hand-held probe (3) as a point measurement tool; and
acquires a sparse set of point measurements from the surface of the physical object (7) using the at least one probe (3) as a point measurement tool.

13. The system as claimed in claim 12, wherein the at least one hand-held probe (3) has a tip which is physically traced over the surface of the physical object (7) to acquire the sparse set of point measurements from the surface of the physical object (7) at discrete intervals.

14. The system as claimed in claim 12, wherein the at least one hand-held probe (3) is a non-contact probe.

15. The system as claimed in claim 12, wherein the processor further estimates a refined transform between the reference surface representation and the physical object (7) by minimizing the distance between the acquired set of point measurements from the surface of the physical object and the reference surface representation.

16. The system as claimed in claim 12, wherein the user selects a landmark on the reference surface representation of the physical object using a virtual probe, rendered to the user through the augmented reality display, which mimics the position and orientation of the at least one hand-held probe.

Documents

Application Documents

# Name Date
1 201821030991-PROVISIONAL SPECIFICATION [18-08-2018(online)].pdf 2018-08-18
2 201821030991-FORM FOR STARTUP [18-08-2018(online)].pdf 2018-08-18
3 201821030991-FORM FOR SMALL ENTITY(FORM-28) [18-08-2018(online)].pdf 2018-08-18
4 201821030991-FORM 3 [18-08-2018(online)].pdf 2018-08-18
5 201821030991-FORM 1 [18-08-2018(online)].pdf 2018-08-18
6 201821030991-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [18-08-2018(online)].pdf 2018-08-18
7 201821030991-EVIDENCE FOR REGISTRATION UNDER SSI [18-08-2018(online)].pdf 2018-08-18
8 201821030991-DRAWINGS [18-08-2018(online)].pdf 2018-08-18
9 201821030991-FORM 3 [17-08-2019(online)].pdf 2019-08-17
10 201821030991-ENDORSEMENT BY INVENTORS [17-08-2019(online)].pdf 2019-08-17
11 201821030991-DRAWING [17-08-2019(online)].pdf 2019-08-17
12 201821030991-CORRESPONDENCE-OTHERS [17-08-2019(online)].pdf 2019-08-17
13 201821030991-COMPLETE SPECIFICATION [17-08-2019(online)].pdf 2019-08-17
14 201821030991-FORM 13 [17-09-2019(online)].pdf 2019-09-17
15 201821030991-FORM-26 [17-09-2019(online)].pdf 2019-09-17
16 201821030991-RELEVANT DOCUMENTS [17-09-2019(online)].pdf 2019-09-17
17 Abstract1.jpg 2019-09-20
18 201821030991-ORIGINAL UR 6(1A) FORM 26-240919.pdf 2019-09-27
19 201821030991-FORM 18 [16-08-2022(online)].pdf 2022-08-16
20 201821030991-FER.pdf 2022-11-24
21 201821030991-FORM 4(ii) [24-05-2023(online)].pdf 2023-05-24
22 201821030991-Annexure [23-06-2023(online)].pdf 2023-06-23
23 201821030991-CLAIMS [23-06-2023(online)].pdf 2023-06-23
24 201821030991-COMPLETE SPECIFICATION [23-06-2023(online)].pdf 2023-06-23
25 201821030991-DRAWING [23-06-2023(online)].pdf 2023-06-23
26 201821030991-FER_SER_REPLY [23-06-2023(online)].pdf 2023-06-23
27 201821030991-OTHERS [23-06-2023(online)].pdf 2023-06-23
28 201821030991-Response to office action [07-04-2025(online)].pdf 2025-04-07

Search Strategy

1 201821030991E_23-11-2022.pdf