Navigational Aid

Abstract: A surgical robotic system for augmenting a representation of at least a portion of a surgical site for aiding in orienting the representation, the system comprising: a processor (521) configured to: receive an imaging device signal indicative of the location and orientation of an imaging device relative to a surgical site, augment a representation of at least a portion of the surgical site in dependence on the received imaging device signal, the augmentation indicating an orientation of the representation of the surgical site, receive a further imaging device signal indicative of an updated location and/or orientation of the imaging device, determine a change in at least one of the location and orientation of the imaging device in dependence on the imaging device signal and the further imaging device signal, and update the augmented representation in dependence on the determined change; and a display configured to display at least part of the augmented representation. Figure 5 is the representative figure.

Patent Information

Application #
Filing Date
11 March 2025
Publication Number
14/2025
Publication Type
INA
Invention Field
BIO-MEDICAL ENGINEERING
Status
Parent Application

Applicants

CMR SURGICAL LIMITED
1 Evolution Business Park, Milton Road, Cambridge CB24 9NG, United Kingdom.

Inventors

1. HARES, Luke David Ronald
c/o CMR Surgical Limited, 1 Evolution Business Park, Milton Road, Cambridge CB24 9NG, United Kingdom.
2. MAWBY, Andrew Robert
c/o CMR Surgical Limited, 1 Evolution Business Park, Milton Road, Cambridge CB24 9NG, United Kingdom.
3. SLACK, Mark Clifford
c/o CMR Surgical Limited, 1 Evolution Business Park, Milton Road, Cambridge CB24 9NG, United Kingdom.

Specification

NAVIGATIONAL AID

BACKGROUND

It is known to use robots for assisting and performing surgery. Figure 1 illustrates a typical surgical robot 100 which consists of a base 108, an arm 102, and an instrument 105. The base supports the robot, and is itself attached rigidly to, for example, the operating theatre floor, the operating theatre ceiling or a trolley. The arm extends between the base and the instrument. The arm is articulated by means of multiple flexible joints 103 along its length, which are used to locate the surgical instrument in a desired location relative to the patient. The surgical instrument is attached to the distal end 104 of the robot arm. The surgical instrument penetrates the body of the patient 101 at a port 107 so as to access the surgical site. At its distal end, the instrument comprises an end effector 106 for engaging in a medical procedure.

Figure 2 illustrates a typical surgical instrument 200 for performing robotic laparoscopic surgery. The surgical instrument comprises a base 201 by means of which the surgical instrument connects to the robot arm. A shaft 202 extends between the base 201 and an articulation 203. The articulation 203 terminates in an end effector 204. In figure 2, a pair of serrated jaws are illustrated as the end effector 204. The articulation 203 permits the end effector 204 to move relative to the shaft 202. It is desirable for at least two degrees of freedom to be provided to the motion of the end effector 204 by means of the articulation.

An imaging device can be located at a surgical site together with the surgical instrument. The imaging device can image the surgical site. The image of the surgical site provided by the imaging device can be displayed on a display for viewing by a surgeon carrying out the procedure. Laparoscopic (or minimally invasive) surgery, where the surgeon does not have a direct line of sight to the surgical site, can therefore be performed.

There is often a disconnect between the view of the surgical site and the body on which the surgeon operates. The imaging device used to image the surgical site can be provided at any desired orientation, and the orientation can be changed during a surgical procedure. A surgeon may therefore become disoriented during a procedure and may lose awareness of the position and/or orientation in the surgical site at which the displayed end effectors are located.

SUMMARY

According to an aspect of the present invention there is provided a surgical robotic system for augmenting a representation of at least a portion of a surgical site for aiding in orienting the representation, the system comprising: a processor configured to: receive an imaging device signal indicative of the location and orientation of an imaging device relative to a surgical site, augment a representation of at least a portion of the surgical site in dependence on the received imaging device signal, the augmentation indicating an orientation of the representation of the surgical site, receive a further imaging device signal indicative of an updated location and/or orientation of the imaging device, determine a change in at least one of the location and orientation of the imaging device in dependence on the imaging device signal and the further imaging device signal, and update the augmented representation in dependence on the determined change; and a display configured to display at least part of the augmented representation.
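
By way of a non-limiting illustration only, the following sketch (in Python, using assumed names such as ImagingDeviceSignal and update_augmentation that do not form part of the disclosure) shows one possible way a processor might determine the change between successive imaging device signals and update the augmented representation accordingly.

```python
# Illustrative sketch only: updating an orientation augmentation from
# successive imaging-device pose signals. All names are assumptions made
# for this example, not part of the disclosure.
from dataclasses import dataclass
import numpy as np


@dataclass
class ImagingDeviceSignal:
    position: np.ndarray        # (3,) location of the imaging device
    rotation: np.ndarray        # (3, 3) orientation of the imaging device


@dataclass
class OrientationAugmentation:
    rotation: np.ndarray        # orientation shown by the on-screen indicator
    translation: np.ndarray     # offset applied to the indicator


def update_augmentation(aug: OrientationAugmentation,
                        previous: ImagingDeviceSignal,
                        latest: ImagingDeviceSignal) -> OrientationAugmentation:
    """Determine the change in imaging-device pose and update the augmentation."""
    delta_rotation = latest.rotation @ previous.rotation.T
    delta_translation = latest.position - previous.position
    return OrientationAugmentation(
        rotation=delta_rotation @ aug.rotation,
        translation=aug.translation + delta_translation,
    )
```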

At least one of the received imaging device signal and the received further imaging device signal may comprise kinematics data relating to the imaging device. The processor may be configured to determine the augmentation in dependence on a feature in the representation of the surgical site. The processor may be configured to determine the augmentation in dependence on augmentation data. The augmentation data may comprise data associated with the representation. The augmentation data may comprise data indicative of a feature in the representation. The data indicative of a feature in the representation may be indicative of one or more feature group of a set of feature groups. The augmentation data may comprise an augmentation signal indicative of user input relating to the representation. The system may comprise a controller configured to generate the augmentation signal in response to user input at the controller.

The processor may be configured to determine the augmentation in dependence on image processing of at least a portion of the representation. The processor may be configured to track a location of a feature in the representation, and update the augmented representation in dependence on the tracked location and a change in at least one of the imaging device orientation, the imaging device location, a field of view of the imaging device and a field of view of a displayed portion of the representation.

The system may further comprise an imaging device, whereby the imaging device may be configured to image at least a portion of the surgical site and generate an image feed of the imaged portion. The processor may be configured to determine the augmentation in dependence on image processing of at least a portion of the generated image feed.

The representation may be obtained in dependence on at least one of 3D model data of a 3D model of at least a portion of the surgical site and the generated image feed. The processor may be configured to receive at least one of the 3D model data and the generated image feed.

The system may further comprise a memory coupled to the processor, the memory being configured to store at least one of the representation of the surgical site and the augmentation.

According to another aspect of the present invention there is provided a method for augmenting a representation of at least a portion of a surgical site for aiding in orienting the representation, the method comprising: receiving an imaging device signal indicative of the location and orientation of an imaging device relative to a surgical site, augmenting a representation of at least a portion of the surgical site in dependence on the received imaging device signal, the augmentation indicating an orientation of the representation of the surgical site, receiving a further imaging device signal indicative of an updated location and/or orientation of the imaging device, determining a change in at least one of the location and orientation of the imaging device in dependence on the imaging device signal and the further imaging device signal, updating the augmented representation in dependence on the determined change, and displaying at least part of the augmented representation.

The method may comprise determining the augmentation in dependence on a feature in the representation of the surgical site. The method may comprise determining the augmentation in dependence on a feature that is not present in a displayed portion of the representation. The method may comprise receiving augmentation data, and determining the augmentation in dependence on the received augmentation data. The augmentation data may comprise data indicative of a feature in the representation. The data indicative of a feature in the representation may be indicative of one or more feature group of a set of feature groups. The augmentation data may comprise an augmentation signal indicative of user input relating to the representation.

The method may comprise determining the augmentation in dependence on image processing of at least a portion of the representation. The method may comprise tracking a location of a feature in the representation, and updating the augmented representation in dependence on the tracked location and a change in at least one of the imaging device orientation, the imaging device location, a field of view of the imaging device and a field of view of a displayed portion of the representation.

Any one or more feature of any aspect above may be combined with any one or more feature of any other aspect above. Any apparatus feature may be written as a method feature where possible, and vice versa. These have not been written out in full here merely for the sake of brevity.

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. The mention of features in this Summary does not indicate that they are key features or essential features of the invention or of the claimed subject matter, nor is it to be taken as limiting the scope of the claimed subject matter.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will now be described by way of example with reference to the accompanying drawings.

In the drawings:

Figure 1 illustrates a surgical robot performing a surgical procedure;

Figure 2 illustrates a known surgical instrument;

Figure 3 illustrates a surgical robot;

Figure 4 illustrates a surgeon console;

Figure 5 schematically illustrates the configuration of a controller;

Figure 6 illustrates a method for augmenting a representation of a surgical site;

Figure 7 illustrates examples of augmentations;

Figure 8 illustrates examples of displayed portions of a representation;

Figure 9 illustrates a process for centring a portion of a representation; and

Figure 10 illustrates another example of an augmentation.

DETAILED DESCRIPTION

The following description describes the present techniques in the context of surgical robotic systems, though the features described below are not limited to such systems, but may be applied to robotic systems more generally. In some examples, the present techniques may be applied to robotic systems that operate remotely. In some examples, the present techniques may be applied at sites where the user of the robotic system may become disoriented when operating the robotic system. Some examples of situations in which the present techniques may be useful include those that make use of ‘snake-like’ robots for exploration, investigation or repair.

Robotic systems can include manufacturing systems, such as vehicle manufacturing systems, parts handling systems, laboratory systems, and manipulators such as for hazardous materials or surgical manipulators.

Figure 3 illustrates a surgical robot having an arm 300 which extends from a base 301. The arm comprises a number of rigid limbs 302. The limbs are coupled by revolute joints 303. The most proximal limb 302a is coupled to the base by a proximal joint 303a. It and the other limbs are coupled in series by further ones of the joints 303. Suitably, a wrist 304 is made up of four individual revolute joints. The wrist 304 couples one limb (302b) to the most distal limb (302c) of the arm. The most distal limb 302c carries an attachment 305 for a surgical instrument 306. Each joint 303 of the arm has one or more motors 307 which can be operated to cause rotational motion at the respective joint, and one or more position and/or torque sensors 308 which provide information regarding the current configuration and/or load at that joint. Suitably, the motors are arranged proximally of the joints whose motion they drive, so as to improve weight distribution. For clarity, only some of the motors and sensors are shown in figure 3. The arm may be generally as described in our co-pending patent application PCT/GB2014/053523.

The arm terminates in the attachment 305 for interfacing with the instrument 306. Suitably, the instrument 306 takes the form described with respect to figure 2. The instrument has a diameter less than 8mm. Suitably, the instrument has a 5mm diameter. The instrument may have a diameter which is less than 5mm. The instrument diameter may be the diameter of the shaft. The instrument diameter may be the diameter of the profile of the articulation. Suitably, the diameter of the profile of the articulation matches or is narrower than the diameter of the shaft. The attachment 305 comprises a drive assembly for driving articulation of the instrument. Movable interface elements of the drive assembly interface mechanically engage corresponding movable interface elements of the instrument interface in order to transfer drive from the robot arm to the instrument. One instrument is exchanged for another several times during a typical operation. Thus, the instrument is attachable to and detachable from the robot arm during the operation. Features of the drive assembly interface and the instrument interface aid their alignment when brought into engagement with each other, so as to reduce the accuracy with which they need to be aligned by the user.

The instrument 306 comprises an end effector for performing an operation. The end effector may take any suitable form. For example, the end effector may be smooth jaws, serrated jaws, a gripper, a pair of shears, a needle for suturing, a camera, a laser, a knife, a stapler, a cauteriser, a suctioner. As described with respect to figure 2, the instrument comprises an articulation between the instrument shaft and the end effector. The articulation comprises several joints which permit the end effector to move relative to the shaft of the instrument. The joints in the articulation are actuated by driving elements, such as cables. These driving elements are secured at the other end of the instrument shaft to the interface elements of the instrument interface. Thus, the robot arm transfers drive to the end effector as follows: movement of a drive assembly interface element moves an instrument interface element which moves a driving element which moves a joint of the articulation which moves the end effector.

Controllers for the motors, torque sensors and encoders are distributed within the robot arm. The controllers are connected via a communication bus to a control unit 309. The control unit 309 comprises a processor 310 and a memory 311. The memory 311 stores in a non-transient way software that is executable by the processor to control the operation of the motors 307 to cause the arm 300 to operate in the manner described herein. In particular, the software can control the processor 310 to cause the motors (for example via distributed controllers) to drive in dependence on inputs from the sensors 308 and from a surgeon command interface 312. The control unit 309 is coupled to the motors 307 for driving them in accordance with outputs generated by execution of the software. The control unit 309 is coupled to the sensors 308 for receiving sensed input from the sensors, and to the command interface 312 for receiving input from it. The respective couplings may, for example, each be electrical or optical cables, and/or may be provided by a wireless connection. The command interface 312 comprises one or more input devices whereby a user can request motion of the end effector in a desired way. The input devices could, for example, be manually operable mechanical input devices such as control handles or joysticks, or contactless input devices such as optical gesture sensors. The software stored in the memory 311 is configured to respond to those inputs and cause the joints of the arm and instrument to move accordingly, in compliance with a pre-determined control strategy. The control strategy may include safety features which moderate the motion of the arm and instrument in response to command inputs. Thus, in summary, a surgeon at the command interface 312 can control the instrument 306 to move in such a way as to perform a desired surgical procedure. The control unit 309 and/or the command interface 312 may be remote from the arm 300.

Suitably the imaging device is configured to output an image signal or image feed, representative of an image of a surgical site at which the imaging device is located, and/or comprising an image of the surgical site. The image signal may comprise a video signal.

Whilst the above description refers to a single screen as a display device, in some examples the robotic surgical system comprises a plurality of display devices, or screens. The screens are suitably configured to display the image as a two-dimensional image and/or as a three-dimensional image.

The screens can be provided on a single user console, or two or more consoles can comprise at least one screen each. This permits additional viewing screens which can be useful for allowing people other than the console user to view the surgical site, for example for training, and/or for viewing by other people in the operating room.

Representation of the surgical site

A representation of a surgical site can be displayed on the display, to permit the surgeon to see the site and to enable them to perform the surgical procedure. The representation can be, or can comprise, an image feed from an imaging device such as an endoscope located at the surgical site. The representation of the surgical site can comprise a 2D or 3D representation. The 3D representation can be generated from a 2D original representation by suitable processing. In some examples, the 3D representation can comprise a 3D model, for example a 3D model of a body, or of a portion of a body. For instance, where a surgical procedure is to be carried out in an abdominal cavity, the 3D model can represent such an abdominal cavity. The 3D model may, in some examples, be derived at least in part from data such as physical data relating to a patient. For example, the data can comprise data from a scan such as an MRI scan. The 3D model may be modified or selected according to knowledge of a patient to undergo a surgical procedure, for example in dependence on knowledge of that person’s physiology. In some examples, the representation is based on both a captured image feed and a model such as a 3D model of the site.

The 3D model is suitably a 3D anatomical model. The model may comprise a simulation or simulated data. The model may comprise model data that has been built up from data obtained in relation to earlier procedures. The earlier procedures may be of the same or similar type to the procedure being planned or performed.

The representation of the surgical site is likely to change during a surgical procedure. For example, the representation may change as a patient moves (for example where the orientation of a patient changes such as where a patient table is tilted) and/or as the imaging device changes its position (or location) and/or orientation relative to the surgical site. The imaging device may pan through a surgical site, and/or zoom in or out. Such changes can also change the representation of the site displayed on the display. The portion of a model displayed on the display can change, for example during a surgical procedure. The portion of the model displayed may change in dependence on a determined position of the end effector. The position of the end effector may be determined, for example, in dependence on control signals sent to the end effector and/or kinematics data of the end effector (or system more generally). The portion of a model displayed on the display can change by changing the digital zoom of the representation, i.e. the zoom of the imaging device itself need not change; the change can be effected by processing performed on the representation of the surgical site.
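
By way of a non-limiting illustration, the sketch below (Python; the function name and the assumption that the end effector position has already been projected to pixel coordinates are examples only) shows one way a digital zoom could select the displayed portion of the representation around a determined end effector position, without changing the zoom of the imaging device itself.

```python
# Illustrative sketch: a digital zoom in which the displayed portion of the
# representation is a crop centred on a determined end-effector position.
# The projection to pixel coordinates is assumed to have been done elsewhere.
import numpy as np


def displayed_portion(representation: np.ndarray,
                      centre_px: tuple[int, int],
                      zoom: float) -> np.ndarray:
    """Return the crop of `representation` shown on the display."""
    h, w = representation.shape[:2]
    crop_h, crop_w = int(h / zoom), int(w / zoom)
    cy, cx = centre_px
    # Keep the crop fully inside the representation.
    top = min(max(cy - crop_h // 2, 0), h - crop_h)
    left = min(max(cx - crop_w // 2, 0), w - crop_w)
    return representation[top:top + crop_h, left:left + crop_w]
```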

Typically, the displayed representation permits the surgeon to see where the end effector is, and to control the end effector accordingly so as to perform the surgical procedure. Since the surgeon sees the portion of the surgical site that is displayed, rather than the body as a whole, the surgeon may become disoriented during the procedure. In other words, the surgeon may lose track of which part of the surgical site is being viewed, and/or at what orientation that part of the surgical site is being viewed. This can have implications including lengthening the time taken to perform a procedure, due to additional time being required for the surgeon to correctly orient themselves before continuing. The present inventors have realised that additional information may be provided to the surgeon (and/or to other members of operating room staff), preferably during the surgical procedure. Such additional information may be provided by augmenting the representation of the surgical site, and/or by augmenting the display of the representation of the surgical site.

Augmentation

Augmenting the representation of the surgical site can permit the inclusion of visual aids to a user such as a surgeon, which can for example aid in orienting the displayed representation of the surgical site. In examples discussed herein, ‘orienting’ suitably refers to working out or gaining an understanding of the orientation of the displayed representation. ‘Orienting’ can include appreciating what orientation the displayed representation is in. The approach of the present techniques can enable a surgical procedure to be completed more quickly and/or more accurately. Augmentation may provide an enhanced human-machine interaction, such as between a surgeon using the system to control an end effector and the operation of the end effector, or of the robotic system in general. As will be explained herein, such augmentation can enable users of the system to perform technical tasks more repeatably, more reliably, and/or more quickly, and so on.

Augmenting the representation of the surgical site can be done before, during and/or after a surgical procedure. Optionally, at least one augmentation is displayed on or as part of the representation during a surgical procedure. In some examples this can enable a user such as a surgeon to more accurately orient the representation in real time as a procedure is performed.

Augmentations can relate to one or more of a path taken by a surgeon through a site (e.g. by one or more end effector controlled by the surgeon), actions taken at one or more points at the site, features present (or absent) at particular locations in the site, movement of portions of the site, and so on.

Augmentations may be added automatically or in response to user input. Where augmentations are added in response to user input, the user may specify a part of the augmentation and another part of the augmentation may occur automatically. An example of this is feature identification. A processor such as an image processor may monitor the representation of the site and determine which features are present, or are likely to be present in the representation, or in the portion of the representation displayed. The image processor, or another processor operating in dependence on an output from the image processor, may automatically label or tag one or more determined feature. Alternatively, a user may indicate a feature, and the system may automatically select the label or tag to apply to that feature, for example in dependence on an automatic feature determination, such as by image recognition or image matching.

The above example illustrates one use of the present techniques. It is possible for a user to tag or identify a feature such as an anatomical feature in a displayed image feed or a model of a surgical site. An augmentation can be added to the representation in dependence on a user input. For example, the user input can indicate the feature, or the location of the feature, to which an augmentation is to be added. The user may also indicate or specify the nature of the augmentation which is to be added, for example a name or label for the augmentation, and/or the type of augmentation.

In a surgical robotic system, the positions of portions of the surgical robot within 3D space are typically known. For example, the location in 3D space of an end effector is known, or can be determined based on kinematic control information. Such kinematic data is already present in the system, so there may be no need to calculate additional kinematic data. Thus, the present techniques can advantageously make use of existing information in the system to provide additional benefits to users of the system. The augmentation may be an augmentation in 3D space relating to the representation of the surgical site. Where the representation moves in the display, the associated movement of the augmentation can take account of depth, rotation and/or lateral translation, and so on. This approach can give an increased accuracy of the location of the augmentation, and so an increased accuracy of interactions with the system that are based on the augmentation.
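
As a non-limiting illustration of how an augmentation located in 3D space might be re-drawn as the representation moves, the following Python sketch projects an augmentation anchor into display coordinates using the imaging device pose (assumed here to be available from kinematic data) and a simple pinhole camera model; the model and parameter names are assumptions made for the example.

```python
# Illustrative sketch: re-projecting an augmentation anchored in 3D space into
# the 2D display as the imaging-device pose changes. A pinhole camera model
# is assumed; the anchor is assumed to lie in front of the camera.
import numpy as np


def project_augmentation(anchor_world: np.ndarray,     # (3,) point in 3D space
                         cam_rotation: np.ndarray,     # (3, 3) world-to-camera rotation
                         cam_position: np.ndarray,     # (3,) camera origin in world frame
                         focal_px: float,
                         principal_point: tuple[float, float]) -> tuple[float, float]:
    """Return the pixel position at which the augmentation should be drawn."""
    p_cam = cam_rotation @ (anchor_world - cam_position)   # transform into camera frame
    x, y, depth = p_cam
    u = principal_point[0] + focal_px * x / depth
    v = principal_point[1] + focal_px * y / depth
    return u, v
```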

In one example, a user such as a surgeon may wish to indicate a position on an anatomical feature to make an incision or to insert a stitch. This can be useful where a surgeon finds a suitable location for the incision or stitch, but may wish to perform another task before making the incision or inserting the stitch. Enabling the surgeon to add such an augmentation enables the surgeon to return to the location indicated by the augmentation quickly and accurately. The location of such an augmentation relative to a current location can aid in navigating through the surgical site and/or in orienting the representation of the surgical site. For example, viewing such augmentations on a displayed portion of the representation can enable a user to quickly and accurately determine the part of the site that is being displayed, facilitating a more efficient human-machine interaction. In the context of a surgical procedure, this can reduce the operation time, by permitting the surgeon to minimise time required to re-locate identified locations. Reducing the operation time is beneficial to patients, as it can reduce the risk of complications and aid recovery time. Reductions in operation time may be beneficial to patients and hospitals, because this can lead to an increase in the number of operations that may be performed, which may in turn lead to reductions in per-operation cost.

The augmentation(s) need not be displayed on or as part of the representation. For example, an augmentation may be added to one portion of the representation, then the displayed image changed (such as by zoom or panning of the imaging device and/or the representation) such that the augmentation(s) are no longer visible on the display. In this case, the system can be configured to indicate the presence and/or location of the augmentation(s) by adding a further augmentation, which may be of a different type, to the representation or to the displayed portion of the representation. Such an augmentation may be indicative of the direction and/or distance to or towards a feature of the representation, such as another augmentation and/or an anatomical feature in the representation.

This will be described in more detail below.
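
One non-limiting way of indicating the direction and distance to an augmentation that is no longer visible on the display is sketched below in Python; the function name, the clamping of the indicator to the display boundary, and the assumption that the augmentation's projected position and distance are already known are all examples rather than requirements.

```python
# Illustrative sketch: when an augmentation's projected position falls outside
# the displayed portion, a further augmentation (here an edge arrow with a
# distance value) can indicate its direction and distance.
import numpy as np


def edge_indicator(target_px: np.ndarray,          # (2,) projected augmentation position
                   display_size: tuple[int, int],  # (width, height) in pixels
                   distance_mm: float):
    """Return None if the target is on screen, else (edge_point, angle, distance)."""
    w, h = display_size
    centre = np.array([w / 2.0, h / 2.0])
    if 0 <= target_px[0] < w and 0 <= target_px[1] < h:
        return None                                  # visible: no extra indicator needed
    direction = target_px - centre
    angle = float(np.arctan2(direction[1], direction[0]))
    # Clamp the indicator to the display boundary along the direction vector.
    scale = min((w / 2.0) / abs(direction[0]) if direction[0] else np.inf,
                (h / 2.0) / abs(direction[1]) if direction[1] else np.inf)
    edge_point = centre + direction * scale
    return edge_point, angle, distance_mm
```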

In some examples, an action may be performed automatically, or in an assisted manner. For example, a common action such as tying a knot may be started by a surgeon, and the remainder of the action can be completed automatically so as to assist the surgeon. Such assisted actions are useful in repetitive movements or actions. The augmentation may be provided at a location where the automatic action is to be performed, or where assistance in performing the action is required. For instance, the surgeon can indicate one or more locations at which a knot is to be tied, and/or indicate to the system part-way through tying a knot that assistance in tying the knot is required. Once an assistive action such as tying a knot has been performed, a pre-existing augmentation may be modified to indicate that an assistive action has been performed. Additionally or alternatively, a new augmentation may be provided to indicate that an assistive action has been performed. The location of the new augmentation can provide an indication of where the assistive action was performed.

The identification of suitable locations (in the most appropriate places, orientations, spacings from one another), for example locations at which a surgeon may perform a task and/or at which the system may perform an assistive action, is suitably done by a surgeon. In some examples, the identification of suitable locations may be performed automatically. Such automatic identification may be performed in dependence on previously identified locations, for example locations which are contained in a model of the surgical site. The model may be built up by considering one or more previous procedures. Hence the automatic identification may benefit from a combined knowledge of earlier procedures, which may have been carried out by the same or another surgeon or surgeons compared to a procedure about to be carried out. In this way, a more junior surgeon may benefit from the knowledge of more experienced colleagues, without those more experienced colleagues needing to be present during the procedure or to be directly consulted before the procedure. The automatic identification of locations may be subject to confirmation by the user of the system. For example, the system may suggest optional locations. The suggested optional locations may be associated with a confidence factor, which can be indicative of the confidence that that suggested optional location is appropriate for the current procedure. The confidence factor may be determined in any appropriate way. In some examples, the confidence factor may be determined by determining a similarity between a previous procedure and the current procedure and/or a similarity between a model used in a previous procedure and the model used in the current procedure. The confidence factor may be determined in dependence on the user who performed one or more of the previous procedures. For example, a confidence factor may be higher, indicating a greater confidence, where it is based at least in part on a procedure carried out by a relatively more experienced surgeon.
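
Purely as an illustration, a confidence factor of the kind described could be formed as a weighted combination of such similarities and of the experience of the surgeon(s) whose procedures inform the suggestion; the weights and inputs in the following Python sketch are assumptions, not a prescribed formula.

```python
# Illustrative sketch: one possible way a confidence factor for a suggested
# location could be formed. The weighting scheme is an assumption.
def confidence_factor(procedure_similarity: float,   # 0..1, current vs previous procedure
                      model_similarity: float,       # 0..1, current vs previous model
                      surgeon_experience: float,     # 0..1, relative experience level
                      weights=(0.4, 0.3, 0.3)) -> float:
    w_proc, w_model, w_exp = weights
    score = (w_proc * procedure_similarity
             + w_model * model_similarity
             + w_exp * surgeon_experience)
    # Clamp to the 0..1 range so the factor can be displayed consistently.
    return max(0.0, min(1.0, score))
```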

The surgeon may indicate the locations, for example by identifying locations and/or by confirming suggested locations, such that the system can augment the representation of the surgical site accordingly. Where actions are to be performed autonomously, or partly autonomously, the surgeon can then indicate to the system that such at least partly autonomous actions may be performed.

A tag or augmentation may be added manually or automatically. Automatic augmentation may occur in dependence on image recognition. This will be discussed in more detail below. The augmentation may be added in dependence on an indicator. The indicator may be displayed on the display. The indicator may comprise a cursor or other pointer. The indicator may indicate a point or region of the display. For example, the indicator may be in the form of a shape such as a circle, an interior part of which is indicated by the indicator. The indicator may comprise an outer edge that appears solid and/or coloured on the display, for example the circumference of a circular indicator may be a solid black or white line. The outer edge may flash to highlight the edge. The indicator may have a different colour, contrast, 3D appearance and/or focus compared to a remainder of the display. For example, the interior of an indicator shape may be in colour, and the exterior of the indicator may be in black and white. In some examples, the interior of the indicator may be emphasised by being in focus whilst the exterior of the indicator may be out of focus. Any other suitable way of highlighting the indicator may be used.

An augmentation may be given a label. The labelling of augmentations may occur manually, automatically or some combination of manually and automatically. For example, once an augmentation has been added, a label can be specified by a user. This can be done by a user entering data, for example by entering a label on a keyboard, via a voice interface, via a gesture interface, via a pointer interface such as a mouse, and/or in any other suitable way. Combinations of these approaches to entering data may be used. In some examples, a voice-responsive input device, such as a microphone, can be used to input a label. A user may speak a label aloud which can be applied to the augmentation. A label may be added to an augmentation by selecting the label from a set of possible labels. The selection of a label can be performed via a menu system. The menu system may comprise all possible labels. The menu system may comprise, or make available, a set of possible labels. The set of possible labels may be pre-selected. The set of possible labels may be selected in dependence on at least one of a user profile for the user, a surgical procedure being performed, a location of the surgical site, and so on. In some examples a user may pre-define labels for use. The user-defined labels may be in addition to system-defined labels. The labels may be associated with one or more surgical procedure, such that a sub-set of the labels may be made available for the relevant surgical procedure. For example, the label ‘artery’ may be appropriately available for a wide range of procedures. The label ‘kidney’ need not be made available where the kidney will not be visible at the surgical site of a given procedure.
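
As a non-limiting sketch of how a sub-set of labels might be made available for a given procedure, the following Python example filters a label table by procedure; the table contents, procedure names and function names are assumptions for illustration only.

```python
# Illustrative sketch: making a sub-set of labels available for a given
# procedure, so that, for example, 'artery' is offered widely while 'kidney'
# is only offered where the kidney may be visible. The table is an assumption.
SYSTEM_LABELS = {
    "artery": {"cholecystectomy", "nephrectomy", "hysterectomy"},
    "kidney": {"nephrectomy"},
    "ureter": {"nephrectomy", "hysterectomy"},
}


def available_labels(procedure: str, user_defined: set[str] = frozenset()) -> set[str]:
    """Return the labels offered in the menu for the given surgical procedure."""
    selected = {label for label, procedures in SYSTEM_LABELS.items()
                if procedure in procedures}
    # User-defined labels may be offered in addition to system-defined labels.
    return selected | set(user_defined)
```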

Image recognition may be used to assist in labelling an augmentation. Image recognition algorithms may select a set of possible labels to apply to an augmentation. The image recognition algorithms may select the set in dependence on the model, for example the 3D model, and/or the location of the surgical site.

The label may comprise a text label and a label highlight. The text label can, for example, provide the name of the feature being augmented, such as ‘kidney’ or ‘artery’. Any desired level of detail may be provided in the text label. The label highlight may comprise a visual indication of a point or a region of the representation. For example the edges of a feature, such as the edges of an artery or the edges of an organ such as a kidney, may be highlighted. Additionally or alternatively an interior region of a feature, for example a region bounded by edges of the feature, may be highlighted. The highlighting may take the form of one or more of an outline, shading, colouring, a change in 2D/3D appearance, differing contrast and so on. In one example, an organ such as the kidney (or that part of the organ visible in the representation) can be shaded. This can assist the user by providing a clear indication of the whereabouts of that feature, the kidney in this example, in the representation. Shading a feature may be desirable where a surgical procedure does not envisage interacting with that feature. Where a surgical procedure envisages interacting with a feature to be highlighted, an alternative form of highlighting, such as one that does not obscure the feature, may be used. The edges of the feature to be highlighted can be determined manually and/or automatically. In one example, a surgeon can guide the indicator to a point on the edge, and indicate to the system that this point represents a point on the edge of a feature. The surgeon may trace out the edge, or provide one or more points along the edge, based on which the remainder of the edge can be interpolated, extrapolated and/or otherwise determined by the system, for example by image analysis. For example, the system may be configured to determine a difference in one or more image characteristic to either side of the edge (for example one or more of colour, contrast, luminosity, depth and so on) and to trace a line through the image that follows the change in that one or more characteristic. Once the feature has been labelled, the system may perform image analysis and/or tracking to consistently label that feature as the representation changes.
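
One possible image-analysis approach of this kind is sketched below in Python: from a user-indicated seed point, the boundary is followed by stepping perpendicular to the local luminosity gradient (the gradient points across the edge, so its perpendicular runs along it). This is illustrative only; it is not the only way such an edge could be determined, and the function name is an assumption.

```python
# Illustrative sketch: tracing an edge from a user-indicated seed point by
# following the direction perpendicular to the luminosity gradient.
import numpy as np


def trace_edge(luminosity: np.ndarray, seed: tuple[int, int], steps: int = 200):
    gy, gx = np.gradient(luminosity.astype(float))   # gradient across rows and columns
    points = [np.array(seed, dtype=float)]
    for _ in range(steps):
        r, c = points[-1].astype(int)
        if not (0 <= r < luminosity.shape[0] and 0 <= c < luminosity.shape[1]):
            break                                     # left the image
        g = np.array([gy[r, c], gx[r, c]])
        norm = np.linalg.norm(g)
        if norm < 1e-6:
            break                                     # no discernible edge here
        step = np.array([-g[1], g[0]]) / norm         # perpendicular to the gradient
        points.append(points[-1] + step)
    return points
```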

The augmentation may be added or selected by a user of the system. For example, a controller, which may comprise an input device, may be configured to output an augmentation signal. The augmentation signal may be associated with the location of the indicator on the display. The augmentation signal may be indicative of the location on the display of the indicator. For example, where a user controls the controller to output the augmentation signal, the system may be configured to add an augmentation at the location of the indicator. In some examples, where a menu system is to be navigated, the location of the indicator in the menu system (i.e. a menu system value, or label) may be selected by activating the controller so as to output the augmentation signal.

The controller may be configured to output the augmentation signal in response to activation of a user control at the controller. The user control may comprise a button or switch. The user control may comprise a keyboard. The user control may comprise a resistive sensor, a capacitive sensor, a track ball, a joystick or a thumbstick, a voice sensor, a gesture sensor and/or any combination of these and other user input devices. The controller is, in some examples, configured to output the augmentation signal in response to receiving user input at the input device. The input device may comprise the user control. The input device suitably enables the user to control an indicator on a display. For example, movement of a joystick or thumbstick on the input device may cause a corresponding movement of an indicator on the display. In some examples, movement of the input device may cause a corresponding movement of the indicator on the display. For instance, a hand controller may be moved in three dimensions via, for example hand controller arm links and gimbal joints. Movement of the indicator may be based on at least one dimension of movement of the hand controller. For example, movement in two dimensions (such as those defining an x-y plane or a y-z plane) may control movement of the indicator in two dimensions on the display. This approach allows the indicator to be moved around the display by the user in an easy and intuitive manner. In one example, the movement in three dimensions of the hand controller may be used to control the positions of end effectors, and a thumbstick on one (or in some examples, both) input devices can be used to control the indicator position in the representation of the surgical site.
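
By way of illustration only, the following Python sketch maps thumbstick deflection on an input device to movement of the on-screen indicator, independently of the hand-controller motion used to drive the end effectors; the gain and clamping behaviour are assumptions for the example.

```python
# Illustrative sketch: moving the indicator on the display in response to
# thumbstick deflection, while the hand controller continues to drive the
# end effector. Gain and display size handling are assumptions.
def move_indicator(indicator_px, thumbstick, display_size, gain=8.0):
    """thumbstick: (x, y) deflection in -1..1; returns the new pixel position."""
    x = indicator_px[0] + gain * thumbstick[0]
    y = indicator_px[1] + gain * thumbstick[1]
    # Keep the indicator within the displayed representation.
    x = max(0.0, min(display_size[0] - 1, x))
    y = max(0.0, min(display_size[1] - 1, y))
    return x, y
```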

The controller may comprise or be part of a user console of a surgical robot.

An example illustration of a user console such as a surgeon console 400 is shown in figure 4. A user such as a surgeon can control the robot arms 302 and the instruments 320 coupled to the robot arms 302 via the input devices 304 at the surgeon console 400 and can manipulate the robot arms and/or the instruments as desired. As illustrated in figure 4, the surgeon console 400 comprises a contactless input device 410 which comprises at least one of a gesture sensor such as an optical gesture sensor and a voice sensor. The surgeon console 400 comprises a touchscreen input device 420. Additionally or alternatively, the display 306 may comprise a touchscreen input. The surgeon console 400 comprises a foot-operable input device 430 such as a foot pedal. One of each of devices 410, 420, 430 is shown in figure 4, but it will be appreciated that any number of any combination of these devices may be provided in other examples. Not all input devices, or all types of input devices, need be provided in all examples.

A schematic diagram of the configuration of a portion of a controller such as a user console 400 is illustrated in figure 5. The controller 500 comprises a surgeon command interface 510. The system further comprises a command processor unit 520 and a display 530. The command processor unit 520 is coupled to both the surgeon command interface 510 and to the display 530. The surgeon command interface 510 is configured to be operable by a user such as a surgeon. The surgeon command interface 510 permits the user to enter commands to the surgical robotic system. The user can use the command interface to control the operation of the surgical robotic system, for example by controlling one or more robot arms and/or end effectors coupled to the robot arms. The command interface 510 comprises an input device 512. The input device 512 may, for example, be an input device as illustrated in figure 4 at 304. Only one input device is shown in figure 5, but more than one input device may be provided. Typically, two input devices 512 are provided, one for use by each of a user’s two hands.

In some examples, the input device can be a handheld controller for manipulation by a surgeon controlling the surgical robot. For instance, the input device can be communicatively coupled to a robot arm and instrument, whereby the position and operation of an end effector of the instrument, such as at a surgical site, can be controlled by the surgeon.

A second input device 514 is provided at the command interface 510. The second input device is, in some examples, of the same type as the input device 512. In other examples, the second input device 514 is a different type of device to the input device 512. For example the second input device 514 may comprise one or more of a voice interface, a gesture interface and a touch interface. Thus, the second input device 514 may be responsive to a voice command, a gesture and/or a touch received at the second input device. The input device 514 may, for example, be a device as illustrated in figure 4 at 306, 410, 420 or 430.

This arrangement permits a surgeon to use the second input device to augment a representation of the surgical site. For example, during a surgical procedure, a surgeon may use the input device to perform part of the surgical procedure. At a point in time selected by the surgeon, it may be desirable to augment that part of the representation of the surgical site at which the indicator, controlled by the input device, is located. For example, where a surgeon has just completed a stitch, the surgeon may wish to augment the representation of the surgical site at or near to the stitch. This can allow the surgeon to record, on the representation of the surgical site, the location of the stitch. This can enable the surgeon (and/or another person) to locate that location, i.e. the stitch, at a later time (for example later in the procedure or during post-procedure review). As the surgical procedure progresses, the part of the surgical site displayed on the display is likely to change. Augmenting the representation of the surgical site so as to record the location of the stitch permits the surgeon to determine the orientation or direction, and/or distance, of that location from the current location. This can be useful in helping the surgeon to appropriately orient the displayed representation, for example where the stitch is no longer displayed on the display. The provision of the second input device permits the augmentation to be added without requiring the surgeon to change the manner in which the input device is used to control the end effector. That is, the end effector need not be moved to add the augmentation. The surgeon could, for example, say ‘stitch’, and the second input device can detect the surgeon’s voice input, determine the command (here, ‘stitch’) and cause a signal to be generated to cause the representation of the surgical site to be augmented accordingly. Additionally or alternatively, the surgeon may perform a gesture for detection by an input device sensitive to gestures, such as a camera. The surgeon may touch a touch-responsive input device, for example the display screen on which the representation of the surgical site is displayed. The second input device need not be controlled by the same person controlling the input device. In some examples a surgeon will control the input device so as to perform a surgical procedure. A surgical assistant, or other member of operating room staff, may use the second input device. In some examples, the second input device may take a similar form to the input device. In some examples, one or both of the input device and the second input device can be used to navigate a menu of the surgical robotic system, for example a menu displayed on the display. The menu options may be pre-configured. The menu options may be pre-configured according to one or more of: user preference, type of surgical procedure, stage in the surgical procedure, type and/or number of end effectors coupled to the system, and so on.

The second input device 514 need not be provided at the same command interface 510 as the input device 512 in all examples. For instance, the input device may be at or associated with one user console, and the second input device may be at or associated with the same or a different user console. The second input device may be configured to generate a signal. The processor may be configured to receive the generated signal from the second input device and augment the representation of the surgical site or modify an augmentation accordingly.

The provision of a second input device 514 of a different type to the input device 512 advantageously permits the user of the command interface 510 to effect control of the robotic system more easily. For instance, where a user is controlling two manually operable input devices 512, the user is likely to need to let go of one of these input devices to be able to control a further manually operable input device. The user can advantageously effect control of the second input device 514 without needing to relinquish control of either of the two input devices 512. For example, where the second input device 514 comprises a voice interface, the user can speak a command aloud. This can be done whilst retaining a hold of the input device(s) 512.

As illustrated in figure 3, the command interface is coupled to a control unit 309 for effecting control of the robot arms and end effectors of the surgical robotic system. Referring again to figure 5, the command interface 510 is communicatively coupled to a command processor unit 520. The command processor unit 520 comprises a processor 521. The processor 521 is configured to communicate with the command interface, or controller, 510 and to be able to control augmentation of a representation of a surgical site. The processor 521 may be configured to perform image processing, such as image recognition. Additionally or alternatively, an image processor 522 may be provided. The image processor 522 may be configured to perform image processing such as image recognition. The processor 521 and/or the image processor 522 may be configured to perform edge detection, spectral analysis and so on.

The processor 521 and the optional image processor 522 have access to a memory 523. In the example illustrated in figure 5 the memory is provided at the command processor unit 520. In some examples the memory may be provided elsewhere, and/or an additional memory may be provided elsewhere. Providing the memory 523 locally to the command processor unit 520 may improve memory access times. Providing the memory 523, at least in part, remote from the command processor unit 520 may enable a larger memory to be used without requiring a large physical size of the command processor unit 520. Where at least a portion of the memory 523 is provided remote from the command processor unit 520, the remote portion of the memory 523 may couple to the command processor unit 520 by one or more of a wired and a wireless connection.

The memory may store programs for execution by the processor 521 and/or the image processor 522. The memory 523 may be used to store the results of processing, and optionally intermediate processing results. The memory 523 may store a representation of the surgical site, or at least a portion thereof. The memory 523 may store augmentations in respect of the representation of the surgical site. The augmentations may be stored as part of the representation of the surgical site, or separately therefrom. In some examples, the augmentation(s) may be stored at the memory 523 at the command processor unit, and the representation of the surgical site on which the stored augmentation(s) is based may be stored at a remote memory. In some examples one or more augmentation may be stored at the memory 523 at the command processor unit, and one or more augmentation and the representation of the surgical site on which the augmentations are based may be stored at a remote memory.

The command processor unit 520 may comprise calculation logic 525. The calculation logic may comprise distance calculation logic 526, area calculation logic 527, volume calculation logic 528 and/or user-defined calculation logic 529. The calculation logic is suitably configured to calculate one or more metric in dependence on at least one augmentation, as is described in more detail elsewhere herein.
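
As a non-limiting illustration of such calculation logic, the Python sketch below computes a distance between two augmentation positions and an approximate area of a region bounded by augmentation points; the assumption that augmentations carry 3D positions, and the function names, are made only for the purpose of the example.

```python
# Illustrative sketch: calculation logic operating on augmentation positions
# in 3D space, e.g. the distance between two tagged points or the approximate
# area of a region bounded by tagged points.
import numpy as np


def distance_between(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between two augmentation positions, each shaped (3,)."""
    return float(np.linalg.norm(b - a))


def polygon_area(points: np.ndarray) -> float:
    """Approximate area of a region bounded by augmentation points, shaped (N, 3)."""
    centroid = points.mean(axis=0)
    area = 0.0
    # Fan triangulation about the centroid; each triangle contributes half the
    # magnitude of the cross product of its two edge vectors.
    for i in range(len(points)):
        u = points[i] - centroid
        v = points[(i + 1) % len(points)] - centroid
        area += 0.5 * np.linalg.norm(np.cross(u, v))
    return area
```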

Specifying augmentations

The following describes examples of how an augmentation may be specified by a user.

As discussed, an indicator on the display is controllable by the controller. The indicator may comprise a pointer such as a mouse pointer. An indicator such as a mouse pointer is typically a virtual indicator, in that it is not present at the surgical site, but is added to the representation of the site, or overlaid on the representation when the representation is displayed on the display.

In some examples, the indicator may comprise a physical indicator. The physical indicator may be provided as part of the surgical robot. For instance, the physical indicator may be provided as part of the end effector. Since the end effector is viewable on the display, the end effector itself may be used as the physical indicator.

The physical indicator may comprise an indicator portion of an end effector. For example, where the end effector is a gripper tool that has a pair of jaws, the indicator may comprise the tip of the jaws in the closed position. The end effector may have an indicator mode in which the end effector can act as an indicator. The indicator mode may be entered where the jaws are completely closed, or closed past a pre-determined point. Additionally or alternatively, the user may be able to select, via for example a control at the controller, for instance a control at one or other of the input device and the second input device, between an indicator mode for the end effector and a non-indicator mode for the end effector. Suitably, the end effector remains controllable in the usual manner whether or not the indicator or non-indicator mode is selected, such that there need not be any disruption in the procedure being carried out. The jaws of the gripper (or, more generally, the configuration or operation of any other end effector) need not be in a particular configuration for the end effector to act as an indicator.

In some examples, the indicator may comprise a particular portion of the end effector. For instance, a tip of a jaw, a tip of a needle, and so on. For example, where the end effector is a gripper, the tip of the left-most jaw can act as the indicator. Here, ‘left-most jaw’ may be whichever jaw is to the left in the display as viewed by a user, or it may be a given jaw of the end effector irrespective of the orientation as viewed. The indicator portion of an end effector may be indicated as such, for example by a mark on the end effector itself (which may be a difference in colour, shape and/or configuration from another portion of the end effector) and/or virtually on the display. Indicating the indicator portion of the end effector virtually has the advantage that such indication can be changed in accordance with one or more of user preference, operating conditions, surgical procedure being undertaken, and so on.

The augmentation may be added to the representation of the surgical site at the location indicated by the indicator. For example, the augmentation may be added at the position at which the indicator is located. The indicator position may be the position on the display (i.e. in the two-dimensional screen space of the display). The indicator position may be a projection of the indicator on the display onto a feature in the three-dimensional representation.

The augmentation is suitably added in response to the output of an augmentation signal by the controller. The augmentation signal may be output by the input device.

For example, an augmentation may be added to a feature indicated by the indicator, or to the location on the feature indicated by the indicator. A user may control the indicator to be over a desired feature, such as an organ in the displayed representation. The user may cause the controller to output the augmentation signal. In response, the processor 521 may augment the representation of the surgical site, such as by adding an augmentation at the indicator location. The user may open a menu, or a menu may open automatically on addition of an augmentation. The user can navigate the menu using the controller, for example the input device or the second input device, so as to select a desired label for the augmentation. In other examples, the processor 521 or image processor 522 may determine the feature to which the augmentation has been added. This determination may be made by image recognition and/or image matching or other image analysis, for example based on a 3D model which may be derived at least in part from data derived from a scan such as an MRI scan. An appropriate label for the determined feature may then be added to the representation or otherwise associated with the augmentation. In some examples, the label may be the name of the feature, e.g. ‘artery’, ‘kidney’, and so on.

An augmentation may be added automatically by the system, for example by the processor 521. The user need not take any action for the controller to output the augmentation signal. In some examples, the controller may comprise or have access to a clock, and the augmentation signal can be output by the controller in dependence on the clock. For instance, augmentation signals may be output at a pre-selected time, or at a pre-selected frequency. This can permit data to be obtained about the time taken for different surgeons to reach a given point in a procedure, or for data to be obtained about the point in a procedure a surgeon reaches at a given time from starting the procedure. Such data can enhance post-procedure, or offline, analysis of procedures that have been carried out. Such augmentation can be done automatically, without needing user input. This approach enables the data to be obtained repeatably and efficiently.

An augmentation may be added automatically, or manually, at any desired stage in a procedure. In some examples, the system may monitor the procedure and compare the procedure being carried out with one or more previous procedures. Such a comparison may be performed continuously or periodically. A periodic comparison may use less processing power than a continuous comparison, and so may be preferred. The comparison may be performed by image processing of the representation of the surgical site in respect of the procedure being carried out and a representation of a corresponding site in respect of the one or more previous procedures. The representation of the corresponding site may be based on an average of models associated with the one or more previous procedures. Where action is occurring in the procedure, it may be preferred to perform the comparison at a relatively greater rate than when no action is occurring. Thus, movement at the site, for example of an end effector, can be taken into account in performing the comparison. The rate of comparison may be increased where the rate of movement is greater. A higher rate of comparison during periods of activity enables accurate comparisons to be made whilst saving processing power during periods of relative inactivity. The rate of comparison may be determined in dependence on robot kinematic data. For example, the rate of comparison may be determined in dependence on a velocity of a portion of an end effector, and/or on the operation of an end effector.
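
A hedged sketch of how the comparison rate might follow the robot kinematic data is given below; the function name, the specific intervals and the speed threshold are assumptions made for illustration rather than values taken from the specification.

    def comparison_interval_s(end_effector_speed_mm_s,
                              fast_interval=0.5, slow_interval=5.0,
                              speed_threshold_mm_s=1.0):
        """Return the interval between site comparisons.

        Compare more often while the end effector is moving (activity at the
        site) and less often when it is effectively stationary, saving
        processing during periods of relative inactivity."""
        if end_effector_speed_mm_s > speed_threshold_mm_s:
            return fast_interval
        return slow_interval

    # Example: an end effector moving at 4 mm/s triggers the faster rate.
    # comparison_interval_s(4.0)  -> 0.5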

An augmentation may be added where a determination is made that the current procedure deviates from the one or more previous procedures. The deviation may relate to the time, or relative time, at which stages of the procedure are performed. The deviation may relate to locations at which stages of the procedure are performed. A deviation may be determined to occur when the current procedure varies from an expected behaviour, such as one expected on the basis of the one or more previous procedures, by more than a threshold amount. The threshold amount may be a time period relative to the time at which an action is expected to be performed. For example, where actions are performed more than 5 seconds, 10 seconds, 30 seconds, 1 minute, 2 minutes or 5 minutes earlier or later than an expected time, it may be determined that a deviation has occurred. The threshold amount may be a distance from a predetermined location in the representation of the surgical site. For example, where actions are performed more than 1 mm, 2 mm, 5 mm, 10 mm or 20 mm from an expected location, it may be determined that a deviation has occurred. Such an approach allows a useful comparison to be made between procedures whilst permitting differences, such as physiological differences, to be taken into account.
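
The threshold-based deviation test might, purely as an illustrative sketch, be expressed as follows; the function name and the default thresholds (30 seconds and 5 mm, chosen from the example ranges above) are assumptions.

    def deviation_detected(actual_time_s, expected_time_s,
                           actual_pos, expected_pos,
                           time_threshold_s=30.0, distance_threshold_mm=5.0):
        """Flag a deviation from expected behaviour.

        A deviation is reported when a stage of the procedure happens more than
        `time_threshold_s` earlier or later than expected, or more than
        `distance_threshold_mm` from the expected location."""
        time_diff = abs(actual_time_s - expected_time_s)
        distance = sum((a - e) ** 2 for a, e in zip(actual_pos, expected_pos)) ** 0.5
        return time_diff > time_threshold_s or distance > distance_threshold_mm

    # Example: a suture placed 45 s late, 2 mm from the expected point -> deviation.
    # deviation_detected(345.0, 300.0, (10.0, 5.0, 2.0), (10.0, 3.0, 2.0))  -> True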

In some examples, the representation may be augmented where it is determined that an error condition has occurred. The error condition may comprise a robot arm and/or instrument fault. Threshold values for determining the occurrence of an error condition may be user-definable, and/or pre-set.

The augmentation signal may be generated in dependence on any desired source, for example a telematic data source. For example, the augmentation signal may be generated in dependence on determining one or more of an instrument change, change of a hand controller-arm association, change of electrosurgical mode, movement of the endoscope, re-indexing at least one hand controller, etc. The augmentation signal may be generated in dependence on a particular combination of any of the actions described herein occurring.

The augmentation signal may be indicative of a feature displayed on the display. The augmentation signal may comprise location data indicative of the location of the indicator with respect to the representation of the surgical site. For example, the augmentation signal may comprise data relating to the location of the indicator with respect to the 3D model. The controller, for example the input device, may be configured to provide the location data. The location data may be obtained in dependence on the displayed indicator.

The location data may comprise joint data, such as data associated with one or more joint of the robot arm and/or end effector. The joint data may comprise joint position data. For example the joint position data may comprise data relating to the positions, orientations and/or configurations of joints of a robot arm supporting an instrument, and/or data relating to the positions, orientations and/or configurations of joints of the instrument and/or end effector of the instrument. The joint data may comprise kinematic data. For example the kinematic data may comprise data relating to a change in position, orientation and/or configuration of one or more joint of the robot arm and instrument. The kinematic data may comprise data relating to a rate of change in position, orientation and/or configuration of one or more joint of the robot arm and instrument. The kinematic data may comprise initial position data of the one or more joints, from which the change in position occurs. The provision of such location data is particularly useful where an end effector acts as the indicator. In such cases, the 3D position of the indicator (i.e. the end effector) will be known. Thus the augmentation may be added to the representation in a highly accurate manner. This approach may also offer savings in terms of processing required, since the location data already exists in the system, and need not be recalculated.
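
As a toy illustration of deriving an indicator position from joint data rather than from image analysis, the following sketch computes the tip position of a simple planar chain from joint angles and link lengths; the real arm and instrument kinematics are of course more complex, and the function name and geometry are assumptions made only for this example.

    import math

    def planar_tip_position(joint_angles_rad, link_lengths_mm):
        """Forward kinematics for a simple planar chain (illustration only).

        The system already holds joint position data for the arm and
        instrument, so a tip (indicator) position can be derived from it
        rather than recalculated from images."""
        x = y = 0.0
        heading = 0.0
        for angle, length in zip(joint_angles_rad, link_lengths_mm):
            heading += angle
            x += length * math.cos(heading)
            y += length * math.sin(heading)
        return x, y

    # Example: two links of 100 mm, both joints at 30 degrees.
    # planar_tip_position([math.radians(30)] * 2, [100.0, 100.0])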

In cases where a more accurate identification of a feature in the displayed representation is desired, it is possible to tag a feature at more than one location. The tags may be input at spaced, for example laterally-spaced, positions on the feature. In some examples, it may be desirable to change the orientation or zoom of the representation between one tag and another tag. In some examples, it may be desirable to change viewing conditions between one tag and another tag. Viewing conditions may comprise whether the view is a 2D view or a 3D view, image contrast, image colouring and/or shading, and/or whether image enhancement is present. In some examples, a feature may be indicated by one or more tags with a patient in one position, and the patient subsequently moved (such movement can comprise moving one or more limb of the patient, and/or changing the orientation of an operating table on which the patient rests, and so on). Patient movement can, in some cases, cause parts of a surgical site to move relative to one another. For example, a change in orientation of the patient may cause organs to move due to differing gravitational effects. With the patient in a new position, the same feature may be tagged by one or more tags. Tagging features in this way can assist system robustness against patient movement or other similar effects. For example, tagging a feature at different patient positions may assist in enabling that feature to be tracked during a procedure, during which a patient may change positions. Image processing may be performed to identify whether the two (or more) tagged locations are part of the same feature, for example points spaced along an artery or points at different positions on a kidney. Where one point is on one feature (say on an artery) and another point is on a different feature (not on the artery), the system may prompt the user to tag one or more further points. A feature may be selected as the feature at which augmentation is desired in dependence on the relative number of tags at that feature compared to tags that are not at that feature. In this way, an inadvertent tag need not be removed or adjusted by a user, which may be time consuming, but rather one or more additional tag may be made (which may be quicker for the user) to indicate the feature to augment.
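
A minimal sketch of selecting the feature to augment from the relative number of tags is shown below, assuming image processing has already associated each tag with a feature; the names and the example data are hypothetical.

    from collections import Counter

    def feature_to_augment(tags):
        """Pick the feature with the most tags.

        `tags` maps each tag identifier to the feature it landed on (as
        determined by image processing). An inadvertent tag on the wrong
        feature is simply outvoted by additional tags, rather than having to
        be removed or adjusted."""
        counts = Counter(tags.values())
        feature, _ = counts.most_common(1)[0]
        return feature

    # Example: three tags on the artery, one stray tag elsewhere -> "artery".
    # feature_to_augment({"t1": "artery", "t2": "artery", "t3": "artery", "t4": "fat"})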

Once the representation has been augmented, the augmentation may be movable, for example automatically or by a user. The augmentation may be movable by dragging and dropping the augmentation on the display. The controller may be configured to permit such dragging and dropping, for example via a computer mouse cursor, the input device and/or the second input device.

As mentioned, an augmentation may be labelled by a user, using free text, and/or selecting from a set of labels. The labelling of an augmentation may occur automatically, or at least partly automatically. For instance, the representation may be processed by the processor 521 or the image processor 522 and image recognition techniques used to suggest what a particular feature is. In some cases, there may be difficulties in using such image recognition techniques alone. This may be because the imaged surgical site may not have high enough contrast, and/or it may not be well lit. Advantageously, the present techniques permit an enhancement in image processing. A comparison can be made between the representation of the site for the current procedure, and one or more representation of a previous procedure and/or a model. The representation of the previous procedure and/or the model suitably comprise at least one labelled or known feature. Where the feature in the current representation is determined to be the same or similar to that in the previous representation or in the model, the label of that feature in the previous representation or the model may be made available for selection by a user, or automatically applied to the augmentation of the feature in the current representation. A determination as to whether a label is made available to a user for selection or automatically applied may be made in dependence on a confidence factor associated with the label. This approach permits the system to 'learn' the identities of different features by building up a database in respect of similar types of procedures and/or models of procedures. Thus the automatic labelling, or suggestion of possible labels, can be made more accurate. This can save user time in correctly labelling features. A more accurate labelling of features can increase the accuracy of tasks based on those labels.
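
The confidence-based decision between automatically applying a label and merely offering it for selection might, as an illustrative sketch, look like the following; the threshold values and function name are assumptions.

    def apply_or_suggest_label(label, confidence,
                               auto_apply_threshold=0.9, suggest_threshold=0.5):
        """Decide how a label learned from previous procedures or models is used.

        High-confidence matches are applied automatically; lower-confidence
        matches are only offered to the user for selection; anything below the
        lower threshold is ignored."""
        if confidence >= auto_apply_threshold:
            return ("apply", label)
        if confidence >= suggest_threshold:
            return ("suggest", label)
        return ("ignore", None)

    # Example: a 0.95-confidence match to 'kidney' is applied automatically.
    # apply_or_suggest_label("kidney", 0.95)  -> ("apply", "kidney")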

The processor may be configured to monitor or track an augmentation and/or a feature of the representation such as an anatomical feature, for example an organ or a blood vessel. In some examples, the feature such as an anatomical feature can be determined automatically. For example the system can determine the feature by image processing. This monitoring or tracking of the augmentation and/or feature is useful where the representation of the surgical site changes. For example, a viewing position of the representation may change. This may be due to a lateral move of the viewing position, or to an angular change in viewing position, as may occur, for example, on a change in the location and/or orientation of the imaging device. Usefully, the system is configured such that the augmentation retains its position relative to the representation of the surgical site as the portion of the representation that is displayed on the display changes. For example, where an imaging device moves to the right, causing the representation based on the image output of that imaging device to move to the left on the display, the augmentation will also move to the left. The system is suitably configured so that the augmentation moves in registration with the representation of the surgical site, i.e. the system is configured so that the augmentation moves together with the representation. For instance, where an augmentation is added to a particular feature in the representation, the augmentation suitably moves together with movement of that feature. Such movement of the feature may occur on a pan and/or zoom change with respect to the representation of the surgical site. Movement of the feature may occur in other ways. For example, the surgical procedure may involve moving the feature. The feature, such as an organ, may move due to one or more of breathing, heartbeat and gravity (e.g. when a patient table is adjusted).
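
As a simplified sketch of keeping an augmentation in registration with the representation, the following assumes a basic pinhole projection with a purely translating imaging device (no rotation); the focal length, image centre and function name are illustrative assumptions.

    def project_to_screen(point_3d, camera_pos, focal_px=800.0,
                          image_centre=(640.0, 360.0)):
        """Project an augmentation anchored at a 3D site point into the view.

        With the camera looking along +z, moving the imaging device to the
        right moves the projected augmentation to the left on the display, so
        it stays registered with the feature it was added to."""
        x = point_3d[0] - camera_pos[0]
        y = point_3d[1] - camera_pos[1]
        z = point_3d[2] - camera_pos[2]
        u = image_centre[0] + focal_px * x / z
        v = image_centre[1] + focal_px * y / z
        return u, v

    # Example: translating the camera +10 mm in x shifts the augmentation left.
    # project_to_screen((0.0, 0.0, 100.0), (0.0, 0.0, 0.0))   -> (640.0, 360.0)
    # project_to_screen((0.0, 0.0, 100.0), (10.0, 0.0, 0.0))  -> (560.0, 360.0)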

As mentioned above, an augmentation may be added manually and/or automatically. An augmentation may be added in dependence on determining that an augmentation criterion is satisfied. The processor 521 may be configured to determine whether or not the augmentation criterion is satisfied.

The augmentation criterion may comprise determining whether a surgical instrument is attached to the system, detached from the system, whether the surgical instrument is operated by the system and/or whether there is a change in state of the surgical instrument. The augmentation criterion may comprise determining that there is an image recognition signal indicative of an image recognition match in the representation of the surgical site. For example, where a feature in the representation is determined by image recognition to be a particular feature, such as a kidney, it is useful for the occurrence of such an image recognition match to trigger the augmentation of the representation accordingly.

The augmentation criterion may comprise determining that there is an error signal indicative of an error associated with the surgical robotic system. The augmentation criterion may comprise determining that a particular time has been reached, or that a particular time has elapsed. More generally, it may be determined, for example by the processor 521, whether the augmentation criterion is satisfied in dependence on a time signal. For example, an augmentation criterion may comprise a particular action occurring at a particular time, or within a particular time frame. This may include a stitch or a series of stitches being made within a given time period.

The time signal may comprise a signal indicative of the time in the day. The time signal may comprise a signal indicative of the time elapsed since the start of a procedure, and/or a pre-defined point in the procedure. The pre-defined point can, for example, be the start or end of a cutting procedure, the start or end of a suturing procedure and/or the start or end of an electrocautery procedure. The time signal may comprise an indication of the duration of a procedure, for example one or more of a cutting, suturing and electrocautery procedure.
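
A few of the augmentation criteria described above might be evaluated along the following lines; the event fields and function name are hypothetical, and the real criteria may be combined differently.

    def augmentation_criterion_satisfied(event, elapsed_s=None, max_elapsed_s=None):
        """Evaluate example augmentation criteria.

        `event` is a dict describing what the system has observed: an
        instrument change, an error signal, an image recognition match, or a
        completed action to be checked against a time limit."""
        if event.get("instrument_changed") or event.get("error_signal"):
            return True
        if event.get("image_recognition_match"):      # e.g. a kidney recognised
            return True
        if elapsed_s is not None and max_elapsed_s is not None:
            # e.g. a stitch (or series of stitches) made within a given period
            return event.get("action_complete", False) and elapsed_s <= max_elapsed_s
        return False

    # Example: an image recognition match triggers augmentation.
    # augmentation_criterion_satisfied({"image_recognition_match": True})  -> True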

The augmentation criterion may comprise determining that there is a change of a user of the surgical robotic system. A signal indicating a change of user can be indicative of a surgeon controlling the input device pausing the procedure, for example by clutching out the input device so as to decouple the input device from active control of the end effector, and the procedure being resumed by another surgeon. The signal indicating a change of user can be indicative of a surgeon at one console taking over from a surgeon at another console. This could occur during a surgical procedure, or at a break in a surgical procedure.

Determining that there is a change of a user may be made in dependence on a signal such as a user-change signal. It may be determined that a user of the system has changed in dependence on a login or registration associated with the user of the system. It may be determined that a user of the system has changed in dependence on a recognition signal associated with a user. The recognition signal may be output from an imaging device or visual processor, which may be configured to perform facial recognition to identify a user, for example from a group of users, and/or to perform pattern recognition on, for instance, a 2D code such as a QR code. The recognition signal may be output from a wireless receiving device configured to detect a wireless signal. The wireless receiving device may be configured to detect WiFi (TM) signals, Bluetooth (TM) signals and/or radio frequency signals such as RFID signals. A device carried by a user that emits at least one of these types of signal can be used to distinguish between users, and optionally to identify a particular user.

Augmenting the representation 'offline'

An augmentation may be added during a procedure, as described in examples above. An augmentation may additionally or alternatively be added before or after a procedure. An augmentation may be added before a procedure is started, for example in a planning phase. Before the procedure is started there will not be a 'live' image feed from an imaging device at the surgical site. Preferably therefore, before a procedure is started, an augmentation is added to a model such as a 3D model of the site. Such a 3D model may be generated in one of several ways. For example, the 3D model may be derived from a scan such as an MRI scan. The 3D model may be derived from a stereotype, which may be selected according to one or more patient-related parameter. The 3D model may be derived from more than one stereotype. For example, the 3D model may be derived from a weighted combination of different stereotypes.
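
A minimal sketch of deriving a 3D model from a weighted combination of stereotypes is shown below, assuming the stereotype models share a common vertex ordering and that the weights (chosen according to patient-related parameters) sum to one; the function name and data layout are assumptions.

    def blend_stereotypes(stereotypes, weights):
        """Blend stereotype models into a single 3D model.

        Each stereotype is a list of (x, y, z) vertices in a common ordering;
        the result is the weighted average of corresponding vertices."""
        blended = []
        for corresponding in zip(*stereotypes):
            x = sum(w * v[0] for w, v in zip(weights, corresponding))
            y = sum(w * v[1] for w, v in zip(weights, corresponding))
            z = sum(w * v[2] for w, v in zip(weights, corresponding))
            blended.append((x, y, z))
        return blended

    # Example: a 70/30 blend of two single-vertex 'models'.
    # blend_stereotypes([[(0.0, 0.0, 0.0)], [(10.0, 0.0, 0.0)]], [0.7, 0.3])
    # -> [(3.0, 0.0, 0.0)]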

The controller is suitably configured to be able to navigate through the model so as to visualise the expected representation during the procedure. For example, this can be done by the input device and/or the second input device.

In the planning phase, augmenting the representation of the surgical site is useful so as to be able to identify possible areas of interest and/or the location of expected surgical interventions. In such a planning phase, the augmentations may be added by a surgeon or other medical practitioner. For example the augmentation may be added by a trainee surgeon or a nurse. Such augmentations added in the planning phase need not be especially accurate. It may be sufficient to indicate a general area or feature. Such augmentations can indicate an approximate location in the overall surgical site, for example by identifying key features and/or the direction of key features such as blood vessels, organs and/or bone structure. Such indications may reduce the time needed by a surgeon during a procedure. For instance, by assisting the surgeon to locate themselves within the surgical site, the augmentations can save the surgeon time and/or effort. This can help in reducing the overall time required for the procedure, which can have advantages for both the patient and the hospital, as discussed elsewhere herein.

During the procedure, augmentations may be added to highlight areas of interest. These can include points to which the surgeon may wish to return during the procedure. This could be because the surgeon has noticed something unexpected which warrants a more detailed check. Such augmentations may indicate a single point of interest (such as an organ), or multiple points of interest (for example multiple organs or multiple points on the same organ).

An augmentation may be added by the surgeon to indicate an area of higher risk or danger. For example, it may be desirable to highlight, by means of an augmentation, the location of a blood vessel such as an artery. This can assist the surgeon in avoiding the blood vessel, and so can reduce the risk of causing unintentional bleeding during a procedure.

Augmentations may be added to indicate way points in a procedure. Such way points may be useful in guiding a user (such as a trainee or less experienced surgeon). This approach can enable a user to more quickly retrace a traversed path, which, in turn, permits a reduction in the time needed to complete a procedure. Way points may be useful in guiding a trainee or less experienced surgeon either during a live procedure, or during a simulation of a live procedure.

An augmentation may be added at the location of a suture, a cut, an electrocautery operation and/or a grip point of tissue. In general, an augmentation may be added at a point at which an end effector is or becomes operational. Augmenting the representation of the surgical site in this way permits activity sites to be tracked. The 3D location of these activity sites can be determined, based on the augmentations. This can permit later analysis of the procedure, or of the particular activity during the procedure.

Augmentations may be added after a procedure has been completed. Such augmentations may be added on a recorded feed of the procedure and/or on a model constructed in dependence on data obtained from or during the procedure. The augmentations may indicate possible areas for improvement. Adding such augmentations after the procedure has been completed means that a review of the procedure may be carried out in a less stressful environment than during the procedure itself. A greater level of analysis may therefore be performed than might be possible during the procedure. In some examples, the augmentations can indicate an optimum location and/or spacing of suture sites. Such augmentations can raise awareness amongst users of potential issues which might occur in later procedures which are the same as or similar to the procedure being reviewed. Raising awareness in this way can reduce the number of undesirable incidents in later procedures, which can, for instance, increase the efficiency of these later procedures and/or may reduce the number of complications during a procedure.

Augmentations added to the representation of the surgical site can be used in several different ways. One example of the way in which an augmentation can be used is to help in orienting a surgeon during a procedure. Augmentations permit indications to be added to the representation of where in the site the surgeon is looking. For example, by labelling an organ such as the kidney, the surgeon will have a better understanding of the site, and so can control the end effectors and/or move around the site more easily. This will be discussed in more detail below.

Grouping augmentations

Augmentations may be added to one or more group of augmentations. A plurality of augmentations may be added to a particular group of augmentations. Augmentations can be grouped according to a characteristic common to those augmentations. For example, augmentations may be grouped according to one or more of:

• the user of the system at the point at which the augmentation is added,

• the user who adds the augmentation,

• the procedure being carried out,

• the type of procedure being carried out,

• the type of feature being augmented (for example organs, blood vessels, tissue and/or bone, damaged and/or diseased areas),

• the feature being augmented (for example a particular organ or blood vessel),

• a point at which action is desired (i.e. a “to do” list, which might include an incision point),

• time (e.g. all augmentations added in the last hour, or the last 30 minutes),

and so on.

Augmentations in different groups may be distinguishable on the display. For example, the system (e.g. the processor 521) may be configured to highlight augmentations in different groups differently. Augmentations may be highlighted by one or more of: being in a different colour; having a label with a different font and/or size; having a different outline; and flashing, or flashing at a different frequency.

The system is suitably configured to show or hide augmentations in dependence on a group of augmentations to which a particular augmentation belongs. This increases the ease with which a user is able to identify augmentations of a particular type, and so to take action in dependence on those augmentations.
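
Grouped display of augmentations might be sketched as follows, with a per-group style and a filter that shows only the groups currently selected; the group names, style fields and function name are illustrative assumptions.

    def visible_augmentations(augmentations, visible_groups, styles):
        """Filter augmentations to the groups currently shown, pairing each
        with the display style (colour, flashing, etc.) for its group."""
        default = {"colour": "white", "flashing": False}
        return [(a, styles.get(a["group"], default))
                for a in augmentations if a["group"] in visible_groups]

    # Example with hypothetical groups and styles:
    augs = [{"label": "artery", "group": "risk"},
            {"label": "suture site", "group": "to_do"}]
    styles = {"risk": {"colour": "red", "flashing": True},
              "to_do": {"colour": "green", "flashing": False}}
    # visible_augmentations(augs, {"risk"}, styles)
    # -> only the 'artery' augmentation, shown in flashing red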

The system, for example the processor 521, may be configured to perform calculations in dependence on one or more augmentation. Suitably the calculations may be performed automatically. For example, the system may automatically count the number of augmentations added, or the number of augmentations in a particular group of augmentations. For example, a group of augmentations may relate to tendrils extending along a feature, such as the stomach. The surgeon may need to move around the site to see all the tendrils, as some may be on sides of the stomach facing away from each other. The surgeon need not add augmentations in respect of all the tendrils in one go. The augmentations may be added at different stages in the procedure, and indeed even in more than one procedure. It can be difficult to correctly remember the number of tendrils in such a situation. More than one user may have added the augmentations. It is therefore useful if the system provides a count of the number of augmentations in, for example, the 'tendril' group of augmentations.

The system may be configured to determine a distance between a plurality of augmentations, or a plurality of augmentations in a group of augmentations. The distance between the plurality of augmentations may comprise a largest distance between the augmentations, for example the distance between the two augmentations that are furthest apart from one another (in the 2D or 3D space of the representation of the surgical site). The distance between the plurality of augmentations may comprise the smallest distance between the augmentations, for example the distance between the two augmentations that are closest to one another (in the 2D or 3D space of the representation of the surgical site). The distance between the plurality of augmentations may comprise an average (e.g. one or more of a mean, mode or median) of the distances between the plurality of augmentations, or a subset of the plurality of augmentations. The distance between the plurality of augmentations may comprise the distance between subsequent augmentations (i.e. for 3 augmentations, the total distance may be the sum of the distance between the first and second augmentations and the distance between the second and third augmentations). This approach permits a user to add augmentations along the length of a feature, and the system to then determine the length of that feature.
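
The distance calculations described above might be sketched as follows; the function names are hypothetical, and the example assumes at least two augmentation positions are available in the 2D or 3D space of the representation.

    from itertools import combinations
    import math

    def _dist(a, b):
        return math.dist(a, b)   # Euclidean distance between two positions

    def augmentation_distances(points):
        """Distances between a set of augmentation positions.

        Returns the largest and smallest pairwise distances, the mean pairwise
        distance, and the path length between subsequent augmentations (useful
        for estimating the length of a feature tagged along its length)."""
        pairwise = [_dist(a, b) for a, b in combinations(points, 2)]
        path = sum(_dist(points[i], points[i + 1]) for i in range(len(points) - 1))
        return {"max": max(pairwise), "min": min(pairwise),
                "mean": sum(pairwise) / len(pairwise), "path": path}

    # Example: three augmentations placed along a vessel.
    # augmentation_distances([(0, 0, 0), (10, 0, 0), (25, 0, 0)])
    # -> {'max': 25.0, 'min': 10.0, 'mean': 16.67, 'path': 25.0}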

The system may be configured to determine the orientation of a feature. For example, the system may be configured to determine the orientation of a feature in dependence on a line or lines joining two or more augmentations associated with that feature. The orientation may be determined with respect to a convenient frame of reference, for example a surgical table, a body cavity, another identified or selected feature at the surgical site, and so on.
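
As an illustrative sketch, the orientation of a feature relative to a reference axis (for example an axis of the surgical table frame) might be computed from two augmentations on that feature as follows; the function name and the choice of reference axis are assumptions.

    import math

    def feature_orientation_deg(aug_a, aug_b, reference_axis=(1.0, 0.0, 0.0)):
        """Angle between the line joining two augmentations on a feature and a
        chosen reference axis."""
        line = tuple(b - a for a, b in zip(aug_a, aug_b))
        dot = sum(l * r for l, r in zip(line, reference_axis))
        norm = math.hypot(*line) * math.hypot(*reference_axis)
        return math.degrees(math.acos(dot / norm))

    # Example: a feature tagged at (0, 0, 0) and (10, 10, 0) lies at 45 degrees
    # to the reference x axis.
    # feature_orientation_deg((0, 0, 0), (10, 10, 0))  -> 45.0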

The system may be configured to indicate a line between two or more augmentations.

CLAIMS

1. A surgical robotic system for augmenting a representation of at least a portion of a surgical site for aiding in orienting the representation, the system comprising:

a processor configured to:

receive an imaging device signal indicative of the location and orientation of an imaging device relative to a surgical site,

augment a representation of at least a portion of the surgical site in dependence on the received imaging device signal, the augmentation indicating an orientation of the representation of the surgical site,

receive a further imaging device signal indicative of an updated location and/or orientation of the imaging device,

determine a change in at least one of the location and orientation of the imaging device in dependence on the imaging device signal and the further imaging device signal, and

update the augmented representation in dependence on the determined change; and

a display configured to display at least part of the augmented representation.

2. A surgical robotic system according to claim 1, in which at least one of the received imaging device signal and the received further imaging device signal comprises kinematics data relating to the imaging device.

3. A surgical robotic system according to claim 1 or claim 2, in which the processor is configured to determine the augmentation in dependence on a feature in the representation of the surgical site.

4. A surgical robotic system according to any preceding claim, in which the processor is configured to determine the augmentation in dependence on augmentation data.

5. A surgical robotic system according to claim 4, in which the augmentation data comprises data associated with the representation.

6. A surgical robotic system according to claim 4 or claim 5, in which the augmentation data comprises data indicative of a feature in the representation.

7. A surgical robotic system according to claim 6, in which the data indicative of a feature in the representation is indicative of one or more feature group of a set of feature groups.

8. A surgical robotic system according to any of claims 4 to 7, in which the augmentation data comprises an augmentation signal indicative of user input relating to the representation.

9. A surgical robotic system according to claim 8, in which the system comprises a controller configured to generate the augmentation signal in response to user input at the controller.

10. A surgical robotic system according to any preceding claim, in which the processor is configured to determine the augmentation in dependence on image processing of at least a portion of the representation.

11. A surgical robotic system according to any preceding claim, in which the processor is configured to track a location of a feature in the representation, and update the augmented representation in dependence on the tracked location and a change in at least one of the imaging device orientation, the imaging device location, a field of view of the imaging device and a field of view of a displayed portion of the representation.

12. A surgical robotic system according to any preceding claim, further comprising an imaging device, whereby the imaging device is configured to image at least a portion of the surgical site and generate an image feed of the imaged portion.

13. A surgical robotic system according to claim 12, in which the processor is configured to determine the augmentation in dependence on image processing of at least a portion of the generated image feed.

14. A surgical robotic system according to any preceding claim, in which the representation is obtained in dependence on at least one of 3D model data of a 3D model of at least a portion of the surgical site and the generated image feed.

15. A surgical robotic system according to claim 14, in which the processor is configured to receive at least one of the 3D model data and the generated image feed.

16. A surgical robotic system according to any preceding claim, further comprising a memory coupled to the processor, the memory being configured to store at least one of the representation of the surgical site and the augmentation.

17. A method for augmenting a representation of at least a portion of a surgical site for aiding in orienting the representation, the method comprising:

receiving an imaging device signal indicative of the location and orientation of an imaging device relative to a surgical site,

augmenting a representation of at least a portion of the surgical site in dependence on the received imaging device signal, the augmentation indicating an orientation of the representation of the surgical site,

receiving a further imaging device signal indicative of an updated location and/or orientation of the imaging device,

determining a change in at least one of the location and orientation of the imaging device in dependence on the imaging device signal and the further imaging device signal,

updating the augmented representation in dependence on the determined change, and

displaying at least part of the augmented representation.

18. A method according to claim 17, comprising determining the augmentation in dependence on a feature in the representation of the surgical site.

19. A method according to claim 17 or claim 18, comprising determining the augmentation in dependence on a feature that is not present in a displayed portion of the representation.

20. A method according to any of claims 17 to 19, comprising receiving augmentation data, and determining the augmentation in dependence on the received augmentation data.

21. A method according to claim 20, in which the augmentation data comprises data indicative of a feature in the representation.

22. A method according to claim 21, in which the data indicative of a feature in the representation is indicative of one or more feature group of a set of feature groups.

23. A method according to any of claims 20 to 22, in which the augmentation data comprises an augmentation signal indicative of user input relating to the representation.

24. A method according to any of claims 17 to 23, comprising determining the augmentation in dependence on image processing of at least a portion of the representation.

25. A method according to any of claims 17 to 24, comprising tracking a location of a feature in the representation, and updating the augmented representation in dependence on the tracked location and a change in at least one of the imaging device orientation, the imaging device location, a field of view of the imaging device and a field of view of a displayed portion of the representation.

Documents

Application Documents

# Name Date
1 202528021912-STATEMENT OF UNDERTAKING (FORM 3) [11-03-2025(online)].pdf 2025-03-11
2 202528021912-REQUEST FOR EXAMINATION (FORM-18) [11-03-2025(online)].pdf 2025-03-11
3 202528021912-FORM 18 [11-03-2025(online)].pdf 2025-03-11
4 202528021912-FORM 1 [11-03-2025(online)].pdf 2025-03-11
5 202528021912-FIGURE OF ABSTRACT [11-03-2025(online)].pdf 2025-03-11
6 202528021912-DRAWINGS [11-03-2025(online)].pdf 2025-03-11
7 202528021912-DECLARATION OF INVENTORSHIP (FORM 5) [11-03-2025(online)].pdf 2025-03-11
8 202528021912-COMPLETE SPECIFICATION [11-03-2025(online)].pdf 2025-03-11
9 202528021912-FORM-26 [18-03-2025(online)].pdf 2025-03-18
10 202528021912-Proof of Right [20-03-2025(online)].pdf 2025-03-20
11 Abstract.jpg 2025-03-27
12 202528021912-FORM 3 [09-09-2025(online)].pdf 2025-09-09