Abstract: There are defined techniques (e.g., methods, systems...) for localization, such as depth map estimation. For example, a method (30, 350) for localizing, in a space containing at least one determined object (91, 101, 111, 141), an object element (93, 143, X) associated to a particular 2D representation element (XL) in a determined 2D image of the space, may comprise: deriving (351) a range or interval of candidate spatial positions (95, 105, 114b, 115, 135, 145, 361') for the imaged object element (93, 143, X) on the basis of predefined positional relationships (381a', 381b'); restricting (35, 36, 352) the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions (93a, 93a', 103a, 112', 137, 147, 362'), wherein restricting includes at least one of: limiting the range or interval of candidate spatial positions using at least one inclusive volume (96, 86, 106, 376, 426a, 426b) surrounding at least one determined object (91, 101); and limiting the range or interval of candidate spatial positions using at least one exclusive volume (279, 299a-299e, 375) surrounding non-admissible candidate spatial positions; and retrieving (37, 353), among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position (363') on the basis of similarity metrics.
1 Technical field
Examples here refer to techniques (methods, systems, etc.) for localizing object elements in an
imaged space.
For example, some techniques relate to the computation of admissible depth intervals for (e.g., semi-automatic) depth map estimation, e.g., based on 3D geometry primitives.
Examples may relate to 2D images obtained in multi-camera systems.
2 Techniques discussed here
Having multiple images of a scene makes it possible to compute depth or disparity values for each pixel of the image. Unfortunately, automatic depth estimation algorithms are error-prone and not able to provide an error-free depth map.
To correct those errors, the literature proposes a depth map refinement based on meshes, in which the distance of the mesh is an approximation of the admissible depth value. However, so far no clear description of how the admissible depth interval can be computed is available. Moreover, it is not
considered that the mesh only partially represents a scene, and specific methods are necessary to
avoid wrong depth map constraints.
The present examples propose, inter alia, methods for attaining this purpose. The present application
further describes methods to compute a set of admissible depth intervals (or more in general a range or interval of candidate spatial positions) from a given 3D geometry of a mesh, for example. Such a
set of depth intervals (or range or interval of candidate spatial positions) may disambiguate the
problem of correspondence detection and hence improve the overall resulting depth map quality.
Moreover, the present application further shows how inaccuracies in the 3D geometries can be compensated by so-called inclusive volumes to avoid wrong mesh constraints. In addition, inclusive volumes can also be used to cope with occluders to prevent wrong depth values.
In general, examples relate to a method for localizing, in a space (e.g., 3D space) containing at least
one determined object, an object element (e.g., the part of the surface of the object which is imaged in the 2D image) which is associated to a particular 2D representation element (e.g., a pixel) in a
determined 2D image of the space. The depth of the object element may therefore be obtained.
In examples, the imaged space may contain more than one object, such as a first object and a second object, and it may be requested to determine whether a pixel (or more in general a 2D
representation element) is to be associated to the first imaged object or the second imaged object. It is also possible to obtain the depth of the object element in the space: the depth for each object
element will be similar to the depths of the neighbouring elements of the same object.
Examples above and below may be based on multi-camera systems (e.g., stereoscopic systems or
light-field camera arrays, e.g. for virtual movie productions, virtual reality, etc.), in which each
different camera acquires a respective 2D image of the same space (with the same objects) from
different angles (more in general, on the basis of a predefined positional and/or geometrical relationship). By relying on the known predefined positional and/or geometrical relationship, it is possible to localize each pixel (or more in general each 2D representation element). For example, epi-polar geometry may be used.
It is also possible to reconstruct the shape of one or more object(s) placed within the imaged space, e.g., by localizing multiple pixels (or more in general 2D representation elements), to construct a complete depth map.
In general terms, the processing power expended by processing units for performing these methods is not negligible. Therefore, it is in general requested to reduce the required computational effort.
The following figures are discussed here:
Fig. 1 refers to a localization technique and in particular relates to the definition of depth.
Fig. 2 illustrates the epi-polar geometry, valid both for the prior art and for the present examples.
Fig. 3 shows a method according to an example (workflow for the interactive depth map
improvement).
Figs. 4-6 illustrate challenges in the technology, and in particular:
Fig. 4 illustrates a technique for approximating an object using a surface approximation.
Fig. 5 illustrates competing constraints due to approximate surface approximations;
Fig. 6 illustrates the challenge of occluding objects for determination of the admissible depth interval.
Figs. 7-38 show techniques according to the present examples, and in particular:
Fig. 7 shows admissible locations and depth values induced by a surface approximation;
Fig. 8 shows admissible locations and depth values induced by an inclusive volume;
Fig. 9 illustrates a use of inclusive volumes to bound the admissible depth values allowed by a surface approximation;
Fig. 10 illustrates a concept for coping with occluding objects;
Fig. 11 illustrates a concept for solving competing constraints;
Figs. 12 and 12a show planar surface approximations and their comparison with closed volume surface approximations;
Fig. 13 shows an example of restricted range of admissible candidate positions (ray) intersecting several inclusive volumes and one surface approximation;
Fig. 14 shows an example for derivation of the admissible depth ranges by means of a tolerance value;
Fig. 15 illustrates an aggregation of matching costs between two 2D images for computation of similarity metrics;
Fig. 16 shows an implementation with a slanted plane;
Fig. 17 shows the relation of 3D points (X, Y, Z) in a camera coordinate system and a corresponding pixel;
Fig. 18 shows an example of a generation of an inclusive volume by scaling a surface approximation relating to a scaling center;
Fig. 19 illustrates the creation of an inclusive volume from a planar surface approximation;
Fig. 20 shows an example of surface approximation represented by a triangle structure;
Fig. 21 shows an example of an initial surface approximation where all triangle elements have been shifted in the 3D space along their normal vector;
Fig. 22 shows an example of reconnected mesh elements;
Fig. 23 shows a technique for duplicating control points;
Fig. 24 illustrates the reconnection of the duplicated control points such that all control points originating from the same initial control point are directly connected by a mesh;
Fig. 25 shows an original triangle element intersected by another triangle element;
Fig. 26 shows a triangle structure decomposed into three new sub-triangle structures;
Fig. 27 shows an example of exclusive volumes;
Fig. 28 shows an example of a closed volume;
Fig. 29 shows an example of exclusive volumes;
Fig. 30 shows an example of planar exclusive volumes;
Fig. 31 shows an example of image-based rendering or displaying;
Fig. 32 shows an example of determination of the pixels causing the view-rendering (or displaying) artefacts;
Fig. 33 shows an example of epi-polar editing mode;
Fig. 34 illustrates a principle of the free epi-polar editing mode;
Fig. 35 shows a method according to an example;
Fig. 36 shows a system according to an example;
Fig. 37 shows an implementation according to an example (e.g., for exploiting multi-camera consistency);
Fig. 38 shows a system according to an example.
Fig. 39 shows a procedure according to an example.
Fig. 40 shows a procedure which may be avoided.
Figs. 41 and 42 show examples.
Figs. 43-47 show methods according to examples.
Fig. 48 shows an example.
3 Background
2D photos captured by a digital camera provide a faithful reproduction of a scene (e.g., one or more objects within a space). Unfortunately, this reproduction is valid for a single point of view. This is too limited for advanced applications, such as virtual movie productions or virtual reality. Instead, the latter require generation of novel views from the captured material.
Such creation of novel views is possible in different ways. The most straightforward approach is to create a mesh or a point cloud from the captured data [12][13]. Alternatively, depth image-based rendering or displaying can be applied.
In both cases it is necessary to compute a depth map per view point for the captured scene. A view point corresponds to one camera position from which the scene has been captured. A depth map assigns to each pixel of the considered captured image a depth value.
The depth value 12 is the projection of the object element distance 13 on the optical axis 15 of the camera 1 as depicted in Fig. 1. The object element 14 is to be understood as an (ideally 0-dimensional) element that is represented by a 2D representation element (which may be a pixel, even if it would ideally be 0-dimensional). The object element 14 may be an element of the surface of a solid, opaque object (of course, in case of transparent objects, the object element may be refracted by the transparent object according to the laws of optics).
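By way of illustration, the following minimal Python sketch (with hypothetical names; not part of the original disclosure) makes the distinction between the object element distance 13 and the depth value 12 concrete: in camera coordinates with the optical axis along z, the depth is the projection of the object element position onto that axis.

```python
import numpy as np

def depth_and_distance(point_cam):
    """Depth vs. distance of an object element given in camera coordinates.

    Assumes the optical axis (15) of the camera is the +z axis, so the
    depth value (12) is the projection of the object element distance (13)
    onto that axis, i.e. simply the z component of the position.
    """
    optical_axis = np.array([0.0, 0.0, 1.0])
    distance = float(np.linalg.norm(point_cam))   # object element distance (13)
    depth = float(point_cam @ optical_axis)       # depth value (12)
    return depth, distance

# An object element 2 m to the side and 4 m in front of the camera:
# its depth (4.0) is smaller than its distance (~4.47).
print(depth_and_distance(np.array([2.0, 0.0, 4.0])))
```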
In order to obtain the depth value 12 per pixel (or more in general per 2D representation element), there exist different methods:
• Usage of active depth sensing devices such as structured illumination, or LIDAR
• Computation of the depth from multiple images of the same scene
• A combination of both
While all of these methods have their merits, the computation of depth from multiple images excels by its low costs, its short capture times and the high-resolution depth maps. Unfortunately, it is not free of errors. If such an erroneous depth map is directly used in one of the applications mentioned before, the synthesis of novel views would result in artefacts.
It is hence necessary to devise methods by which artefacts in depth maps can be corrected in an intuitive and fast manner.
4 Problems encountered in the technical field
In several examples of the following we may be considering a scenario where a scene is
photographed - or captured or acquired or imaged - from multiple camera positions (in particular, at known positional/geometrical relationships between the different cameras). The cameras themselves can be photo cameras, video cameras, or other types of cameras (LIDAR, infrared, etc.).
Moreover, a single camera can be moved to several places, or multiple cameras can be used simultaneously. In the first case, static scenes (e.g., with non-movable object/s) can be captured, while in the latter case, also acquisitions of moving objects are supported.
The camera(s) can be arranged in an arbitrary manner (but, in examples, in known positional relationships with each other). In a simple case, two cameras may be arranged next to each other, with parallel optical axes. In a more advanced scenario, the camera positions are situated on a regular two-dimensional grid and all optical axes are parallel. In the most general case, the known camera positions are located arbitrarily in space, and the known optical axes can be oriented in any direction.
Having multiple images from a scene, we may derive a depth value for (or more in general localize) each pixel (or more in general 2D representation element) in each 2D image.
To this end, we can principally distinguish three approaches, namely the manual assignment of depth values, the automatic computation, and the semi-automatic approach. The manual assignment is the one extreme, where photos are manually converted into a 3D mesh [10], [15], [16]. Then the depth maps can be computed by rendering or displaying so-called depth or z-passes [14]. Though this approach allows full user control, it is very cumbersome to come to precise pixel-wise depth maps.
The other extreme is constituted by the fully automatic algorithms [12][13]. While they have the potential to compute a precise depth value per pixel, they are error-prone by nature. In other words, for some pixels the computed depth value is simply wrong.
Consequently, none of these approaches is completely satisfying. Methods are required where the precision of the automatic depth map computation can be combined with the flexibility and control of the manual approach. Moreover, the required method(s) must be such that they rely as much as possible on existing software tools for 2D and 3D image processing. This makes it possible to profit from the very powerful editing tools already available in the art, without needing to recreate everything from scratch.
To this end, reference [8] indicates the possibility of using a 3D mesh for refinement of depth values. While their method uses a mesh created for or within a previous frame, it is possible to deduce that, instead of using the mesh from a preceding frame, also a 3D mesh created for the current frame can be used. Then reference [8] advocates correcting possible depth map errors by limiting a stereo matching range based on the depth information available from the 3D mesh.
While such an approach hence entitles us to use existing 3D editing software for interactive correction and improvement of depth maps, its direct application is not possible. First of all, since our meshes shall be created in a manual manner, we need to prevent a user from having to remodel the complete scene for fixing a depth map error that is well located in a precise subpart of the image. To this end, we will require specific mesh types as explained in Section 9. Secondly, reference [8] doesn't explain how to precisely limit the search range of the underlying depth estimation algorithms. Consequently, we will propose a precise method for translating 3D meshes into admissible depth ranges.
5 Methods using similarity metrics
Depth computation from multiple images of a scene essentially requires the establishment of correspondences within the images. Such correspondences then permit computing the depth of an object by means of triangulation.
Fig. 2 depicts a corresponding example in form of two cameras (left 2D image 22 and right 2D image 23), whose optical centers (or entrance pupils or nodal points) are located in OL and OR respectively. They both take a picture of an object element X (e.g., the object element 14 of Fig. 1). This object element X is depicted in pixel XL in the left camera image 22.
Unfortunately, from the left image 22 only it is not possible to determine the depth of (or otherwise localize) the object element X. In fact, all object elements X, X1, X2, X3 would result in the same pixel (or otherwise 2D representation element) XL. All these object elements are situated on a line in the right camera, the so-called epi-polar line 21. Hence, identifying the same object element in both the left view 22 and the right view 23 permits computing the depth of the corresponding pixels. More in general, by identifying the object element X, associated in the left 2D image 22 to the 2D
representation element XL, in the right view 23, it is possible to localize the object element X.
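Once the same object element has been identified in both views, its localization reduces to a triangulation. The sketch below is a simplified midpoint triangulation, assuming known optical centers OL, OR and viewing-ray directions; all names are hypothetical and it is not a definitive implementation:

```python
import numpy as np

def triangulate_midpoint(o_l, d_l, o_r, d_r):
    """Localize an object element X from two viewing rays.

    o_l, o_r : optical centers O_L and O_R
    d_l, d_r : ray directions through the corresponding pixels X_L and X_R
    Returns the midpoint of the shortest segment connecting both rays
    (rays must not be parallel, otherwise the system is singular)."""
    # Minimize |(o_l + s*d_l) - (o_r + t*d_r)|^2 over the ray parameters s, t.
    a = np.array([[d_l @ d_l, -(d_l @ d_r)],
                  [d_l @ d_r, -(d_r @ d_r)]])
    b = np.array([(o_r - o_l) @ d_l, (o_r - o_l) @ d_r])
    s, t = np.linalg.solve(a, b)
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))

# Two cameras 1 m apart, both seeing a point at (0.5, 0, 5):
x = triangulate_midpoint(np.zeros(3), np.array([0.5, 0.0, 5.0]),
                         np.array([1.0, 0.0, 0.0]), np.array([-0.5, 0.0, 5.0]))
print(x)  # ~ [0.5, 0.0, 5.0]
```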
To identify such correspondences, there exists a huge number of different techniques. In general, these different techniques have in common that they compute for every possible correspondence some matching costs or similarity metrics. In other words, every pixel or 2D representation element (and possibly its neighbor pixels or 2D representation elements) on the epi-polar line 21 in the right view 23 is compared with the reference pixel XL (and possibly its neighbor pixels) in the left view 22. Such a comparison can for instance be done by computing a sum of absolute differences [11] or the Hamming distance of a census transform [11]. The remaining difference is then considered (in examples) as a matching cost or similarity metric, and larger costs indicate a worse match. Depth estimation hence comes back to choosing for every pixel a depth candidate such that the matching costs are minimized. This minimization can be performed independently for every pixel, or by performing a global optimization over the whole image.
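As a concrete, simplified instance of such matching costs, the following sketch computes sum-of-absolute-differences costs for a reference pixel against candidates along the epi-polar line of a rectified image pair, followed by a winner-takes-all minimization. It is a minimal stand-in for the cost computations of [11]; parameter names are assumptions.

```python
import numpy as np

def sad_costs(left, right, x, y, disparities, w=3):
    """SAD matching costs for left-image pixel (x, y) against candidate
    positions along the (horizontal) epi-polar line of a rectified right
    image. Larger cost indicates a worse match."""
    h = w // 2
    ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(np.float32)
    costs = []
    for d in disparities:
        xr = x - d                        # candidate on the epi-polar line
        if xr - h < 0 or xr + h + 1 > right.shape[1]:
            costs.append(np.inf)          # candidate outside the image
            continue
        cand = right[y - h:y + h + 1, xr - h:xr + h + 1].astype(np.float32)
        costs.append(float(np.abs(ref - cand).sum()))
    return np.array(costs)

# Winner-takes-all, independently per pixel:
# best_d = disparities[int(np.argmin(sad_costs(left, right, x, y, disparities)))]
```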
Unfortunately, from this description it is possible to grasp that the determination of correspondences is a problem which is difficult to solve. It can happen that there are several similar objects that are located on the epi-polar line of the right view shown in Fig. 2. As a consequence, a wrong correspondence might be chosen, leading to a wrong depth value, and hence to an artefact in virtual view synthesis. Just to give an example, if on the basis of the similarity metrics it is incorrectly concluded that reference pixel XL of the left image view 22 corresponds to the position of the object element X2, the object element X will be consequently incorrectly located in space.
In order to reduce such depth errors, the user must have the possibility to manipulate the depth values. One approach to do so is the so-called 2D to 3D conversion. In this case, the user can assign depth values to the different pixels in a tool assisted manner [1][2][3][4]. However, such depth maps are typically not consistent between multiple captured views and can thus not be applied to virtual view synthesis, because the latter requires a set of captured input images and consistent depth maps for high-quality occlusion free results.
Another class of methods consists in post filtering operations [5]. In this case the user marks erroneous regions in the depth map, combined with some additional information like whether a pixel belongs to a foreground, or a background region. Based on this information, depth errors are then eliminated by some filtering. While such an approach is straight-forward, it shows several drawbacks. First of all, it directly operates in 2D space, such that each depth map of each image needs to be corrected individually, which is a lot of work. Secondly, the correction is only indirect in form of filtering, such that a successful depth map correction cannot be guaranteed.
The third class of methods hence avoids filtering erroneous depth maps, but aims to directly improve the initial depth map. One way to do so is to simply limit the admissible depth values on a pixel level. Hence, instead of searching the whole epi-polar line 21 in Fig. 2 for correspondences, only a smaller part will be considered. This limits the probability of confusing correspondences, and hence leads to improved depth maps.
Such a concept is followed by [8]. It assumes a temporal video sequence, where the depth map should be computed for time instance t. Moreover, it is assumed that a 3D model is available for time instance t-1. This 3D model is then used as an approximation for the current pixel depths, allowing a reduction of the search space. Since the 3D model belongs to a different time instance than the frame for which the depth map should be improved, they need to align the 3D model with the current frame by performing a pose estimation. While this complicates the application, it is possible to grasp that by replacing the pose estimation by a function that simply returns the 3D model instead of changing its pose, this method is very close to the challenge defined in Section 4. Unfortunately, such a concept misses important properties that are necessary for using manually created meshes. Adding those methods is a subject of the present examples.
Reference [6] explicitly introduces a method for manual user interaction. Their algorithm applies a graph cut algorithm to minimize the global matching costs. These matching costs are composed of two parts: a data cost part, which defines how well the color of two corresponding pixels matches, and a smoothness cost that penalizes depth jumps between neighboring pixels with similar color. The user can influence the cost minimization by setting the depth values for some pixels or by requesting that the depth value for some pixels is the same as that in the previous frame. Due to the smoothness costs, these depth guides will also be propagated to the neighbor pixels. In addition, the user can provide an edge map, such that the smoothness costs are not applied at edges. Compared to this work, our approach is complementary. We do not describe how to precisely correct a depth map error for a specific depth map algorithm. Instead we show how to derive admissible depth ranges from user input provided in 3D space. These admissible depth ranges, which can be different for every pixel, can then be used in every depth map estimation algorithm to limit the possible depth candidates and hence to reduce the probability of a depth map error.
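To make the last point concrete: since the admissible ranges only constrain the candidate set, they can be combined with any cost-volume-based estimator. A minimal sketch (the array layout is an assumption for illustration) that disallows candidates outside per-pixel admissible depth intervals could look as follows:

```python
import numpy as np

def mask_cost_volume(cost_volume, depth_candidates, admissible):
    """Disallow depth candidates outside per-pixel admissible intervals.

    cost_volume      : (H, W, D) matching costs for D depth candidates
    depth_candidates : (D,) depth value of each candidate
    admissible       : (H, W, 2) per-pixel [d_min, d_max] interval
    """
    d = depth_candidates[None, None, :]                  # broadcast to (1, 1, D)
    inside = (d >= admissible[..., :1]) & (d <= admissible[..., 1:])
    return np.where(inside, cost_volume, np.inf)         # infinite cost = excluded

# Any estimator (winner-takes-all, graph cut, ...) can then minimize over the
# masked volume without ever selecting a disallowed candidate:
# depth_map = depth_candidates[np.argmin(masked, axis=2)]
```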
An alternative method for depth map improvement is presented in reference [7]. It allows a user to define a smooth region that should not contain a depth jump. This information is then used to improve the depth estimation. Compared to our method, this method is only indirect and can hence not guarantee an error-free depth map. Moreover, the smoothness constraints are defined in the 2D image domain, which makes it difficult to propagate this information into all captured camera views.
6 Contributions
At least some of the following contributions are present in examples:
• We provide a precise method that derives an admissible depth value range for relevant pixels of an image. By these means, it is possible to use meshes that describe a 3D object only approximately to generate a high-precision and high-quality depth map. In other words, instead of insisting that the 3D mesh exactly describes the position of the 3D object, we take into account that a user will only be able to provide a rough estimate. This estimate is then converted into an interval of admissible depth values (see the sketch after this list). Independent of the applied depth estimation algorithm, this reduces the search space for correspondence determination, and hence the probability of wrong depth values (Section 10).
• In case the user is only requested to provide approximate 3D mesh locations, meshes can be interpreted wrongly. We provide a method how this can be avoided (Section 9.3).
• The method supports the partial constraining of a scene. In other words, the user need not give depth constraints for all objects of a scene. Instead, a precise depth guide is necessary only for the most difficult objects. This explicitly includes scenarios where one object is occluded by another object. This requires specific mesh types that are not known in the literature and that may hence be essential parts of the present examples (Section 9).
• We show how the normal vectors known from the meshes can further simplify the depth map estimation by taking slanted surfaces into account. Our contribution may limit the admissible normal vectors and hence reduce the possible matching candidates, which again reduces the probability of wrong matches and hence yields higher-quality depth maps (Section 11).
• We introduce another constraint type, called exclusive volumes, which explicitly disallow some depth values. By these means, we can further reduce the possible depth candidates and hence reduce the probability of wrong depth values (Section 13).
• We give some improvements regarding how 3D geometry constraints can be created based on a single or on multiple 2D images (Section 15).
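As announced in the first item of this list, the following sketch illustrates how a rough user-provided geometry can be turned into an admissible interval: the surface approximation is inflated into an inclusive volume (here simplified to a bounding sphere obtained by scaling about the centroid, loosely in the spirit of Fig. 18), and its intersection with a viewing ray yields the restricted range. All names are hypothetical.

```python
import numpy as np

def inclusive_volume_interval(vertices, scale, origin, direction):
    """Admissible interval of ray parameters induced by an inclusive volume
    obtained by scaling a surface approximation about its centroid.

    Simplification: the scaled mesh is bounded by a sphere, so a ray/sphere
    intersection yields the interval."""
    center = vertices.mean(axis=0)                       # scaling center
    radius = scale * np.linalg.norm(vertices - center, axis=1).max()
    # Ray-sphere intersection: |origin + t*direction - center| = radius
    oc = origin - center
    a = direction @ direction
    b = 2.0 * (direction @ oc)
    c = oc @ oc - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                                      # ray misses: no admissible positions
    sq = np.sqrt(disc)
    t0, t1 = (-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)
    if t1 < 0.0:
        return None                                      # volume lies behind the camera
    return max(t0, 0.0), t1                              # restricted range along the ray
```

A full implementation would intersect the ray with the scaled mesh itself (e.g., per triangle) and combine several such intervals per ray, as in Fig. 13.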
According to an aspect, there is provided a method for localizing, in a space containing at least one determined object, an object element associated to a particular 2D representation element (XL) in a determined 2D image of the space, the method comprising:
deriving a range or interval of candidate spatial positions for the imaged object element on the basis of predefined positional relationships;
restricting the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions, wherein restricting includes at least one of:
limiting the range or interval of candidate spatial positions using at least one inclusive volume surrounding at least one determined object; and
limiting the range or interval of candidate spatial positions using at least one exclusive volume surrounding non-admissible candidate spatial positions; and
retrieving, among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position on the basis of similarity metrics.
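A compact sketch of these three steps (deriving, restricting, retrieving), with the ranges represented as intervals of ray parameters and the similarity metric abstracted into a cost function, might look as follows. It treats an absent inclusive constraint as non-restricting and is an illustration, not a definitive implementation:

```python
def localize(candidates, inclusive, exclusive, cost):
    """Derive-restrict-retrieve sketch.

    candidates : iterable of positions along the derived ray (deriving)
    inclusive  : list of (lo, hi) intervals from inclusive volumes
    exclusive  : list of (lo, hi) intervals from exclusive volumes
    cost       : similarity metric; lower is a better match (retrieving)
    """
    admissible = [t for t in candidates
                  if (not inclusive or any(lo <= t <= hi for lo, hi in inclusive))
                  and not any(lo <= t <= hi for lo, hi in exclusive)]
    if not admissible:
        return None                 # restricted range is void
    return min(admissible, key=cost)

# Example: candidates every 0.1 m, one inclusive volume around depth 3-4 m,
# and a matching cost whose minimum lies at 3.2 m.
depths = [i / 10 for i in range(10, 100)]
print(localize(depths, [(3.0, 4.0)], [], cost=lambda t: abs(t - 3.2)))  # 3.2
```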
In methods according to examples, in the space there may be contained at least one first determined object and one second determined object,
wherein restricting includes limiting the range or interval of candidate spatial positions to: at least one first restricted range or interval of admissible candidate spatial positions associated to the first determined object; and
at least one second restricted range or interval of admissible candidate spatial positions associated to the second determined object,
wherein restricting includes defining the at least one inclusive volume as a first inclusive volume surrounding the first determined object and/or a second inclusive volume surrounding the second determined object, to limit the at least one first and/or second range or interval of candidate spatial positions to at least one first and/or second restricted range or interval of admissible candidate spatial positions; and
wherein retrieving includes determining whether the particular 2D representation element is associated to the first determined object or is associated to the second determined object.
According to examples, determining whether the particular 2D representation element is associated to the first determined object or to the second determined object is performed on the basis of similarity metrics.
According to examples, determining whether the particular 2D representation element is associated to the first determined object or to the second determined object is performed on the basis of the observation that:
one of the at least one first and second restricted range or interval of admissible candidate spatial positions is void; and
the other of the at least one first and second restricted range or interval of admissible candidate spatial positions is not void, so as to determine that the particular 2D representation element is within the other of the at least one first and second restricted range of admissible candidate spatial positions.
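In other words, when exactly one of the two restricted ranges is void, the association can be decided without evaluating similarity metrics. A trivial sketch (restricted ranges modelled as possibly-empty lists; names are hypothetical):

```python
def associate(first_range, second_range):
    """Decide the association when exactly one restricted range is void.

    first_range / second_range: admissible candidate positions for the
    first / second determined object (empty list = void range)."""
    if first_range and not second_range:
        return "first"
    if second_range and not first_range:
        return "second"
    return None  # both void or both non-void: fall back to similarity metrics
```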
According to examples, restricting includes using information from a second camera or 2D image for determining whether the particular 2D representation element is associated to the first determined object or to the second determined object.
According to examples, the information from a second camera or 2D image includes a previously obtained localization of an object element contained in:
the at least one first restricted range or interval of admissible candidate spatial positions, so as to conclude that the object element is associated to the first object; or
the at least one second restricted range or interval of admissible candidate spatial positions, so as to conclude that the object element is associated to the second object.
In accordance with an aspect, there is provided a method comprising:
as a first operation, obtaining positional parameters associated to a second camera position and at least one inclusive volume;
as a second operation, performing a method according to any of the methods above and below for a particular 2D representation element for a first 2D image acquired at a first camera position, the method including:
analyzing, on the basis of the positional parameters obtained at the first operation, whether both the following conditions are met:
at least one candidate spatial position would occlude at least one inclusive volume in a second 2D image obtained or obtainable at the second camera position, and
the at least one candidate spatial position would not be occluded by at least one inclusive volume in the second 2D image,
so as, in case the two conditions are met:
to refrain from performing retrieving even if the at least one candidate spatial position was in the restricted range of admissible candidate spatial positions for the first 2D image and/or
to exclude the at least one candidate spatial position from the restricted range or interval of admissible candidate spatial positions for the first 2D image even if the at least one candidate spatial position was in the restricted range of admissible candidate spatial positions.
According to examples, the method may comprise:
as a first operation, obtaining positional parameters associated to a second camera position and at least one inclusive volume;
as a second operation, performing a method according to any of the methods above and below for a particular 2D representation element for a first 2D image acquired at a first camera position, the method including:
analyzing, on the basis of the positional parameters obtained at the first operation, whether at least one admissible candidate spatial position of the restricted range would be occluded by the at least one inclusive volume in a second 2D image obtained or obtainable at the second camera position, so as to maintain the admissible candidate spatial position in the restricted range.
According to examples, the method may include
as a first operation, localizing a plurality of 2D representation elements for a second 2D image,
as a second, subsequent operation, performing the deriving, the restricting and the retrieving of the method according to any of the methods above and below for determining a most appropriate candidate spatial position for the determined 2D representation element of a first determined 2D image, wherein the second 2D image and the first determined 2D image are acquired at spatial positions in predetermined positional relationship,
wherein the second operation further includes finding a 2D representation element in the second 2D image, previously processed in the first operation, which corresponds to a candidate spatial position of the first determined 2D representation element of the first determined 2D image, so as to further restrict, in the second operation, the range or interval of admissible candidate spatial positions and/or to obtain similarity metrics on the first determined 2D
representation element.
According to examples, the second operation may be such that, at the observation that the previously obtained localized position for the 2D representation element in the second 2D image would be occluded to the second 2D image by the candidate spatial position of the first determined 2D representation element considered in the second operation:
further restricting the restricted range or interval of admissible candidate spatial positions so as to exclude the candidate spatial position for the determined 2D representation element of the first determined 2D image from the restricted range or interval of admissible candidate spatial positions.
According to examples, at the observation that the localized position of the 2D representation element in the second 2D image corresponds to the first determined 2D representation element: restricting, for the determined 2D representation element of the first determined 2D image, the range or interval of admissible candidate spatial positions so as to exclude, from the restricted range or interval of admissible candidate spatial positions, positions more distant than the localized position.
According to examples a method may further comprise, at the observation that the localized position of the 2D representation element in the second 2D image does not correspond to the localized position of the first determined 2D representation element:
invalidating the most appropriate candidate spatial position for the determined 2D representation element of the first determined 2D image as obtained in the second operation.
According to examples, the localized position of the 2D representation element in the second 2D image corresponds to the first determined 2D representation element when the distance of the localized position is within a maximum predetermined tolerance distance to one of the candidate spatial positions of the first determined 2D representation element.
According to examples, the method may further comprise, when finding a 2D representation element in the second 2D image, analysing a confidence or reliability value of the localization of the 2D representation element in the second 2D image, and using it only in case of the confidence or reliability value being above a predetermined confidence threshold, or the unreliability value being below a predetermined threshold.
According to examples, the confidence value may be at least partially based on the distance between the localized position and the camera position, and is increased for a closer distance.
According to examples, the confidence value is at least partially based on the number of objects or inclusive volumes or restricted ranges of admissible spatial positions, so as to increase the confidence value if, in the range or interval of admissible spatial candidate positions, there are found a fewer number of objects or inclusive volumes or restricted ranges of admissible spatial positions.
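One way to read these consistency and confidence checks together is sketched below: a candidate position of the first image is kept unless it would occlude a point that was already localized in the second image with sufficient confidence. The helper `project_to_second`, the dictionary-based bookkeeping, and the thresholds are all assumptions for illustration.

```python
def candidate_is_consistent(candidate, project_to_second, localized, confidence,
                            conf_threshold=0.8, tol=0.05):
    """Keep a candidate of the first image unless it would occlude a
    reliably localized element of the second image.

    project_to_second : maps a 3D candidate to (pixel, depth) in the second view
    localized         : dict pixel -> previously localized depth in the second view
    confidence        : dict pixel -> confidence of that localization
    """
    pixel, depth_in_second = project_to_second(candidate)
    prev = localized.get(pixel)
    if prev is None or confidence.get(pixel, 0.0) < conf_threshold:
        return True                       # no reliable information: keep candidate
    return depth_in_second >= prev - tol  # False: candidate would occlude the point
```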
According to examples, restricting includes defining at least one surface approximation, so as to limit the at least one range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions.
According to examples, defining includes defining at least one surface approximation and one tolerance interval, so as to limit the at least one range or interval of candidate spatial positions to a restricted range or interval of candidate spatial positions defined by the tolerance interval, wherein the tolerance interval has:
a distal extremity defined by the at least one surface approximation; and
a proximal extremity defined on the basis of the tolerance interval; and retrieving, among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position on the basis of similarity metrics.
According to examples, a method may be provided for localizing, in a space containing at least one determined object, an object element associated to a particular 2D representation element in a 2D image of the space, the method comprising:
deriving a range or interval of candidate spatial positions for the imaged object element on the basis of predefined positional relationships;
restricting the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions, wherein restricting includes:
defining at least one surface approximation and one tolerance interval, so as to limit the at least one range or interval of candidate spatial positions to a restricted range or interval of candidate spatial positions defined by the tolerance interval, wherein the tolerance interval has:
a distal extremity defined by the at least one surface approximation; and a proximal extremity defined on the basis of the tolerance interval; and retrieving, among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position on the basis of similarity metrics.
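Under one plausible reading of this tolerance-interval construction (the exact derivation of the depth tolerance is given with the claims below; the clipping angle and all names here are assumptions), the restricted interval can be computed as in the following sketch: the distal extremity sits at the ray/surface intersection and the proximal extremity is pulled towards the camera by a depth tolerance derived from a tolerance value t0.

```python
import numpy as np

def tolerance_interval(d_surface, t0, normal, axis, phi_max=np.deg2rad(75.0)):
    """Restricted depth interval from a surface approximation and a tolerance.

    d_surface : depth of the intersection with the surface approximation
                (distal extremity)
    t0        : predetermined tolerance value, measured along the normal
    normal    : normal vector of the surface approximation at the intersection
    axis      : optical axis of the considered camera
    phi_max   : clipping of the angle between normal and axis, so that the
                interval cannot grow without bound for grazing surfaces
    """
    cos_phi = abs(normal @ axis) / (np.linalg.norm(normal) * np.linalg.norm(axis))
    phi = min(float(np.arccos(np.clip(cos_phi, -1.0, 1.0))), phi_max)
    delta_d = t0 / np.cos(phi)             # depth tolerance
    return d_surface - delta_d, d_surface  # (proximal, distal) extremities

# Surface seen head-on: the interval is simply [d_surface - t0, d_surface].
print(tolerance_interval(5.0, 0.1, np.array([0, 0, -1.0]), np.array([0, 0, 1.0])))
```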
According to examples, a method may be reiterated by using an increased tolerance interval, so as to increase the probability of containing the object element.
According to examples, a method may be reiterated by using a reduced tolerance interval, so as to reduce the probability of containing a different object element.
According to examples, restricting includes defining a tolerance value for defining the tolerance interval.
Claims
1. A method (30, 350) for localizing, in a space containing at least one determined object (91, 101, 111, 141), an object element (93, 143, X) associated to a particular 2D representation element (XL) in a determined 2D image of the space, the method comprising:
deriving (351) a range or interval of candidate spatial positions (95, 105, 114b, 115, 135, 145, 361') for the imaged object element (93, 143, X) on the basis of predefined positional relationships (381a', 381b');
restricting (35, 36, 352) the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions (93a, 93a', 103a, 112', 137, 147, 362'), wherein restricting includes at least one of:
limiting the range or interval of candidate spatial positions using at least one inclusive volume (96, 86, 106, 376, 426a, 426b) surrounding at least one determined object (91, 101); and
limiting the range or interval of candidate spatial positions using at least one exclusive volume (279, 299a-299e, 375) surrounding non-admissible candidate spatial positions; and
retrieving (37, 353), among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position (363') on the basis of similarity metrics.
2. The method of claim 1, wherein in the space there are contained at least one first determined object (91) and one second determined object (101, 111),
wherein restricting (35, 36, 352) includes limiting the range or interval of candidate spatial positions (95, 115) to:
at least one first restricted range or interval of admissible candidate spatial positions (93a', 96a) associated to the first determined object (91); and
at least one second restricted range or interval of admissible candidate spatial positions (103a, 112') associated to the second determined object (101, 111),
wherein restricting (352) includes defining the at least one inclusive volume (96) as a first inclusive volume surrounding the first determined object (91) and/or a second inclusive volume (106) surrounding the second determined object (101), to limit the at least one first and/or second range or interval of candidate spatial positions to at least one first and/or second restricted range or interval of admissible candidate spatial positions; and
wherein retrieving (353) includes determining whether the particular 2D representation element (93) is associated to the first determined object (91) or is associated to the second determined object (101, 111).
3. The method of claim 2, wherein determining whether the particular 2D representation element (XL) is associated to the first determined object (91) or to the second determined object (101) is performed on the basis of similarity metrics.
4. The method of claim 2 or 3, wherein determining whether the particular 2D representation element is associated to the first determined object (91) or to the second determined object (101) is performed on the basis of the observation that:
one of the at least one first and second restricted range or interval of admissible candidate spatial positions is void; and
the other (93a) of the at least one first and second restricted range or interval of admissible candidate spatial positions is not void, so as to determine that the particular 2D representation element (93) is within the other (93a) of the at least one first and second restricted range of admissible candidate spatial positions.
5. The method of any of claims 2-4, wherein restricting (352) includes using information from a second camera (114) or 2D image for determining whether the particular 2D representation element (93) is associated to the first determined object (91) or to the second determined object (101, 111).
6. The method of claim 5, wherein the information from a second camera (114) or 2D image includes a previously obtained localization of an object element (93) contained in:
the at least one first restricted range or interval of admissible candidate spatial positions (96a), so as to conclude that the object element (93) is associated to the first object (91); or
the at least one second restricted range or interval of admissible candidate spatial positions (112'), so as to conclude that the object element (93) is associated to the second object (111).
7. A method (350, 450) comprising:
as a first operation (451), obtaining positional parameters associated to a second camera position (424b) and at least one inclusive volume (424a, 424b);
as a second operation (452), performing a method (350) according to any of the preceding claims for a particular 2D representation element for a first 2D image acquired at a first camera position, the method including:
analyzing (457a, 457b), on the basis of the positional parameters obtained at the first operation (451), whether both the following conditions are met:
(457b) at least one candidate spatial position (425d") would occlude at least one inclusive volume (426a) in a second 2D image obtained or obtainable at the second camera position, and
(457a) the at least one candidate spatial position (425d") would not be occluded (457a") by at least one inclusive volume (426a) in the second 2D image, so as, in case the two conditions are met (457a", 457b'):
to refrain (457b', 457c) from performing retrieving (353, 458) even if the at least one candidate spatial position (425d") was in the restricted range of admissible candidate spatial positions for the first 2D image and/or
to exclude (457b', 457c) the at least one candidate spatial position (425d") from the restricted range or interval of admissible candidate spatial positions for the first 2D image even if the at least one candidate spatial position (425d") was in the restricted range of admissible candidate spatial positions.
8. A method (350, 450) comprising:
as a first operation (451), obtaining positional parameters associated to a second camera position (424b) and at least one inclusive volume (424a, 424b);
as a second operation (452), performing a method (350) according to any of the preceding claims for a particular 2D representation element for a first 2D image acquired at a first camera position, the method including:
analyzing (457a), on the basis of the positional parameters obtained at the first operation (451), whether at least one admissible candidate spatial position (425e"') of the restricted range would be occluded by the at least one inclusive volume (426b) in a second 2D image obtained or obtainable at the second camera position, so as to maintain the admissible candidate spatial position (425e"') in the restricted range.
9. A method (430) including
as a first operation (431), localizing a plurality of 2D representation elements for a second 2D image,
as a second, subsequent operation (432), performing the deriving, the restricting and the retrieving of the method (350) according to any of the preceding claims for determining a most appropriate candidate spatial position for the determined 2D representation element ((x0, y0)) of a first determined 2D image, wherein the second 2D image and the first determined 2D image are acquired at spatial positions in predetermined positional relationship,
wherein the second operation (432) further includes finding (435) a 2D representation element ((x', y')) in the second 2D image, previously processed in the first operation (431), which corresponds to a candidate spatial position (A') of the first determined 2D representation element ((x0, y0)) of the first determined 2D image,
so as to further restrict (436), in the second operation (432), the range or interval of admissible candidate spatial positions and/or to obtain similarity metrics on the first determined 2D representation element ((x0, y0)).
10. The method of claim 9, wherein the second operation (432) is such that, at the observation that the previously obtained localized position for the 2D representation element ((x',y')) in the second 2D image would be occluded to the second 2D image by the candidate spatial position (A') of the first determined 2D representation element ((x0, y0)) considered in the second operation:
further restricting (436) the restricted range or interval of admissible candidate spatial positions so as to exclude the candidate spatial position (A') for the determined 2D representation element ((x0, y0)) of the first determined 2D image from the restricted range or interval of admissible candidate spatial positions.
11. The method of claim 9 or 10, wherein, at the observation that the localized position (93) of the 2D representation element ((x',y')) in the second 2D image corresponds to the first determined 2D representation element ((x0, y0)):
restricting, for the determined 2D representation element ((x0, y0)) of the first determined 2D image, the range or interval of admissible candidate spatial positions so as to exclude, from the restricted range or interval of admissible candidate spatial positions (96a, 112'), positions more distant than the localized position (93).
12. The method of claim 9 or 10 or 11, further comprising, at the observation that the localized position (93) of the 2D representation element ((x',y')) in the second 2D image does not correspond to the localized position (112') of the first determined 2D representation element ((x0, y0)):
invalidating the most appropriate candidate spatial position (93) for the determined 2D representation element ((x0, y0)) of the first determined 2D image as obtained in the second operation.
13. The method of any of claims 9-12, wherein the localized position (93) of the 2D
representation element ((x',y')) in the second 2D image corresponds to the first determined 2D representation element ((x0, y0)) when the distance of the localized position (93) is within a maximum predetermined tolerance distance to one of the candidate spatial positions of the first determined 2D representation element ((x0, y0)).
14. The method of any of claims 9-13, further comprising, when finding a 2D representation element ((x',y')) in the second 2D image, analysing a confidence or reliability value of the localization of the 2D representation element ((x',y')) in the second 2D image, and using it only in case of the confidence or reliability value being above a predetermined confidence threshold, or the unreliability value being below a predetermined threshold.
15. The method of claim 14, wherein the confidence value is at least partially based on the distance between the localized position and the camera position, and is increased for a closer distance.
16. The method of claim 15, wherein the confidence value is at least partially based on the number of objects (91, 111) or inclusive volumes (96) or restricted ranges of admissible spatial positions (96b), so as to increase the confidence value if, in the range or interval of admissible spatial candidate positions, there are found a fewer number of objects (91, 111) or inclusive volumes (96) or restricted ranges of admissible spatial positions (96b).
17. The method of any of the preceding claims, wherein restricting (352) includes defining at least one surface approximation (92, 132, 142), so as to limit the at least one range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions.
18. The method of claim 17, wherein defining includes defining at least one surface
approximation (142) and one tolerance interval (147), so as to limit the at least one range or interval of candidate spatial positions to a restricted range or interval of candidate spatial positions defined by the tolerance interval, wherein the tolerance interval has:
a distal extremity (143"') defined by the at least one surface approximation; and a proximal extremity (147') defined on the basis of the tolerance interval; and retrieving, among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position (143) on the basis of similarity metrics.
19. A method for localizing, in a space containing at least one determined object (141), an object element (143) associated to a particular 2D representation element in a 2D image of the space, the method comprising:
deriving a range or interval of candidate spatial positions (145) for the imaged object element (143) on the basis of predefined positional relationships;
restricting the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions (147), wherein restricting includes:
defining at least one surface approximation (142) and one tolerance interval (147), so as to limit the at least one range or interval of candidate spatial positions to a restricted range or interval of candidate spatial positions defined by the tolerance interval, wherein the tolerance interval has:
a distal extremity (143'") defined by the at least one surface approximation; and a proximal extremity (147') defined on the basis of the tolerance interval; and retrieving, among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position (143) on the basis of similarity metrics.
20. The method of claim 18 or 19, reiterated by using an increased tolerance interval, so as to increase the probability of containing the object element (143).
21. The method of claim 18 or 19, reiterated by using a reduced tolerance interval, so as to reduce the probability of containing a different object element.
22. The method of any of claims 18-21, wherein restricting includes defining a tolerance value (t0) for defining the tolerance interval (147).
23. The method of any of claims 18-22, wherein restricting includes defining a tolerance interval value Δd obtained from the at least one surface approximation (142) based on the normal vector n of the surface approximation (142) in the point (143'") where the interval of candidate spatial positions (145) intersects with the surface approximation (142) and on a vector d that defines the optical axis of the determined camera or 2D image.
24. The method of claim 23, wherein at least part of the tolerance interval value Δd is defined from the at least one surface approximation (142) on the basis of

Δd = t0 / cos(Φmax(∠(n, d)))

where t0 is a predetermined tolerance value, n is the normal vector of the surface approximation (142) in the point (143"') where the interval of candidate spatial positions (145) intersects with the surface approximation (142), vector d defines the optical axis of the considered camera, and Φmax clips the angle between n and d.
25. The method of claim 23 or 24, wherein retrieving includes:
considering a normal vector (n) of the surface approximation (142) located at the intersection (143"') between the surface approximation (142) and the range or interval of candidate spatial positions (145);
retrieving, among the admissible candidate spatial positions of the restricted range or interval (149), a most appropriate candidate spatial position (143) on the basis of similarity metrics, wherein retrieving includes retrieving, among the admissible candidate spatial positions of the restricted range or interval (149) and on the basis of the normal vector n, a most appropriate candidate spatial position on the basis of similarity metrics involving the normal vector n.
26. A method for localizing, in a space containing at least one determined object, an object element (143) associated to a particular 2D representation element in a determined 2D image of the space, the method comprising:
deriving a range or interval of candidate spatial positions (145) for the imaged object element (143) on the basis of predefined positional relationships;
restricting the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions (149), wherein restricting includes defining at least one surface approximation (142), so as to limit the at least one range or interval of candidate spatial positions (145) to a restricted range or interval of candidate spatial positions (149);
considering a normal vector (n) of the surface approximation (142) located at the intersection between the surface approximation (142) and the range or interval of candidate spatial positions (145);
retrieving, among the admissible candidate spatial positions of the restricted range or interval (149), a most appropriate candidate spatial position (143) on the basis of similarity metrics, wherein retrieving includes retrieving, among the admissible candidate spatial positions of the restricted range or interval (149) and on the basis of the normal vector n, a most appropriate candidate spatial position on the basis of similarity metrics involving the normal vector n.
27. The method of claim 25 or 26, wherein
retrieving comprises processing similarity metrics (c_sum) for at least one candidate spatial position (d) for the particular 2D representation element ((x0, y0)),
wherein processing involves further 2D representation elements ((x, y)) within a particular neighbourhood (N(x0, y0)) of the particular 2D representation element ((x0, y0)),
wherein processing includes obtaining a vector n among a plurality of vectors within a predetermined range defined from the normal vector, to derive a candidate spatial position (D), associated to vector n, for each of the other 2D representation elements ((x, y)), under the assumption of a planar surface of the object in the object element, wherein the candidate spatial position (D) is used to determine the contribution of each of the 2D representation elements ((x, y)) in the neighbourhood (N(x0, y0)) to the similarity metrics (c_sum).
28. The method of any of claims 25-27, wherein retrieving is based on a relationship of the type:

D(x, y) = d · (nᵀ K⁻¹ (x0, y0, 1)ᵀ) / (nᵀ K⁻¹ (x, y, 1)ᵀ)

where (x0, y0) is the particular 2D representation element, (x, y) are elements in a neighbourhood of (x0, y0), K is the intrinsic camera matrix, d is a depth candidate representing the candidate spatial position, and D(x, y) is a function that computes a depth candidate for the 2D representation element (x, y) based on the depth candidate d for the particular 2D representation element (x0, y0) under the assumption of a planar surface of the object in the object element.
29. The method of claim 28, wherein retrieving is based on evaluating a similarity metric c_sum(d) of the type

c_sum(d) = Σ_{(x, y) ∈ N(x0, y0)} c(x, y, D(x, y))

where Σ_{(x, y) ∈ N(x0, y0)} represents a sum or a general aggregation function over the neighbourhood N(x0, y0).
30. The method of any of claims 25-29, wherein restricting includes obtaining a vector n normal to the at least one determined object (141) at the intersection (143) with the range or interval of candidate spatial positions (145) among a range or interval of admissible vectors within a maximum inclination angle relative to the normal vector of the surface approximation (142).
31. The method of any of claims 25-30, wherein restricting includes obtaining the vector n normal to the at least one determined object (143) according to

n = (sin Θ cos Φ, sin Θ sin Φ, cos Θ)ᵀ, with 0 ≤ Θ ≤ Θmax,

where Θ is the inclination angle around the normal vector, Θmax is a predefined maximum inclination angle, Φ is the azimuth angle, and where n is interpreted relative to an orthonormal coordinate system whose third axis (z) is parallel to the normal vector of the surface approximation and whose other two axes (x, y) are orthogonal to it.
32. The method of any of claims 25-31, wherein the normal vector (n) is different for different restricted ranges of admissible candidate spatial positions (96a, 112') associated to the same range of candidate spatial positions (115).
33. The method of any of claims 17-32, wherein restricting is only applied when the normal vector of the surface approximation (122a, 142) in the intersection (143"') between the surface approximation (142) and the range or interval of candidate spatial positions (145) has a predetermined direction within a particular range of directions.
34. The method of claim 33, wherein the particular range of directions is computed based on the direction of the range or interval of candidate spatial positions (145) related to the determined 2D image.
35. The method of claim 34, wherein restricting is only applied when the dot product between the normal vector of the surface approximation (122a, 142) in the intersection (143'''') between
the surface approximation (142) and the range or interval of candidate spatial positions (145) and the vector describing the path of the candidate spatial positions from the camera has a predefined sign.
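Claims 33-35 gate the restriction on the orientation of the surface approximation; below is a minimal sketch of the dot-product sign test of claim 35, assuming the ray direction points from the camera towards the candidate positions.

```python
import numpy as np

def restriction_applies(surface_normal, ray_direction, sign=-1.0):
    # Apply the constraint only when the dot product between the normal at
    # the intersection and the direction of the candidate ray has the
    # predefined sign; sign = -1.0 corresponds to a camera-facing surface.
    return np.sign(np.dot(surface_normal, ray_direction)) == sign
```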
36. The method of any of claims 14-35, wherein restricting (352) includes defining at least one surface-approximation-defined extremity (93''') of at least one restricted range or interval of admissible candidate spatial positions (93a), wherein the at least one surface-approximation-defined extremity (93''') is located at the intersection between the surface approximation (92) and the range or interval of candidate spatial positions (95).
37. The method of any of claims 17-36, wherein the surface approximation (92) is selected by a user.
38. The method of any of claims 17-37, wherein restricting (352) includes sweeping along the range of candidate positions (135) from a proximal position towards a distal position, and is concluded upon observing that the at least one restricted range or interval of admissible candidate spatial positions (137) has a distal extremity (I7) associated to a surface approximation (132).
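One plausible reading of the proximal-to-distal sweep of claim 38 (and of the search in claim 49) operates on a t-sorted list of intersection events along the candidate ray; the event encoding below is an illustrative assumption.

```python
def sweep_restrict(events):
    # events: t-sorted list of (t, kind) with kind in {"enter", "exit",
    # "surface"} for inclusive-volume boundaries and surface-approximation
    # hits along the range of candidate positions.
    intervals, t_open = [], None
    for t, kind in events:
        if kind == "enter" and t_open is None:
            t_open = t                      # an admissible interval opens
        elif kind == "exit" and t_open is not None:
            intervals.append((t_open, t))   # ...and closes
            t_open = None
        elif kind == "surface":
            # The sweep concludes here: the surface approximation supplies
            # the distal extremity of the last admissible interval.
            intervals.append((t_open if t_open is not None else 0.0, t))
            return intervals
    if t_open is not None:
        intervals.append((t_open, float("inf")))
    return intervals
```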
39. The method of any of claims 17-38, wherein at least one inclusive volume (96, 136d, 206) is defined automatically from the at least one surface approximation (92, 132, 202).
40. The method of any of claims 17-39, wherein at least one inclusive volume (186) is defined from the at least one surface approximation (182) by scaling the at least one surface approximation (182).
41. The method of any of claims 17-40, wherein at least one inclusive volume (186) is defined from the at least one surface approximation (182) by scaling the at least one surface approximation from a scaling center (182a) of the at least one surface approximation (182).
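Claims 39-41 derive an inclusive volume by scaling the surface approximation; a minimal sketch, assuming the scaling centre is given (e.g. the centroid of the approximation):

```python
import numpy as np

def inflate_by_scaling(vertices, center, factor=1.1):
    # Scale every vertex of the surface approximation away from the
    # scaling centre; factor > 1 yields a surrounding inclusive volume.
    vertices = np.asarray(vertices, dtype=float)
    center = np.asarray(center, dtype=float)
    return center + factor * (vertices - center)

# e.g. center = vertices.mean(axis=0) for a centroid-based scaling centre
```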
42. The method of any of claims 17-41, wherein restricting involves at least one inclusive volume or surface approximation formed by a structure (200, 206) composed of vertices or control points, edges and surface elements, where each edge connects two vertices, each surface element is surrounded by at least three edges, and from every vertex there exists a connected path of edges to any other vertex of the structure.
43. The method of claim 42, where each edge is connected to an even number of surface elements.
44. The method of claim 43, where each edge is connected to two surface elements.
45. The method of any of claims 42-44, wherein the structure (200, 206) occupies a closed volume which has no border.
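The structural conditions of claims 42-45 can be checked mechanically; the sketch below counts edge-to-surface-element incidences for a triangle structure (cf. claim 48) and tests the closed-volume condition.

```python
from collections import Counter

def edge_face_counts(triangles):
    # Count how many surface elements share each undirected edge.
    counts = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[frozenset((u, v))] += 1
    return counts

def is_closed_volume(triangles):
    # Every edge shared by exactly two surface elements (claim 44) implies
    # the even-count condition of claim 43 and a borderless closed volume.
    return all(k == 2 for k in edge_face_counts(triangles).values())
```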
46. The method of any of claims 17-45, wherein at least one inclusive volume (96, 206) is formed by a geometric structure (200), the method further comprising defining the at least one inclusive volume (96, 206) by:
shifting the elements (200a-200i) by exploding the elements (200a-200i) along their normals; and
reconnecting the elements (200b, 200c) by generating additional elements (210bc, 210cb).
47. The method of claim 46, further comprising:
inserting at least one new control point (220) within the exploded area (200');
reconnecting the at least one new control point (220) with the exploded elements (210bc) to form further elements (220bc).
48. The method of any of claims 42-47, wherein the elements (200a-200i) are triangle elements.
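A sketch of the "explode" step of claim 46: each element is duplicated and shifted along its own normal so that neighbouring elements separate. The reconnecting and control-point-insertion steps of claims 46-47 are omitted here, and the offset parameter is an illustrative assumption.

```python
import numpy as np

def explode_along_normals(vertices, triangles, offset):
    # Shift every triangle element outwards along its own normal,
    # duplicating its vertices so that neighbouring elements separate.
    new_vertices, new_triangles = [], []
    for tri in triangles:
        p = np.asarray([vertices[i] for i in tri], dtype=float)
        n = np.cross(p[1] - p[0], p[2] - p[0])
        n /= np.linalg.norm(n)                 # element normal
        base = len(new_vertices)
        new_vertices.extend(p + offset * n)    # shifted copy of the element
        new_triangles.append((base, base + 1, base + 2))
    return np.asarray(new_vertices), new_triangles
```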
49. The method of any of claims 17-48, wherein restricting (352) includes:
searching for restricted ranges or intervals (137) within the range or interval of candidate spatial positions (135) from a proximal position to a distal position, and ending the search at the retrieval of a surface approximation.
50. The method of any of claims 17-49, wherein the at least one surface approximation (92, 132,
142) is contained within the at least one object (91, 141).
51. The method of any of claims 17-50, wherein the at least one surface approximation (92, 132, 142) is a rough approximation of the at least one object.
52. The method of any of claims 17-51, further comprising, at the observation that the range or interval of candidate spatial positions obtained during deriving does not intersect with any surface approximation, defining the restricted range or interval of admissible candidate spatial positions as the range or interval of candidate spatial positions obtained during deriving.
53. The method of any of the preceding claims, wherein the at least one inclusive volume is a rough approximation of the at least one object.
54. The method of any of the preceding claims, wherein retrieving is applied to a random subset among the restricted range or interval of admissible candidate positions and/or admissible normal vectors.
55. The method of any of the preceding claims, wherein restricting (352) includes defining at least one inclusive-volume-defined extremity (96'''; I0, I1, I2, I3, I4, I5, I6) of the at least one restricted range or interval of admissible candidate spatial positions (137).
56. The method of any of the preceding claims, wherein the inclusive volume (96, 106) is defined by a user.
57. The method of any of the preceding claims, wherein retrieving (37, 353) includes determining whether the particular 2D representation element is associated to at least one determined object (91, 101) on the basis of similarity metrics.
58. The method of any of the preceding claims, wherein at least one of the 2D representation elements is a pixel (XL) in the determined 2D image (22).
59. The method of any of the preceding claims, wherein the object element (93, 143) is a surficial element of the at least one determined object (91, 141).
60. The method of any of the preceding claims, wherein the range or interval of candidate spatial positions (95, 105, 114b, 115, 135, 145) for the imaged object element is developed in a depth direction with respect to the determined 2D representation element.
61. The method of any of the preceding claims, wherein the range or interval of candidate spatial positions for the imaged object element (93, 143) is developed along a ray (95, 105, 114b, 115, 135, 145) exiting from the nodal point of the camera with respect to the determined 2D representation element.
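Claims 60-61 develop the candidate positions along the ray exiting the camera's nodal point; a minimal sketch under a standard pinhole convention p ~ K(RX + t), which is an assumption about the camera model:

```python
import numpy as np

def candidate_ray(x, y, K, R, t, depths):
    # Points along the ray leaving the nodal point C = -R^T t through
    # pixel (x, y), one candidate spatial position per depth value.
    C = -R.T @ t
    direction = R.T @ np.linalg.inv(K) @ np.array([x, y, 1.0])
    return [C + d * direction for d in depths]
```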
62. The method of any of the preceding claims, wherein retrieving (353) includes measuring similarity metrics along the admissible candidate spatial positions of the restricted range or interval (93a; 93, 103; 93a; 96a, 112'; 137) as obtained from a further 2D image (23) of the space and in predefined positional relationship with the determined 2D image (22).
63. The method of claim 62, wherein retrieving includes measuring similarity metrics along the 2D representation elements, in the further 2D image, forming an epi-polar line (21) associated to the at least one restricted range (45).
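For claims 62-63, the admissible candidates of the reference ray project onto the epi-polar line of the further image, where the similarity metric is sampled; the `similarity` callback below is a hypothetical per-pixel comparison, not an API from the source.

```python
import numpy as np

def project(P, K, R, t):
    # Project a 3D point into a camera with intrinsics K and pose (R, t).
    p = K @ (R @ P + t)
    return p[:2] / p[2]

def epipolar_costs(candidates, K2, R2, t2, similarity):
    # Evaluate the similarity metric at the 2D representation elements of
    # the further image hit by the admissible candidates; these samples
    # lie on the epi-polar line of the reference element.
    return [similarity(project(P, K2, R2, t2)) for P in candidates]
```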
64. The method of any of the preceding claims, wherein restricting (35, 36, 352) includes finding an intersection (I3-I9, I3'-I9') between the range or interval of candidate positions (135, 295, 375) and at least one of the inclusive volume (136a-136f), exclusive volume (299a-299e, 379), and/or surface approximation (132, 172).
65. The method of any of the preceding claims, wherein restricting includes finding an extremity (I3-I9, I3'-I9') of a restricted range or interval of candidate positions (135, 295, 375) defined by at least one of the inclusive volume (136a-136f), exclusive volume (299a-299e, 379), and/or surface approximation (132, 172).
66. The method of any of the preceding claims, wherein restricting includes:
searching for ranges or intervals (137) within the range or interval of candidate spatial positions (135) from a proximal position towards a distal position.
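Claims 64-66 amount to interval arithmetic along the candidate ray; a sketch assuming the entry/exit parameters of each volume have already been computed (e.g. by ray-mesh intersection):

```python
def restrict_interval(candidates, inclusive_hits, exclusive_hits):
    # candidates, inclusive_hits, exclusive_hits: lists of (t_near, t_far)
    # intervals along the candidate ray. Inclusive volumes are intersected
    # with the candidate range; exclusive volumes are subtracted from it.
    def intersect(a, b):
        out = []
        for a0, a1 in a:
            for b0, b1 in b:
                lo, hi = max(a0, b0), min(a1, b1)
                if lo < hi:
                    out.append((lo, hi))
        return out

    def subtract(a, b):
        for b0, b1 in b:
            nxt = []
            for a0, a1 in a:
                if b1 <= a0 or b0 >= a1:
                    nxt.append((a0, a1))       # no overlap
                else:
                    if a0 < b0: nxt.append((a0, b0))
                    if b1 < a1: nxt.append((b1, a1))
            a = nxt
        return a

    admissible = intersect(candidates, inclusive_hits) if inclusive_hits else candidates
    return subtract(admissible, exclusive_hits)
```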
67. The method of any of the preceding claims, wherein defining (351) includes:
selecting a first 2D image (335) of the space and a second 2D image (335') of the space, wherein the first and second 2D images (335, 335') have been acquired at camera positions in a predefined positional relationship with each other;
displaying at least the first 2D image (335),
guiding a user to select a control point in the first 2D image (335), wherein the selected control point (330) is a control point (210) of an element (200a-200i) of a structure (200) forming a surface approximation or an exclusive volume or an inclusive volume;
guiding the user to selectively translate the selected control point (330), in the first 2D image (335), while limiting the movement of the control point (330) along the epi-polar line (331) associated to a second 2D-image-control-point (330') in the second 2D image (335'), wherein the second 2D-image-control-point (330') corresponds to the same control point (210, 330) of the element (200a-200i) in the first 2D image,
so as to define a movement of the element (200a-200i) of the structure (200) in the 3D space.
68. The method of any of the preceding claims, wherein defining (351) includes:
selecting a first 2D image (345) of the space and a second 2D image (345') of the space, wherein the first and second 2D images have been acquired at camera positions in a predefined positional relationship with each other;
displaying at least the first 2D image (345),
guiding a user to select a control point (340) in the first 2D image (345), wherein the selected control point is a control point (210) of an element (200a-200i) of a structure (200) forming a surface approximation or an exclusive volume or an inclusive volume;
obtaining from the user a selection associated to a new position (341) for the control point (340) in the first 2D image (345);
restricting the new position in the space of the control point (340) to a position on the epi-polar line (342), in the second 2D image (345'), associated to the new position (341) of the control point in the first 2D image (345), and determining the new position (340'') in the space as the position (341') on the epi-polar line (342) which is the closest to the initial position (340') in the second 2D image (345'),
wherein point (340') corresponds to the same control point (210, 340) of the element (200a-200i) of a structure (200) of the selected control point (340),
so as to define a movement of the element (200a-200i) of the structure (200).
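In claim 68 the new position is pinned to the epi-polar line in the second image, at the point closest to the old position; a minimal sketch with the line in homogeneous form (a, b, c), i.e. a·u + b·v + c = 0:

```python
import numpy as np

def closest_point_on_epipolar_line(line, p):
    # Orthogonal projection of pixel p = (u, v) onto the epi-polar line
    # (a, b, c): the admissible new 2D position closest to the old one.
    a, b, c = line
    u, v = p
    k = (a * u + b * v + c) / (a * a + b * b)
    return np.array([u - k * a, v - k * b])
```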
69. A method for localizing, in a space containing at least one determined object, an object element associated to a particular 2D representation element in a determined 2D image of the space, the method comprising:
obtaining a spatial position for the imaged object element;
obtaining a reliability or unreliability value for this spatial position of the imaged object element;
in case the reliability value does not comply with a predefined minimum reliability, or the unreliability value does not comply with a predefined maximum unreliability, performing the method of any of the preceding claims so as to refine the previously obtained spatial position.
70. A method for refining, in a space containing at least one determined object, a previously obtained localization of an object element associated to a particular 2D representation element in a determined 2D image of the space, the method comprising:
graphically displaying the determined 2D image of the space;
guiding a user to define at least one inclusive volume and/or at least one exclusive volume and/or at least one surface approximation;
performing a method according to any of the preceding claims so as to refine the previously obtained localization.
71. The method of any of the preceding claims, further comprising, after the definition of at least one inclusive volume (376) or surface approximation, automatically defining an exclusive volume (379) between the at least one inclusive volume (376) or surface approximation and the position of at least one camera.
72. The method of any of the preceding claims, further comprising, at the definition of a first proximal inclusive volume (426b) or surface approximation and a second distal inclusive volume or surface approximation (422, 426a), automatically defining:
a first exclusive volume (C) between the first inclusive volume (426b) or surface
approximation and the position of at least one camera (424b);
at least one second exclusive volume (A, B) between the second inclusive volume (426a) or surface approximation (422) and the position of at least one camera (424b), with the exclusion of a non-excluded region (D) between the first exclusive volume (C) and the second exclusive volume (A,
B).
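Claims 71-72 exclude, on each candidate ray, the free space the camera must see through before the first inclusive volume or surface approximation; a per-ray sketch, with the entry parameter t_enter assumed precomputed. The carve-out of the non-excluded region (D) of claim 72 would further clip this interval and is not shown.

```python
def auto_exclusive_interval(t_enter):
    # Everything between the camera (t = 0) and the first entry into an
    # inclusive volume or surface approximation is automatically excluded:
    # were the object element there, it would occlude what the camera sees.
    return (0.0, max(0.0, t_enter))
```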
73. The method of any of the preceding claims, applied to a multi-camera system.
74. The method of any of the preceding claims, applied to a stereo-imaging system.
75. The method of any of the preceding claims, wherein retrieving comprises selecting, for each candidate spatial position of the range or interval, whether the candidate spatial position is to be part of the restricted range of admissible candidate spatial positions.
76. A system (360, 380) for localizing, in a space containing at least one determined object (91, 101, 111, 141), an object element (93, 143) associated to a particular 2D representation element in a determined 2D image of the space, the system comprising:
a deriving block (361) for deriving a range or interval of candidate spatial positions (95, 105, 114b, 115, 135, 145) for the imaged object element (93, 143) on the basis of predefined positional relationships;
a restricting block (35, 36, 352) for restricting the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions (93a, 93a', 103a, 112', 137, 147), wherein the restricting block is configured for:
limiting the range or interval of candidate spatial positions using at least one inclusive volume (96, 86, 106) surrounding at least one determined object; and/or
limiting the range or interval of candidate spatial positions using at least one exclusive volume (279, 299a-299e, 375) including non-admissible candidate spatial positions; and
a retrieving block (37, 353) for retrieving, among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position on the basis of similarity metrics.
77. The system of claim 76, further comprising a first and a second camera for acquiring 2D images in predefined positional relationships.
78. The system of claim 76 or 77, further comprising at least one movable camera for acquiring
2D images from different positions and in predefined positional relationships.
79. The system of any of claims 76-78, further comprising a constraint definer (364) for rendering or displaying at least one 2D image to obtain an input (384a) for defining at least one constraint.
80. The system of any of claims 76-79, further configured to perform a method according to any of claims 1-75.
81. A non-transitory storage unit including instructions which, when executed by a processor, cause the processor to perform a method according to any of claims 1-75.