Abstract: Autonomous flight in indoor spaces requires a dense depth map for navigable space detection, which is a fundamental component of autonomous navigation. Traditional approaches provide erroneous metric depth when baselines are small, and executing a real-time dense Visual Simultaneous Localization and Mapping (vSLAM) is computationally very heavy. The present disclosure addresses the problem of reconstructing dense depth while a drone is hovering (small camera motion) in indoor scenes, using already estimated camera poses and a sparse point cloud obtained from a vSLAM. An indoor scene is segmented based on sudden depth variation using sparse 3D points, and patch-based local plane fitting is employed via an energy minimization that combines photometric consistency and co-planarity with neighboring patches. The weighted patch-based optimization of the present disclosure is computationally lightweight, which makes the method suitable for Micro Aerial Vehicle (MAV) navigation.
Claims:
A processor implemented method for dense depth map generation (200), the method comprising the steps of:
receiving, by one or more hardware processors, a predetermined number of estimated images from Edge Simultaneous Localization and Mapping (SLAM) with small motion, wherein a first image in small motion is a reference image and the number of estimated images is empirically determined, the small motion being indicative of a baseline less than 8 mm and a viewing angle of a 3D point less than 0.2° (202);
segmenting, by the one or more hardware processors, a sparse point cloud corresponding to the reference image into a plurality of 2D segments (204), wherein the step of segmenting comprises:
generating clusters of 3D points comprised in the sparse point cloud using a Kd-tree representation thereof, the 3D points within each cluster having geometric and photometric proximity, and wherein the generated clusters represent a plurality of 3D segments (204a);
segmenting the reference image using a color based region growing 2D segmentation method to generate a first set of plurality of 2D segments (204b); and
generating a second set of plurality of 2D segments by merging two or more of the first set of plurality of 2D segments based on the geometric proximity information from the plurality of 3D segments (204c); and
estimating, by the one or more hardware processors, dense depth for each of the plurality of 2D segments sequentially, wherein each of the plurality of 2D segments comprises a set (P) of patches (p) and a set N_P of neighboring patches (206), wherein the step of estimating dense depth comprises:
identifying the set (P) of patches (p) within each of the plurality of 2D segments, wherein each patch (p) represents a minimal segmented 2D area forming a planar surface in 3D and is characterized as π_p = {v_p, n̂_p}, wherein v_p represents a point on a 3D surface of each patch (p) and n̂_p represents a normal vector of the 3D surface of each patch (p) from v_p (206a);
initializing depth of a most stable patch comprised in the set (P) of patches using plane fitting, wherein the most stable patch is characterized by a maximum cardinality of a set f_p of sparse 3D points whose projection lies within a boundary of a patch (p) (206b);
initializing depths of the neighboring patches, wherein cardinality of the set f_p is more than 0, using continuous plane fitting (206c);
initializing depths of the neighboring patches, wherein cardinality of the set f_p is 0, by connecting 3D surfaces of the initialized neighboring patches (206d);
initializing the depths of the neighboring patches having unsuccessful initialization by a plane sweep technique (206e);
associating a normalized weight to each patch (p) based on the initialized depth thereof (206f);
adding a patch for optimization when the associated normalized weight is below a pre-determined threshold indicative of the patch being an unstable patch, wherein the pre-determined threshold is based on cardinality of the set f_p of sparse 3D points whose projection lies within a boundary of a patch and area thereof (206g);
optimizing a cost function through the parameters v_p and n̂_p for each of the plurality of 2D segments sequentially, wherein the cost function is based on (i) sparse 3D point consistency configured to maintain an optimized depth close to the initialized depth of each of the sparse 3D points, (ii) photo consistency configured to maintain photometric proximity between the received estimated images, and (iii) regularization indicative of coplanarity with the neighboring patches considering connected surfaces, disconnected surfaces, occluded surfaces and otherwise (206h); and
obtaining an optimized depth based on the optimized cost function to generate the dense depth map (206i).
The processor implemented method of claim 1, wherein the step of segmenting is preceded by de-noising the reference image, wherein the de-noising comprises non-linear bilateral filtering in an intensity domain along with filtering in a spatial domain.
The processor implemented method of claim 1, wherein the geometric proximity corresponds to an adaptive proximity threshold based on camera locations.
A system (100) comprising:
one or more data storage devices (102) operatively coupled to one or more hardware processors (104) and configured to store instructions configured for execution by the one or more hardware processors to:
receive a predetermined number of estimated images from Edge Simultaneous Localization and Mapping (SLAM) with small motion, wherein a first image in small motion is a reference image and the number of estimated images is empirically determined, the small motion being indicative of a baseline less than 8 mm and a viewing angle of a 3D point less than 0.2°;
segment a sparse point cloud corresponding to the reference image into a plurality of 2D segments by:
generating clusters of 3D points comprised in the sparse point cloud using a Kd-tree representation thereof, the 3D points within each cluster having geometric and photometric proximity, and wherein the generated clusters represent a plurality of 3D segments;
segmenting the reference image using a color based region growing 2D segmentation method to generate a first set of plurality of 2D segments; and
generating a second set of plurality of 2D segments by merging two or more of the first set of plurality of 2D segments based on the geometric proximity information from the plurality of 3D segments; and
estimate dense depth for each of the plurality of 2D segments sequentially, wherein each of the plurality of 2D segments comprises a set (P) of patches (p) and a set N_P of neighboring patches, wherein the one or more hardware processors estimate dense depth by:
identifying the set (P) of patches (p) within each of the plurality of 2D segments, wherein each patch (p) represents a minimal segmented 2D area forming a planar surface in 3D and is characterized as π_p = {v_p, n̂_p}, wherein v_p represents a point on a 3D surface of each patch (p) and n̂_p represents a normal vector of the 3D surface of each patch (p) from v_p;
initializing depth of a most stable patch comprised in the set (P) of patches using plane fitting, wherein the most stable patch is characterized by a maximum cardinality of a set f_p of sparse 3D points whose projection lies within a boundary of a patch (p);
initializing depths of the neighboring patches, wherein cardinality of the set f_p is more than 0, using continuous plane fitting;
initializing the depths of the neighboring patches, wherein cardinality of the set f_p is 0, by connecting 3D surfaces of the initialized neighboring patches;
initializing the depths of the neighboring patches having unsuccessful initialization by a plane sweep technique;
associating a normalized weight to each patch (p) based on the initialized depth thereof;
adding a patch for optimization when the associated normalized weight is below a pre-determined threshold indicative of the patch being an unstable patch, wherein the pre-determined threshold is based on cardinality of the set f_p of sparse 3D points whose projection lies within a boundary of a patch and area thereof;
optimizing a cost function through the parameters v_p and n̂_p for each of the plurality of 2D segments sequentially, wherein the cost function is based on (i) sparse 3D point consistency configured to maintain an optimized depth close to the initialized depth of each of the sparse 3D points, (ii) photo consistency configured to maintain photometric proximity between the received estimated images, and (iii) regularization indicative of coplanarity with the neighboring patches considering connected surfaces, disconnected surfaces, occluded surfaces and otherwise; and
obtaining an optimized depth based on the optimized cost function to generate the dense depth map.
The system of claim 4, wherein the one or more processors are further configured to de-noise the reference image by non-linear bilateral filtering in an intensity domain along with filtering in a spatial domain.
The system of claim 4, wherein the geometric proximity corresponds to an adaptive proximity threshold based on camera locations.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
ON THE FLY INDOOR DENSE DEPTH MAP GENERATION USING SfSM
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to dense depth map generation, and, more particularly, to systems and methods for on the fly indoor dense depth map generation using Structure from Small Motion (SfSM).
BACKGROUND
Autonomous navigation of Micro Aerial Vehicles (MAVs) indoors has been a challenge due to low-textured man-made environments. Robust estimation of a robot's pose along with dense depth is needed for navigable space detection. Visual Simultaneous Localization and Mapping (vSLAM) addresses the problem of estimating camera pose along with 3-Dimensional (3D) scene structure and has seen significant improvement. Most of the existing vSLAMs produce a sparse 3D structure which is insufficient for navigable space detection, while executing a real-time dense vSLAM is computationally very heavy. In contrast to a dense SLAM, dense depth map computation during hovering of a drone for locating free space is computationally less complex. After understanding the depth map, the drone can start moving using sparse vSLAM until it requires another understanding of new free space and hovers again. While the drone is hovering, the baselines between consecutive images are very small. Traditional feature based vSLAM approaches or 3D reconstruction pipelines produce erroneous metric depth when the baseline is small. There exists inaccuracy in the estimation of dense depth for navigable space detection on the fly.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
In an aspect, there is provided a processor implemented method for dense depth map generation comprising: receiving, by one or more hardware processors, a predetermined number of estimated images from Edge Simultaneous Localization and Mapping (SLAM) with small motion, wherein a first image in small motion is a reference image and the number of estimated images is empirically determined, the small motion being indicative of a baseline less than 8 mm and a viewing angle of a 3D point less than 0.2°; segmenting, by the one or more hardware processors, a sparse point cloud corresponding to the reference image into a plurality of 2D segments, wherein the step of segmenting comprises: using a Kd-tree representation of the sparse point cloud to generate clusters of 3D points therein having geometric and photometric proximity within each cluster, the generated clusters representing a plurality of 3D segments; segmenting the reference image using a color based region growing 2D segmentation method to generate a first set of plurality of 2D segments; and generating a second set of plurality of 2D segments by merging two or more of the first set of plurality of 2D segments based on the geometric proximity information from the plurality of 3D segments; and estimating, by the one or more hardware processors, dense depth for each of the plurality of 2D segments sequentially, wherein each of the plurality of 2D segments comprises a set (P) of patches (p) and a set N_P of neighboring patches, wherein the step of estimating dense depth comprises: identifying the set (P) of patches (p) within each of the plurality of 2D segments, wherein each patch (p) represents a minimal segmented 2D area forming a planar surface in 3D and is characterized as π_p = {v_p, n̂_p}, wherein v_p represents a point on a 3D surface of each patch (p) and n̂_p represents a normal vector of the 3D surface of each patch (p) from v_p; initializing depth of a most stable patch comprised in the set (P) of patches using plane fitting, wherein the most stable patch is characterized by a maximum cardinality of a set f_p of sparse 3D points whose projection lies within a boundary of a patch (p); initializing depths of the neighboring patches, wherein cardinality of the set f_p is more than 0, using continuous plane fitting; initializing depths of the neighboring patches, wherein cardinality of the set f_p is 0, by connecting 3D surfaces of the initialized neighboring patches; initializing the depths of the neighboring patches having unsuccessful initialization by a plane sweep technique; associating a normalized weight to each patch (p) based on the initialized depth thereof; adding a patch for optimization when the associated normalized weight is below a pre-determined threshold indicative of the patch being an unstable patch, wherein the pre-determined threshold is based on cardinality of the set f_p of sparse 3D points whose projection lies within a boundary of a patch and area thereof; optimizing a cost function through the parameters v_p and n̂_p for each of the plurality of 2D segments sequentially, wherein the cost function is based on (i) sparse 3D point consistency configured to maintain an optimized depth close to the initialized depth of each of the sparse 3D points, (ii) photo consistency configured to maintain photometric proximity between the received estimated images, and (iii) regularization indicative of coplanarity with the neighboring patches considering connected surfaces, disconnected surfaces, occluded surfaces and otherwise; and obtaining an optimized depth based on the optimized cost function to generate the dense depth map.
In another aspect, there is provided a system comprising: one or more data storage devices operatively coupled to one or more hardware processors and configured to store instructions configured for execution by the one or more hardware processors to: receive a predetermined number of estimated images from Edge Simultaneous Localization and Mapping (SLAM) with small motion, wherein a first image in small motion is a reference image and the number of estimated images is empirically determined, the small motion being indicative of a baseline less than 8 mm and a viewing angle of a 3D point less than 0.2°; segment a sparse point cloud corresponding to the reference image into a plurality of 2D segments by: using a Kd-tree representation of the sparse point cloud to generate clusters of 3D points therein having geometric and photometric proximity within each cluster, the generated clusters representing a plurality of 3D segments; segmenting the reference image using a color based region growing 2D segmentation method to generate a first set of plurality of 2D segments; and generating a second set of plurality of 2D segments by merging two or more of the first set of plurality of 2D segments based on the geometric proximity information from the plurality of 3D segments; and estimate dense depth for each of the plurality of 2D segments sequentially, wherein each of the plurality of 2D segments comprises a set (P) of patches (p) and a set N_P of neighboring patches, wherein the one or more hardware processors estimate dense depth by: identifying the set (P) of patches (p) within each of the plurality of 2D segments, wherein each patch (p) represents a minimal segmented 2D area forming a planar surface in 3D and is characterized as π_p = {v_p, n̂_p}, wherein v_p represents a point on a 3D surface of each patch (p) and n̂_p represents a normal vector of the 3D surface of each patch (p) from v_p; initializing depth of a most stable patch comprised in the set (P) of patches using plane fitting, wherein the most stable patch is characterized by a maximum cardinality of a set f_p of sparse 3D points whose projection lies within a boundary of a patch (p); initializing depths of the neighboring patches, wherein cardinality of the set f_p is more than 0, using continuous plane fitting; initializing the depths of the neighboring patches, wherein cardinality of the set f_p is 0, by connecting 3D surfaces of the initialized neighboring patches; initializing the depths of the neighboring patches having unsuccessful initialization by a plane sweep technique; associating a normalized weight to each patch (p) based on the initialized depth thereof; adding a patch for optimization when the associated normalized weight is below a pre-determined threshold indicative of the patch being an unstable patch, wherein the pre-determined threshold is based on cardinality of the set f_p of sparse 3D points whose projection lies within a boundary of a patch and area thereof; optimizing a cost function through the parameters v_p and n̂_p for each of the plurality of 2D segments sequentially, wherein the cost function is based on (i) sparse 3D point consistency configured to maintain an optimized depth close to the initialized depth of each of the sparse 3D points, (ii) photo consistency configured to maintain photometric proximity between the received estimated images, and (iii) regularization indicative of coplanarity with the neighboring patches considering connected surfaces, disconnected surfaces, occluded surfaces and otherwise; and obtaining an optimized depth based on the optimized cost function to generate the dense depth map.
In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive a predetermined number of estimated images from Edge Simultaneous Localization and Mapping (SLAM) with small motion, wherein a first image in small motion is a reference image and the number of estimated images is empirically determined, the small motion being indicative of a baseline less than 8 mm and a viewing angle of a 3D point less than 0.2°; segment a sparse point cloud corresponding to the reference image into a plurality of 2D segments by: using a Kd-tree representation of the sparse point cloud to generate clusters of 3D points therein having geometric and photometric proximity within each cluster, the generated clusters representing a plurality of 3D segments; segmenting the reference image using a color based region growing 2D segmentation method to generate a first set of plurality of 2D segments; and generating a second set of plurality of 2D segments by merging two or more of the first set of plurality of 2D segments based on the geometric proximity information from the plurality of 3D segments; and estimate dense depth for each of the plurality of 2D segments sequentially, wherein each of the plurality of 2D segments comprises a set (P) of patches (p) and a set N_P of neighboring patches, wherein the computing device estimates dense depth by: identifying the set (P) of patches (p) within each of the plurality of 2D segments, wherein each patch (p) represents a minimal segmented 2D area forming a planar surface in 3D and is characterized as π_p = {v_p, n̂_p}, wherein v_p represents a point on a 3D surface of each patch (p) and n̂_p represents a normal vector of the 3D surface of each patch (p) from v_p; initializing depth of a most stable patch comprised in the set (P) of patches using plane fitting, wherein the most stable patch is characterized by a maximum cardinality of a set f_p of sparse 3D points whose projection lies within a boundary of a patch (p); initializing depths of the neighboring patches, wherein cardinality of the set f_p is more than 0, using continuous plane fitting; initializing the depths of the neighboring patches, wherein cardinality of the set f_p is 0, by connecting 3D surfaces of the initialized neighboring patches; initializing the depths of the neighboring patches having unsuccessful initialization by a plane sweep technique; associating a normalized weight to each patch (p) based on the initialized depth thereof; adding a patch for optimization when the associated normalized weight is below a pre-determined threshold indicative of the patch being an unstable patch, wherein the pre-determined threshold is based on cardinality of the set f_p of sparse 3D points whose projection lies within a boundary of a patch and area thereof; optimizing a cost function through the parameters v_p and n̂_p for each of the plurality of 2D segments sequentially, wherein the cost function is based on (i) sparse 3D point consistency configured to maintain an optimized depth close to the initialized depth of each of the sparse 3D points, (ii) photo consistency configured to maintain photometric proximity between the received estimated images, and (iii) regularization indicative of coplanarity with the neighboring patches considering connected surfaces, disconnected surfaces, occluded surfaces and otherwise; and obtaining an optimized depth based on the optimized cost function to generate the dense depth map.
In an embodiment of the present disclosure, the one or more processors are further configured to de-noise the reference image by non-linear bilateral filtering in an intensity domain along with filtering in a spatial domain.
In an embodiment of the present disclosure, the geometric proximity corresponds to an adaptive proximity threshold based on camera locations.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
FIG.1 illustrates an exemplary block diagram of a system for on the fly indoor dense depth map generation using SfSM, in accordance with an embodiment of the present disclosure.
FIG.2A through FIG.2D illustrate an exemplary flow diagram for a computer implemented method for on the fly indoor dense depth map generation using SfSM, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims.
Autonomous navigation of Micro Aerial Vehicles (MAVs) in indoor environments is a challenge due to the low-textured man-made environment. Existing Visual Simultaneous Localization and Mapping (vSLAM) methods address the problem of estimating camera pose along with 3-Dimensional (3D) scene structure; however, they generate a sparse 3D structure which is insufficient for navigable space detection, while executing a real-time dense vSLAM is computationally very heavy. In contrast to a dense SLAM, dense depth map computation during hovering of a drone for locating free space is computationally less complex. After understanding the depth map, the drone can start moving using sparse vSLAM until it requires another understanding of new free space and hovers again. While the drone is hovering, the baselines between consecutive images are very small. Traditional feature based vSLAM approaches or 3D reconstruction pipelines produce erroneous metric depth when the baseline is small.
The present disclosure addresses these problems by using three-dimensional reconstruction from a small motion of a camera, or Structure from Small Motion (SfSM), for estimating a depth map. SfSM has advantages over regular motion such as better photometric constraints, rotation matrix simplification and the like. Although there have been works on SfSM, there exists inaccuracy in the estimation of dense depth for navigable space detection on the fly. Using sparse 3D points for geometric validation in low-textured indoor environments is erroneous due to the small number of features. Also, estimation of sudden depth variation is not considered. Furthermore, running a plane sweep on image pixels without considering sparse point positions allows a pixel to attain a particular depth without any neighboring constraints and estimates noisy depth on planar surfaces; moreover, this approach is not real time. Again, an Oriented FAST and Rotated BRIEF (ORB) feature based SfSM system, where a plane sweeping technique generates dense depth directly from ORB feature matches, also generates erroneous depth in low-textured environments since the ORB feature highly depends on the texture present in the scene. Hence, reliable dense depth for free space estimation indoors is a challenge addressed by the methods and systems of the present disclosure.
Referring now to the drawings, and more particularly to FIG.1 through FIG.2D where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG.1 illustrates an exemplary block diagram of a system 100 for on the fly indoor dense depth map generation using SfSM in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 includes one or more processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more processors 104. The one or more processors 104 that are hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, graphics controllers, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) are configured to fetch and execute computer-readable instructions stored in the memory. In the context of the present disclosure, the expressions ‘processors’ and ‘hardware processors’ may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
The I/O interface(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface(s) can include one or more ports for connecting a number of devices to one another or to another server.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, one or more modules (not shown) of the system 100 can be stored in the memory 102.
FIG.2A through FIG.2D illustrate an exemplary flow diagram for a computer implemented method 200 for on the fly indoor dense depth map generation using SfSM, in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 includes one or more data storage devices or memory 102 operatively coupled to the one or more processors 104 and is configured to store instructions configured for execution of steps of the method 200 by the one or more processors 104. The steps of the method 200 will now be explained in detail with reference to the components of the system 100 of FIG.1. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
In accordance with an embodiment of the present disclosure, the one or more processors 104 are configured to receive, at step 202, a predetermined number of estimated images from Edge Simultaneous Localization and Mapping (SLAM) with small motion. In the context of the present disclosure, the expression 'small motion' is indicative of a baseline less than 8 mm and a viewing angle of a 3D point less than 0.2°. The number of estimated images is determined empirically and typically may be around 20 to 25 images.
In an embodiment of the present disclosure, a first image in small motion is a reference image. In accordance with an embodiment of the present disclosure, the one or more processors 104 are configured to segment, at step 204, a sparse point cloud corresponding to the reference image into a plurality of 2-Dimensional (2D) segments. The 3D points in the sparse point cloud are associated with geometric (location) and photometric (color) proximity.
In accordance with the present disclosure, the sparse point cloud is segmented using a combination of nearest neighbor clustering and color similarity between the 3D points in the sparse point cloud. Accordingly, in an embodiment, the step 204 of segmenting comprises using a Kd-tree representation of the sparse point cloud to generate clusters of the 3D points at step 204a, wherein the generated clusters represent a plurality of 3D segments. The Kd-tree representation enables easy segmentation since child-parent relations are established in the tree. At step 204b, the reference image is segmented using a color based region growing 2D segmentation method to generate a first set of plurality of 2D segments. At step 204c, two or more of the first set of plurality of 2D segments are merged to generate a second set of plurality of 2D segments, where the merging is based on the geometric proximity information from the plurality of 3D segments such that 3D points present in the same cluster lie on the same segment. Typically, the 2D segments in the first set are smaller than the 2D segments in the second set. The 3D points in a cluster are projected on the reference image plane and the smaller segments in the first set are merged based on this spatial information. Depth propagation, in accordance with the present disclosure, is erroneous only if segmentation fails both in 2D and 3D. Such cases, however, occur only when an entire segment is of similar color and no sparse point is present to differentiate an abrupt depth change in 3D, and the occurrence of such situations is very infrequent. In an embodiment, the geometric proximity corresponds to an adaptive proximity threshold based on camera locations, wherein the threshold is adaptive considering that the scale is not defined and is calculated on the fly based on the nearest and farthest points. The color constraint added to the geometric proximity improves the segmenting step; a clustering of this kind is sketched below.
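The following is a minimal sketch of the 3D clustering of step 204a, assuming Euclidean region growing over a Kd-tree of the sparse points with an added color constraint; the function name and the thresholds `radius` and `color_thresh` are illustrative and not prescribed by the disclosure.

```python
import numpy as np
from scipy.spatial import cKDTree

def cluster_sparse_points(xyz, rgb, radius, color_thresh):
    """Group sparse 3D points into clusters by geometric and photometric proximity.

    xyz: (N, 3) point positions; rgb: (N, 3) per-point colors.
    Returns one cluster label per point; each cluster is a 3D segment.
    """
    tree = cKDTree(xyz)                      # Kd-tree over the sparse point cloud
    labels = np.full(len(xyz), -1, dtype=int)
    cluster_id = 0
    for seed in range(len(xyz)):
        if labels[seed] != -1:
            continue                         # point already assigned to a cluster
        labels[seed] = cluster_id
        stack = [seed]
        while stack:                         # grow the cluster from the seed point
            i = stack.pop()
            for j in tree.query_ball_point(xyz[i], radius):
                # accept a neighbor only if it is also photometrically close
                if labels[j] == -1 and np.linalg.norm(rgb[i] - rgb[j]) < color_thresh:
                    labels[j] = cluster_id
                    stack.append(j)
        cluster_id += 1
    return labels
```

In practice, `radius` would play the role of the adaptive proximity threshold described above, computed on the fly from the nearest and farthest points.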
In accordance with an embodiment of the present disclosure, the step 204 of segmenting may be preceded by de-noising the reference image by applying a non-linear bilateral filter wherein filtering is performed in an intensity or range domain along with filtering in a spatial domain. The bilateral filter preserves edge properties of the reference image as it combines pixel intensities based on their spatial as well as photometric closeness.
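One possible realization of this de-noising step, assuming OpenCV is available, is the standard bilateral filter; the parameter values below are illustrative defaults rather than values from the disclosure.

```python
import cv2

# Bilateral filtering combines a spatial Gaussian with an intensity (range)
# Gaussian, smoothing the image while preserving edges.
reference = cv2.imread("reference.png")  # hypothetical reference image path
denoised = cv2.bilateralFilter(reference, d=9, sigmaColor=75, sigmaSpace=75)
```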
In accordance with the present disclosure, the depth propagation estimates dense depth for every image segment using a patch based approach. Accordingly, in an embodiment, the one or more processors 104 are configured to estimate, at step 206, dense depth for each of the plurality of 2D segments sequentially, wherein each of the plurality of 2D segments comprises a set (P) of patches (p) and a set N_P of neighboring patches. At step 206a, the set (P) of patches (p) within each of the plurality of 2D segments is identified, wherein each patch (p) represents a minimal segmented 2D area forming a planar surface in 3D. Each patch corresponds to a planar 3D surface characterized as π_p = {v_p, n̂_p}, wherein v_p represents a point on the 3D surface of each patch (p) and n̂_p represents the normal vector of the 3D surface of each patch (p) from v_p.
Let φ represent the set of pixels in patch p, let f_p represent the set of sparse 3D points whose projection lies within the boundary of the patch p, and let η = |f_p|. A most stable patch is characterized by a maximum cardinality of the set f_p. All the patches p ∈ P are initialized in the most feasible manner. In an embodiment of the present disclosure, at step 206b, the depth of the most stable patch, i.e. the patch having the largest value of η, is first initialized using plane fitting, i.e. by fitting a 3D planar surface using the points in the set f_p having reprojection error < 0.1 pixel. Subsequently, the initialization is continued for the neighboring patches. At step 206c, the depths of the neighboring patches wherein the cardinality of the set f_p is more than 0 (η > 0) are initialized using continuous plane fitting. There may still exist uninitialized patches where η = 0. At step 206d, the depths of the neighboring patches wherein the cardinality of the set f_p is 0 are initialized by connecting the 3D surfaces of the neighboring patches initialized in step 206c. There may exist some of the plurality of 2D segments where η = 0 ∀ p ∈ P and initialization fails. At step 206e, the depths of the neighboring patches having unsuccessful initialization are initialized by a plane sweep technique. The gradient of each patch is calculated by adding the intensity gradients of all pixels belonging to the patch p. The plane sweep approach is then applied for the patch p having the largest gradient. The plane sweep runs a virtual plane perpendicular to the viewing direction from the maximum to the minimum depth obtained from the sparse 3D point cloud. The depth with minimum photometric error initializes the dense depth for that patch. Subsequently, the initialization is continued for the neighboring patches using continuous plane fitting.
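A minimal sketch of the plane fitting used for initialization (step 206b) is given below: a total least squares plane fit via SVD over the sparse points f_p of a patch, returning the patch parameters (v_p, n̂_p). The helper name is illustrative.

```python
import numpy as np

def fit_patch_plane(points_fp):
    """Fit a 3D plane to the sparse points f_p of a patch.

    points_fp: (k, 3) array of sparse 3D points whose projections lie
    within the patch boundary (low-reprojection-error points only).
    Returns (v_p, n_p): a point on the plane and its unit normal.
    """
    v_p = points_fp.mean(axis=0)               # centroid lies on the fitted plane
    _, _, vt = np.linalg.svd(points_fp - v_p)  # total least squares fit
    n_p = vt[-1]                               # direction of least variance = plane normal
    return v_p, n_p / np.linalg.norm(n_p)
```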
In accordance with an embodiment of the present disclosure, a normalized weight is associated with each patch (p), at step 206f, based on the initialized depth thereof. The cost C(Π), in accordance with the present disclosure, may be represented as given in equation (1) below and is minimized through the parameters v_p and n̂_p.

C(\Pi) = \sum_{p \in P} \lambda_p C_p \left( \epsilon_p^D + \epsilon_p^I + \lambda_g \epsilon_p^G \right) + t \sum_{(p,q) \in N_P} \lambda_{pq} C_p \epsilon_{pq}^C
(1)

where λ_p and λ_pq represent adaptive normalizing weights, C_p is a confidence weight for every patch p, λ_g represents the weight for the gradient term ε_p^G, and t represents a balancing factor between the data term (first part of equation (1)) and the regularization term (second part of equation (1)). The terms ε_p^D, ε_p^I and ε_pq^C are addressed later in the description below.
\lambda_p = \sigma_{\beta_p}
(2)

where σ_{β_p} represents the variation in the projected area of the 2D patch, computed using the points in the set φ over all views v ∈ V.
\lambda_{pq} = \frac{1}{|\xi|} \sum_{x \in \xi} d(x)
(3)

where ξ represents the set of edge pixels between p and q, and d(x) is the intensity gradient of the pixel x ∈ ξ. The confidence weight C_p, in accordance with the present disclosure, may be represented as given in equation (4) below; it speeds up optimization by providing lower weight to stable patches.
C_p = \begin{cases} 0, & \text{if } \eta \cdot \mathrm{area}(f_p) \geq \tau \\ \dfrac{1}{\eta \cdot \mathrm{area}(f_p)}, & \text{if } \eta \neq 0 \\ 1, & \text{otherwise} \end{cases}
(4)

where area(f_p) represents the 2D projected area of the points in the set f_p and τ is a pre-determined threshold based on the cardinality of the set f_p of sparse 3D points whose projection lies within the boundary of a patch and the area thereof.
In accordance with an embodiment of the present disclosure, a patch is considered for optimization when the associated normalized weight is below a pre-determined threshold indicative of the patch being an unstable patch. Accordingly, at step 206g, a patch with C_p = 0 is considered stable, since η · area(f_p) ≥ τ, and is therefore excluded from optimization. Experimental results indicated that the weighted patch-based optimization performs ~9.7 times faster compared to full optimization. Although full optimization may reduce some artefacts with smoothness on the surface, it is noted that this improvement has almost no impact on free space calculation.
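A direct transcription of the confidence weight of equation (4), with η = |f_p| and area(f_p) the 2D projected area, might look as follows; τ = 7 is the value used in the experiments reported later.

```python
def confidence_weight(eta, area_fp, tau=7.0):
    """Confidence weight C_p of equation (4) for a patch p."""
    product = eta * area_fp
    if product >= tau:
        return 0.0              # stable patch: excluded from optimization
    if eta != 0 and product > 0:
        return 1.0 / product    # partially constrained patch: reduced weight
    return 1.0                  # no sparse support: full weight
```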
In accordance with an embodiment of the present disclosure, the cost function C(Π) of equation (1) is optimized, at step 206h, through the parameters v_p and n̂_p for each of the plurality of 2D segments sequentially, wherein the cost function is based on (i) sparse 3D point consistency configured to maintain an optimized depth close to the initialized depth of each of the sparse 3D points, (ii) photo consistency configured to maintain photometric proximity between the received estimated images, and (iii) regularization indicative of coplanarity with the neighboring patches considering connected surfaces, disconnected surfaces, occluded surfaces and otherwise.
In accordance with the present disclosure, the sparse 3D point consistency ε_p^D may be represented as shown in equation (5) below.

\epsilon_p^D = \sum_{i \in f_p} \frac{1}{\rho} \left\| X_i - X_{i\pi_p} \right\|
(5)

where ρ denotes the average reprojection error for the point i ∈ f_p, and X_i and X_{iπ_p} denote the sparse depth (initialized depth) and the depth on the surface π_p (optimized depth) for the point i, respectively. ε_p^D provides lower weight to the sparse 3D points having high reprojection error.
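Under the notation of equation (5), this term may be sketched as below, where `sparse_depths` holds the initialized depths X_i, `surface_depths` the depths X_{iπ_p} induced by the current patch plane, and `rho` the average reprojection error; all names are illustrative.

```python
import numpy as np

def sparse_consistency(sparse_depths, surface_depths, rho):
    """Sparse 3D point consistency term of equation (5) for one patch."""
    # A large average reprojection error rho down-weights unreliable points.
    return np.sum(np.abs(sparse_depths - surface_depths)) / rho
```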
In accordance with the present disclosure, the photo consistency may be represented by ε_p^I, the cost of intensity matching between a patch p and its warped patches p_p(v) in other views v ∈ V using plane induced homography in a small baseline. In accordance with the present disclosure, p_p(v) may be represented as given in equation (6) below.

p_p(v) = \omega_v H(p, \{v_p, \hat{n}_p\})
(6)

where H is the homography matrix of the surface π_p on a view v and ω_v is the weight for the view v, which provides more priority to nearer frames because intensity variation is less sensitive to depth variation for closer views. The cost of intensity matching may be represented as given in equation (7) below.

\epsilon_p^I = \sum_{p \in P} \sum_{x \in \varphi} \mathrm{Var}\left( I_{1\pi_p}, \ldots, I_{v\pi_p} \right)
(7)

where Var(·) is a variance function and I_{vπ_p} represents the intensity of the pixel x ∈ φ in a view v projected from the surface π_p. In accordance with the present disclosure, the variance calculation is made robust by providing higher priority to images nearer to the reference image. The expression ε_p^G in equation (1) represents the cost of gradient matching and follows the variance formula of equation (7), wherein the gradient of intensity replaces the intensity.
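A sketch of the plane induced homography warp underlying equations (6) and (7) is shown below, assuming the textbook relation H = K (R − t n̂ᵀ / d) K⁻¹ for a plane with unit normal n̂ at distance d in the reference frame; the disclosure does not prescribe this exact parameterization, and all names are illustrative. Grayscale images are assumed.

```python
import numpy as np
import cv2

def plane_induced_homography(K, R, t, n, d):
    """Homography mapping reference pixels to view v for the plane (n, d).

    K: 3x3 intrinsics; (R, t): pose of view v relative to the reference camera;
    n: unit plane normal in the reference frame; d: plane-to-origin distance.
    """
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]

def photo_consistency(ref_patch_mask, images_v, homographies):
    """Variance-of-intensity cost over all views, in the spirit of equation (7)."""
    h, w = ref_patch_mask.shape
    warped = [
        # WARP_INVERSE_MAP samples view v at H(x) for each reference pixel x.
        cv2.warpPerspective(img, H, (w, h),
                            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        for img, H in zip(images_v, homographies)
    ]
    stack = np.stack(warped).astype(np.float64)
    # Per-pixel intensity variance across views, summed over the patch pixels.
    return np.var(stack, axis=0)[ref_patch_mask].sum()
```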
In accordance with the present disclosure, the pairwise regularization works on the assumption of connectivity with the neighboring patches at edge boundaries. In accordance with the present disclosure, five types of priority based patch pair configurations are considered to cater to all possible occupancy conditions, as represented in equation (8) below.

\epsilon_{pq}^C = \begin{cases} 0, & \text{for connected surfaces (linear and non-linear)} \\ \tau_1, & \text{for disconnected surfaces} \\ \tau_2, & \text{for occluded surfaces} \\ \tau_3, & \text{otherwise} \end{cases}
(8)

where 0 < τ_1 < τ_2 < τ_3.
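The priority based penalty of equation (8) may be realized as a simple lookup, as sketched below; the classification of a patch pair into one of the configurations is assumed to be done beforehand, and the numerical values follow the experimental settings reported later (τ_1 = 0.6, τ_2 = 3.5, τ_3 = 20).

```python
# Pairwise regularization penalties of equation (8), keyed by the
# patch pair configuration (0 < tau_1 < tau_2 < tau_3).
PAIR_PENALTY = {
    "connected": 0.0,       # linear and non-linear connected surfaces
    "disconnected": 0.6,    # tau_1
    "occluded": 3.5,        # tau_2
}

def regularization_cost(configuration):
    """Return the coplanarity penalty for a neighboring patch pair."""
    return PAIR_PENALTY.get(configuration, 20.0)  # tau_3 otherwise
```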
In accordance with the present disclosure, an optimized depth may be obtained, at step 206i, based on the optimized cost function to generate the dense depth map.
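Putting the pieces together, the per-segment optimization of steps 206h and 206i can be sketched as a generic minimization of equation (1) over the stacked patch parameters; `segment_cost` is a hypothetical callable assembling the data and regularization terms above, and the choice of optimizer is illustrative rather than the method actually used in the disclosure.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_segment(initial_params, segment_cost):
    """Minimize C (equation (1)) over the parameters of one 2D segment.

    initial_params: per-patch [depth, theta, phi] triples flattened into one
    vector, where depth places v_p along its viewing ray and (theta, phi)
    parameterize the unit normal n_p. Stable patches (C_p = 0) are excluded
    from this vector, which is what makes the optimization lightweight.
    """
    result = minimize(segment_cost, np.asarray(initial_params, dtype=float),
                      method="Nelder-Mead", options={"maxiter": 500})
    return result.x  # optimized parameters yielding the segment's dense depth
```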
EXPERIMENTAL RESULTS
An Intel™ i7-8700 processor (6 cores @ 3.7-4.7 GHz) with 16 GB RAM was used for implementation and a Bebop quadcopter from Parrot was used for data acquisition. A registered Kinect point cloud was considered as ground truth. All experiments were performed at 640 x 480 VGA resolution and used the parameters τ = 7, λ_g = 3, t = 1.7, τ_1 = 0.6, τ_2 = 3.5, τ_3 = 20. Two indoor datasets (an indoor office premise and a corridor) suitable for drone navigation were used to evaluate the performance of the method of the present disclosure. Sparse reconstruction in all the cases was executed using Edge SLAM. The indoor office premise under consideration had smooth depth variation along the wall and sudden depth changes at the wall boundary. Results of the method of the present disclosure were compared with Kinect and with the prior art methods disclosed by S. Im et al. in "High quality structure from small motion for rolling shutter cameras" at ICCV 2015 and by H. Ha et al. in "High quality depth from uncalibrated small motion clip" at CVPR 2016. The methods by S. Im et al. and H. Ha et al. fail to estimate sudden depth changes at object boundaries and produce erroneous depth estimates on planar surfaces, due to the continuous plane fitting of S. Im et al. and the erroneous plane sweep of H. Ha et al. respectively. The mean error in depth estimation against the Kinect point cloud is 0.2352 meter for the method of the present disclosure, whereas it is 2.11575 meter for the method by H. Ha et al. The corridor under consideration had an object on the ground. The method of the present disclosure showed better accuracy in estimating depth variation on the wall as well as on the ground compared to S. Im et al. and H. Ha et al. Also, the method of the present disclosure has the least artefacts and is more realistic for free space understanding on both indoor datasets. It was also observed that the method of the present disclosure achieves a running time of ~14 sec using 20 images on an un-optimized implementation.
The present disclosure thus enables estimation of an indoor dense depth map while an MAV is hovering, using camera calibration and a sparse point cloud from Edge SLAM. Segmentation of the reference image at the beginning, followed by estimating depth independently for each segment, improves the accuracy of depth estimation. The weighted patch-based plane fitting approach for depth estimation, through minimizing a cost function consisting of sparse point, photo consistency and regularization terms, makes the method lightweight and hence suitable for real-time systems like drone navigation.
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope and spirit of disclosed embodiments being indicated by the following claims.