Specification
Claims:
1. A processor-implemented method (200) for a real-time exploration of a large and an unknown environment using a robot, the method (200) comprising the steps of:
receiving, via one or more hardware processors, a current depth measurement of the unknown environment, an initial 3-dimensional (3-D) map of the unknown environment, and a current pose of the robot (202);
determining, via the one or more hardware processors, one or more field-view frontiers in a current field of view of the robot, from the current pose of the robot, using the current depth measurement (204);
exploring, via the one or more hardware processors, the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the exploration results in updating the initial 3-D map to obtain an intermediate 3-D map (206);
generating, via the one or more hardware processors, a random tree incrementally, starting from the current pose of the robot, using a rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the random tree comprises one or more nodes generating one or more paths from the current pose of the robot (208);
determining, via the one or more hardware processors, one or more local frontiers in a local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, using the intermediate 3-D map (210);
exploring, via the one or more hardware processors, the one or more local frontiers in the local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, until no local frontiers in the local neighborhood of the robot are present for the exploration, wherein the exploration results in updating the intermediate 3-D map to obtain a successive 3-D map (212);
generating, via the one or more hardware processors, a tailored random tree to determine a global frontier, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present (214); and
exploring, via the one or more hardware processors, the global frontier, until no global frontiers are present for the exploration, wherein the exploration results in updating the successive 3-D map to obtain a final 3-D map (216).
2. The method as claimed in claim 1, wherein exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present for the exploration, further comprising:
(a) computing a command velocity of the robot, required to reach each of the one or more field-view frontiers in the current field of view of the robot, from the current pose of the robot (206a);
(b) finding an optimal field-view frontier among the one or more field-view frontiers in the current field of view of the robot, based on the command velocity of the robot for each of the one or more field-view frontiers in the current field of view of the robot (206b);
(c) navigating the robot towards the optimal field-view frontier, from the current pose of the robot, for the exploration, using the command velocity corresponding to the optimal field-view frontier (206c);
(d) determining the one or more field-view frontiers in the current field of view of the robot, from the optimal field-view frontier, using the current depth measurement, after the exploration of the optimal field-view frontier (206d); and
(e) repeating the steps (a) through (d), until no field-view frontiers in the current field of view of the robot are present for the exploration (206e).
3. The method as claimed in claim 1, wherein exploring the one or more local frontiers in the local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, until no local frontiers in the local neighborhood of the robot are present for the exploration, further comprising:
(a) identifying an optimal node among the one or more nodes present in the random tree, from which a maximum number of local frontiers of the one or more local frontiers in the local neighborhood of the robot are perceived (212a);
(b) navigating the robot towards the optimal node, using the random tree, from the current pose of the robot (212b);
(c) determining if the one or more field-view frontiers in the current field of view of the robot are present, while navigating the robot towards the optimal node, using the current depth measurement (212c);
(d) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present while navigating the robot towards the optimal node (212d);
(e) navigating the robot to the optimal node for the exploration, if the one or more field-view frontiers in the current field of view of the robot are not present while navigating the robot towards the optimal node (212e);
(f) determining if the one or more field-view frontiers in the current field of view of the robot are present, from the optimal node, using the current depth measurement (212f);
(g) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present from the optimal node (212g);
(h) generating the random tree incrementally, from the current pose of the robot, using the rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present from the optimal node (212h); and
(i) repeating the steps (a) through (h), until no local frontiers in the local neighborhood of the robot are present for the exploration (212i).
4. The method as claimed in claim 1, wherein exploring the global frontier, until no global frontiers are present for the exploration, further comprising:
(a) navigating the robot to the global frontier, from the current pose of the robot, using the tailored random tree corresponding to the global frontier (216a);
(b) determining if the one or more field-view frontiers in the current field of view of the robot are present, from the global frontier, using the current depth measurement (216b);
(c) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present from the global frontier (216c);
(d) determining if the one or more local frontiers in the local neighborhood of the robot are present, from the global frontier, using the successive 3-D map, if the one or more field-view frontiers in the current field of view of the robot are not present (216d);
(e) exploring the one or more local frontiers in the local neighborhood of the robot, until no local frontiers in the local neighborhood of the robot are present, if the one or more local frontiers in the local neighborhood of the robot are present, from the global frontier (216e);
(f) generating the tailored random tree to determine the global frontier, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present (216f); and
(g) repeating the steps (a) through (f), until no global frontiers are present for the exploration (216g).
5. The method as claimed in claim 1, wherein (i) the one or more field-view frontiers in the current field of view of the robot, (ii) the one or more local frontiers in the local neighborhood of the robot, and (iii) the one or more global frontiers, form a global set of frontiers for the exploration by the robot.
6. The method as claimed in claim 1, wherein each of (i) the one or more field-view frontiers in the current field of view of the robot, (ii) the one or more local frontiers in the local neighborhood of the robot, and (iii) the one or more global frontiers, is a boundary between a free space and an unknown space present in the unknown environment.
7. The method as claimed in claim 6, wherein the unknown space is obtained by removing a known space from a reachable space present in the unknown environment.
8. The method as claimed in claim 7, wherein (i) the reachable space is obtained by removing an unreachable space from a total space, and (ii) the known space is a space already perceived by the robot.
9. The method as claimed in claim 1, wherein the depth measurement of the unknown environment is received through a depth sensor installed in the robot.
10. A system (100) for a real-time exploration of a large and an unknown environment using a robot, the system (100) comprising:
a memory (102) storing instructions;
one or more Input/Output (I/O) interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more I/O interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive a current depth measurement of the unknown environment, an initial 3-dimensional (3-D) map of the unknown environment, and a current pose of the robot;
determine one or more field-view frontiers in a current field of view of the robot, from the current pose of the robot, using the current depth measurement;
explore the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the exploration results in updating the initial 3-D map to obtain an intermediate 3-D map;
generate a random tree incrementally, starting from the current pose of the robot, using a rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the random tree comprises one or more nodes generating one or more paths from the current pose of the robot;
determine one or more local frontiers in a local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, using the intermediate 3-D map;
explore the one or more local frontiers in the local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, until no local frontiers in the local neighborhood of the robot are present for the exploration, wherein the exploration results in updating the intermediate 3-D map to obtain a successive 3-D map;
generate a tailored random tree to determine a global frontier, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present; and
explore the global frontier, until no global frontiers are present for the exploration, wherein the exploration results in updating the successive 3-D map to obtain a final 3-D map.
11. The system as claimed in claim 10, wherein the one or more hardware processors (104) are configured to explore the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present for the exploration, by:
(a) computing a command velocity of the robot, required to reach each of the one or more field-view frontiers in the current field of view of the robot, from the current pose of the robot;
(b) finding an optimal field-view frontier among the one or more field-view frontiers in the current field of view of the robot, based on the command velocity of the robot for each of the one or more field-view frontiers in the current field of view of the robot;
(c) navigating the robot towards the optimal field-view frontier, from the current pose of the robot, for the exploration, using the command velocity corresponding to the optimal field-view frontier;
(d) determining the one or more field-view frontiers in the current field of view of the robot, from the optimal field-view frontier, using the current depth measurement, after the exploration of the optimal field-view frontier; and
(e) repeating the steps (a) through (d), until no field-view frontiers in the current field of view of the robot are present for the exploration.
12. The system as claimed in claim 10, wherein the one or more hardware processors (104) are configured to explore the one or more local frontiers in the local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, until no local frontiers in the local neighborhood of the robot are present for the exploration, by:
(a) identifying an optimal node among the one or more nodes present in the random tree, from which a maximum number of local frontiers of the one or more local frontiers in the local neighborhood of the robot are perceived;
(b) navigating the robot towards the optimal node, using the random tree, from the current pose of the robot;
(c) determining if the one or more field-view frontiers in the current field of view of the robot are present, while navigating the robot towards the optimal node, using the current depth measurement;
(d) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present while navigating the robot towards the optimal node;
(e) navigating the robot to the optimal node for the exploration, if the one or more field-view frontiers in the current field of view of the robot are not present while navigating the robot towards the optimal node;
(f) determining if the one or more field-view frontiers in the current field of view of the robot are present, from the optimal node, using the current depth measurement;
(g) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present from the optimal node;
(h) generating the random tree incrementally, from the current pose of the robot, using the rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present from the optimal node; and
(i) repeating the steps (a) through (h), until no local frontiers in the local neighborhood of the robot are present for the exploration.
13. The system as claimed in claim 10, wherein the one or more hardware processors (104) are configured to explore the global frontier, until no global frontiers are present for the exploration, by:
(a) navigating the robot to the global frontier, from the current pose of the robot, using the tailored random tree corresponding to the global frontier;
(b) determining if the one or more field-view frontiers in the current field of view of the robot are present, from the global frontier, using the current depth measurement;
(c) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present from the global frontier;
(d) determining if the one or more local frontiers in the local neighborhood of the robot are present, from the global frontier, using the successive 3-D map, if the one or more field-view frontiers in the current field of view of the robot are not present;
(e) exploring the one or more local frontiers in the local neighborhood of the robot, until no local frontiers in the local neighborhood of the robot are present, if the one or more local frontiers in the local neighborhood of the robot are present, from the global frontier;
(f) generating the tailored random tree to determine the global frontier, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present; and
(g) repeating the steps (a) through (f), until no global frontiers are present for the exploration.
14. The system as claimed in claim 10, wherein (i) the one or more field-view frontiers in the current field of view of the robot, (ii) the one or more local frontiers in the local neighborhood of the robot, and (iii) the one or more global frontiers, form a global set of frontiers for the exploration by the robot.
15. The system as claimed in claim 10, wherein each of (i) the one or more field-view frontiers in the current field of view of the robot, (ii) the one or more local frontiers in the local neighborhood of the robot, and (iii) the one or more global frontiers, is a boundary between a free space and an unknown space present in the unknown environment.
16. The system as claimed in claim 15, wherein the unknown space is obtained by removing a known space from a reachable space present in the unknown environment.
17. The system as claimed in claim 16, wherein (i) the reachable space is obtained by removing an unreachable space from a total space, and (ii) the known space is a space already perceived by the robot.
18. The system as claimed in claim 10, wherein the one or more hardware processors (104) are configured to receive the depth measurement of the unknown environment, through a depth sensor installed in the robot.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHODS AND SYSTEMS FOR EXPLORATION OF LARGE AND UNKNOWN ENVIRONMENT
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
The disclosure herein generally relates to the field of exploration, and, more particularly, to methods and systems for exploration of a large and an unknown environment using robots.
BACKGROUND
Inspection robotics play a significant role in protecting the integrity of critical and large infrastructure, such as in the oil and gas, petrochemical, and power industries. The inspections may need to be carried out frequently to capture deformations and identify defects in the existing infrastructure. The inspections need to be accurate and comprehensive when conducted by robots, especially in dangerous and inaccessible areas. As the need for structural inspection increases, the demand for inspection robotics is expected to rise accordingly. Exploration allows the robot to safely traverse an environment and simultaneously perceive the information necessary for the inspection. However, autonomous exploration becomes a key factor when the inspection is to be performed in unknown or varying environments, where reliable paths cannot be precomputed.
Furthermore, the inspection of large infrastructures involves taking measurements in narrow spaces or at high altitudes, for which the use of micro aerial vehicles (MAVs) may be a suitable choice. However, due to their limited computational capabilities and flight time, the exploration strategy demands a lower time-complexity. A frontiers-based strategy is widely used in the art to explore 3-dimensional (3-D), large, and unknown environments. However, most conventional frontiers-based exploration techniques employ edge-detection and clustering methods for computing all frontiers (referred to hereafter as a global set of frontiers) present in the unknown environment, which are computationally expensive and inaccurate, especially in the case of large environments.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
In an aspect, there is provided a processor-implemented method for a real-time exploration of a large and an unknown environment using a robot, the method comprising the steps of: receiving a current depth measurement of the unknown environment, an initial 3-dimensional (3-D) map of the unknown environment, and a current pose of the robot; determining one or more field-view frontiers in a current field of view of the robot, from the current pose of the robot, using the current depth measurement; exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the exploration results in updating the initial 3-D map to obtain an intermediate 3-D map; generating a random tree incrementally, starting from the current pose of the robot, using a rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the random tree comprises one or more nodes generating one or more paths from the current pose of the robot; determining one or more local frontiers in a local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, using the intermediate 3-D map; exploring the one or more local frontiers in the local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, until no local frontiers in the local neighborhood of the robot are present for the exploration, wherein the exploration results in updating the intermediate 3-D map to obtain a successive 3-D map; generating a tailored random tree to determine a global frontier, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present; and exploring the global frontier, until no global frontiers are present for the exploration, wherein the exploration results in updating the successive 3-D map to obtain a final 3-D map.
In another aspect, there is provided a system for a real-time exploration of a large and an unknown environment using a robot, the system comprising: a memory storing instructions; one or more Input/Output (I/O) interfaces; and one or more hardware processors coupled to the memory via the one or more I/O interfaces, wherein the one or more hardware processors are configured by the instructions to: receive a current depth measurement of the unknown environment, an initial 3-dimensional (3-D) map of the unknown environment, and a current pose of the robot; determine one or more field-view frontiers in a current field of view of the robot, from the current pose of the robot, using the current depth measurement; explore the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the exploration results in updating the initial 3-D map to obtain an intermediate 3-D map; generate a random tree incrementally, starting from the current pose of the robot, using a rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the random tree comprises one or more nodes generating one or more paths from the current pose of the robot; determine one or more local frontiers in a local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, using the intermediate 3-D map; explore the one or more local frontiers in the local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, until no local frontiers in the local neighborhood of the robot are present for the exploration, wherein the exploration results in updating the intermediate 3-D map to obtain a successive 3-D map; generate a tailored random tree to determine a global frontier, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present; and explore the global frontier, until no global frontiers are present for the exploration, wherein the exploration results in updating the successive 3-D map to obtain a final 3-D map.
In yet another aspect, there is provided a computer program product comprising a non-transitory computer readable medium having a computer readable program embodied therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive a current depth measurement of the unknown environment, an initial 3-dimensional (3-D) map of the unknown environment, and a current pose of the robot; determine one or more field-view frontiers in a current field of view of the robot, from the current pose of the robot, using the current depth measurement; explore the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the exploration results in updating the initial 3-D map to obtain an intermediate 3-D map; generate a random tree incrementally, starting from the current pose of the robot, using a rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present for the exploration, wherein the random tree comprises one or more nodes generating one or more paths from the current pose of the robot; determine one or more local frontiers in a local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, using the intermediate 3-D map; explore the one or more local frontiers in the local neighborhood of the robot, accessible from each node of the one or more nodes present in the random tree, until no local frontiers in the local neighborhood of the robot are present for the exploration, wherein the exploration results in updating the intermediate 3-D map to obtain a successive 3-D map; generate a tailored random tree to determine a global frontier, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present; and explore the global frontier, until no global frontiers are present for the exploration, wherein the exploration results in updating the successive 3-D map to obtain a final 3-D map.
In an embodiment, exploring the one or more field-view frontiers in the current field of view of the robot comprises: (a) computing a command velocity of the robot, required to reach each of the one or more field-view frontiers in the current field of view of the robot, from the current pose of the robot; (b) finding an optimal field-view frontier among the one or more field-view frontiers in the current field of view of the robot, based on the command velocity of the robot for each of the one or more field-view frontiers in the current field of view of the robot; (c) navigating the robot towards the optimal field-view frontier, from the current pose of the robot, for the exploration, using the command velocity corresponding to the optimal field-view frontier; (d) determining the one or more field-view frontiers in the current field of view of the robot, from the optimal field-view frontier, using the current depth measurement, after the exploration of the optimal field-view frontier; and (e) repeating the steps (a) through (d), until no field-view frontiers in the current field of view of the robot are present for the exploration.
In an embodiment, exploring the one or more local frontiers in the local neighborhood of the robot comprises: (a) identifying an optimal node among the one or more nodes present in the random tree, from which a maximum number of local frontiers of the one or more local frontiers in the local neighborhood of the robot are perceived; (b) navigating the robot towards the optimal node, using the random tree, from the current pose of the robot; (c) determining if the one or more field-view frontiers in the current field of view of the robot are present, while navigating the robot towards the optimal node, using the current depth measurement; (d) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present while navigating the robot towards the optimal node; (e) navigating the robot to the optimal node for the exploration, if the one or more field-view frontiers in the current field of view of the robot are not present while navigating the robot towards the optimal node; (f) determining if the one or more field-view frontiers in the current field of view of the robot are present, from the optimal node, using the current depth measurement; (g) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present from the optimal node; (h) generating the random tree incrementally, from the current pose of the robot, using the rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present from the optimal node; and (i) repeating the steps (a) through (h), until no local frontiers in the local neighborhood of the robot are present for the exploration.
In an embodiment, exploring the global frontier, until no global frontiers are present for the exploration, comprises: (a) navigating the robot to the global frontier, from the current pose of the robot, using the tailored random tree corresponding to the global frontier; (b) determining if the one or more field-view frontiers in the current field of view of the robot are present, from the global frontier, using the current depth measurement; (c) exploring the one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, if the one or more field-view frontiers in the current field of view of the robot are present from the global frontier; (d) determining if the one or more local frontiers in the local neighborhood of the robot are present, from the global frontier, using the successive 3-D map, if the one or more field-view frontiers in the current field of view of the robot are not present; (e) exploring the one or more local frontiers in the local neighborhood of the robot, until no local frontiers in the local neighborhood of the robot are present, if the one or more local frontiers in the local neighborhood of the robot are present, from the global frontier; (f) generating the tailored random tree to determine the global frontier, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present; and (g) repeating the steps (a) through (f), until no global frontiers are present for the exploration.
In an embodiment, (i) the one or more field-view frontiers in the current field of view of the robot, (ii) the one or more local frontiers in the local neighborhood of the robot, and (iii) the one or more global frontiers, form a global set of frontiers for the exploration by the robot.
In an embodiment, each of (i) the one or more field-view frontiers in the current field of view of the robot, (ii) the one or more local frontiers in the local neighborhood of the robot, and (iii) the one or more global frontiers, is a boundary between a free space and an unknown space present in the unknown environment.
In an embodiment, the unknown space is obtained by removing a known space from a reachable space present in the unknown environment.
In an embodiment, (i) the reachable space is obtained by removing an unreachable space from a total space, and (ii) the known space is a space already perceived by the robot.
In an embodiment, the depth measurement of the unknown environment is received through a depth sensor installed in the robot.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the embodiments of the present disclosure, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 is an exemplary block diagram of a system for a real-time exploration of a large and an unknown environment using a robot, in accordance with some embodiments of the present disclosure.
FIG. 2A and FIG. 2B illustrate exemplary flow diagrams of a processor-implemented method for the real-time exploration of the large and the unknown environment using the robot, in accordance with some embodiments of the present disclosure.
FIG. 3 is an exemplary grey-scale image showing the unknown environment with field-view frontiers in a current field of view of the robot, in accordance with some embodiments of the present disclosure.
FIG. 4 is an exemplary flow diagram for exploring one or more field-view frontiers in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, in accordance with some embodiments of the present disclosure.
FIG. 5 is an exemplary grey-scale image showing the unknown environment with local frontiers in a local neighborhood of the robot, in accordance with some embodiments of the present disclosure.
FIG. 6A and FIG. 6B illustrate exemplary flow diagrams for exploring one or more local frontiers in the local neighborhood of the robot, until no local frontiers in the local neighborhood of the robot are present, in accordance with some embodiments of the present disclosure.
FIG. 7 is an exemplary grey-scale image showing the unknown environment with global frontiers, in accordance with some embodiments of the present disclosure.
FIG. 8A and FIG. 8B illustrate exemplary flow diagrams for exploring the global frontier, until no global frontiers are present, in accordance with some embodiments of the present disclosure.
FIG. 9 is a graph showing a computational time of a global set of frontiers per iteration, by conventional approaches for the exploration.
FIG. 10 is a graph showing the computational time of the global set of frontiers per iteration, by the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
A frontiers-based strategy for the exploration of unknown and large environments involves repeatedly planning a path towards a boundary region between a free space and an unknown space. This boundary region between the free space and the unknown space is termed a frontier. Most of the conventional frontiers-based exploration techniques employ edge-detection and clustering methods for computing all frontiers (referred to hereafter as a global set of frontiers) present in the unknown environment. These techniques need to process a complete map of the environment frequently, and hence are computationally expensive, especially in the case of large environments, as the robot has to wait in its current position until the frontiers are detected. Some of the conventional frontiers-based exploration techniques attempted to reduce the dependence on computing the global set of frontiers by considering only the frontiers in a current field of view (FoV) for planning. However, the exploration then terminates incomplete if no more frontiers are present in the current FoV, which may lead to inaccurate and inefficient exploration.
The present disclosure herein provides methods and systems for the exploration of a large and unknown environment using robots, which solve the technical problem of exploring the unknown and large environment by processing the map only a limited number of times. Hence, the exploration of the unknown and large environment is performed accurately and completely, with less computation time. The present disclosure computes the global set of frontiers in three stages. In the first stage, the frontiers in the current FoV are determined for the exploration, and the process is continued until no frontiers in the current FoV are determined in the path.
In the second stage, the frontiers in the local neighborhood are determined for the exploration, when no frontiers in the current FoV are determined in the first stage. If frontiers in the local neighborhood are determined, then the present disclosure switches back to explore the frontiers in the current FoV first, and then the frontiers in the local neighborhood. The process is continued until no frontiers in the current FoV and no frontiers in the local neighborhood are determined in the path. In the third and final stage, the remaining frontiers (referred to hereafter as 'global frontiers') are determined, when no frontiers in the current FoV and no frontiers in the local neighborhood are determined in the second stage. If global frontiers are determined, then the present disclosure switches back to explore the frontiers in the current FoV first, then the frontiers in the local neighborhood, and finally the global frontiers. The process is continued until no frontiers in the current FoV, no frontiers in the local neighborhood, and no global frontiers are determined in the path.
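For readability, the three-stage strategy may be summarized as a simple control loop, as in the following Python sketch. The sketch is illustrative only; the stage functions (find_fov_frontier, find_local_frontier, find_global_frontier) and the visit callback are hypothetical placeholders standing in for the detailed steps described below, not part of the disclosed implementation.

```python
from typing import Callable, Optional

def explore(find_fov_frontier: Callable[[], Optional[str]],
            find_local_frontier: Callable[[], Optional[str]],
            find_global_frontier: Callable[[], Optional[str]],
            visit: Callable[[str], None]) -> None:
    # Stage order is re-evaluated after every visit, so the loop always
    # falls back to cheaper FoV frontiers before local or global ones.
    while True:
        frontier = find_fov_frontier()         # stage 1: current FoV
        if frontier is None:
            frontier = find_local_frontier()   # stage 2: local neighborhood (RRT)
        if frontier is None:
            frontier = find_global_frontier()  # stage 3: tailored random tree
        if frontier is None:
            return                             # no frontiers remain: map is final
        visit(frontier)                        # visiting may reveal new frontiers

# Toy demo: each stage pops from a fixed queue of labeled frontiers.
queues = {"fov": ["f1", "f2"], "local": ["l1"], "global": ["g1"]}
pop = lambda key: (queues[key].pop(0) if queues[key] else None)
explore(lambda: pop("fov"), lambda: pop("local"), lambda: pop("global"),
        lambda f: print("visited", f))
```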
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 10, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary systems and/or methods.
FIG. 1 is an exemplary block diagram of a system 100 for a real-time exploration of a large and an unknown environment using a robot, in accordance with some embodiments of the present disclosure. In an embodiment, the system 100 includes or is otherwise in communication with one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more hardware processors 104, the memory 102, and the I/O interface(s) 106 may be coupled to a system bus 108 or a similar mechanism.
The I/O interface(s) 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and interfaces for peripheral device(s), such as a keyboard, a mouse, an external memory, a plurality of sensor devices, a printer, and the like. Further, the I/O interface(s) 106 may enable the system 100 to communicate with other devices, such as web servers and external databases.
The I/O interface(s) 106 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, local area network (LAN), cable, etc., and wireless networks, such as Wireless LAN (WLAN), cellular, or satellite. For this purpose, the I/O interface(s) 106 may include one or more ports for connecting a number of computing systems or devices with one another or to another server.
The one or more hardware processors 104 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the one or more hardware processors 104 are configured to fetch and execute computer-readable instructions stored in the memory 102. In the context of the present disclosure, the expressions ‘processors’ and ‘hardware processors’ may be used interchangeably. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, portable computers, notebooks, hand-held devices, workstations, mainframe computers, servers, a network cloud and the like.
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, the memory 102 includes a plurality of modules 102a and a repository 102b for storing data processed, received, and generated by one or more of the plurality of modules 102a. The plurality of modules 102a may include routines, programs, objects, components, data structures, and so on, which perform particular tasks or implement particular abstract data types.
The plurality of modules 102a may include programs or computer-readable instructions or coded instructions that supplement applications or functions performed by the system 100. The plurality of modules 102a may also be used as, signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the plurality of modules 102a can be used by hardware, by computer-readable instructions executed by the one or more hardware processors 104, or by a combination thereof. In an embodiment, the plurality of modules 102a can include various sub-modules (not shown in FIG. 1). Further, the memory 102 may include information pertaining to input(s)/output(s) of each step performed by the processor(s) 104 of the system 100 and methods of the present disclosure.
The repository 102b may include a database or a data engine. Further, the repository 102b, amongst other things, may serve as a database or include a plurality of databases for storing the data that is processed, received, or generated as a result of the execution of the plurality of modules 102a. Although the repository 102b is shown internal to the system 100, it will be noted that, in alternate embodiments, the repository 102b can also be implemented external to the system 100, where the repository 102b may be stored within an external database (not shown in FIG. 1) communicatively coupled to the system 100. The data contained within such an external database may be periodically updated. For example, new data may be added into the external database and/or existing data may be modified and/or non-useful data may be deleted from the external database. In one example, the data may be stored in an external system, such as a Lightweight Directory Access Protocol (LDAP) directory and a Relational Database Management System (RDBMS). In another embodiment, the data stored in the repository 102b may be distributed between the system 100 and the external database.
In an embodiment, the system 100 may be configured in the robot for real-time exploration of the large and the unknown environment. In another embodiment, the system 100 may be connected to the robot through a wired or a wireless connection, for real-time exploration of the large and the unknown environment.
Referring to FIG. 2A and FIG. 2B, components and functionalities of the system 100 are described in accordance with an example embodiment of the present disclosure. For example, FIG. 2A and FIG. 2B illustrate exemplary flow diagrams of a processor-implemented method 200 for the real-time exploration of the large and the unknown environment using the robot, in accordance with some embodiments of the present disclosure. Although steps of the method 200 including process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any practical order. Further, some steps may be performed simultaneously, or some steps may be performed alone or independently.
At step 202 of the method 200, the one or more hardware processors 104 of the system 100 are configured to receive a current depth measurement of the unknown environment, an initial 3-dimensional (3-D) map of the unknown environment, and a current pose of the robot. The unknown environment defines a type of an environment or an infrastructure where the robot has to perform the exploration process. In an embodiment, the type of the environment may include, but is not limited to, oil and gas, petrochemical, and power industries. In an embodiment, the type of the environment may be an undefined boundary where a search and rescue operation is to be performed. The type of exploration process may be different for each type of the environment. For example, in the petrochemical industry, the exploration process may include scanning various machineries to monitor corresponding operating parameters. In the search and rescue operation, the exploration process may include searching for and rescuing living beings (those to be rescued). The current pose of the robot includes a current position of the robot as well as a current orientation (angle) at which the robot is standing.
In an embodiment, the robot may be a ground vehicle or an aerial vehicle, based on the type of the environment. In the case of the petrochemical industry, the robot may be an aerial vehicle which may traverse the entire plant. In an embodiment, the aerial vehicle may be an unmanned aerial vehicle, a micro air vehicle (MAV) such as a drone, and the like.
The current depth measurement of the unknown environment is a real-time depth measurement of the environment that the robot can perceive. The depth measurement of the unknown environment is received in real-time through a depth sensor. In an embodiment, the depth sensor may be an RGB-D camera, a light detection and ranging (LIDAR) sensor, and the like. During the exploration, the robot perceives the unknown environment using the depth sensor. The depth sensor may be mounted on the robot with a fixed pitch and has a limited field of view (FoV), described by a vertical and a horizontal opening angle $[\alpha_v, \alpha_h]$ in degrees, and may sense up to a defined range of $r_{sensor}$ units (for example, 2 meters).
In an embodiment, the initial 3-dimensional (3-D) map of the unknown environment may define a starting location of the unknown environment, from where the exploration is to be started by the robot. The initial 3-D map may be a 3-D octomap. In the context of the present disclosure, the unknown environment is defined over a 3-D total space $V \subset \mathbb{R}^3$, a bounded volume that approximates the environment to be explored. An unreachable space $V_{ur} \subset V$ represents enclosed hollow spaces, such as the inner side of thick walls, which cannot be perceived by the depth sensor. A reachable space $V_r$ is obtained by removing the unreachable space $V_{ur}$ from the total space $V$. A known space $V_k \subset V_r$ is the space already perceived by the robot through the depth sensor. An unknown space $V_{unk}$ represents the region that is still not perceived by the robot, and is obtained by removing the known space $V_k$ from the reachable space $V_r$. A free space $V_{free} \subset V_k$ is an obstacle-free region. An occupied space $V_{occ}$ represents the region containing obstacles in the FoV, and is obtained by removing the free space $V_{free}$ from the known space $V_k$. In this context, the frontiers represent the boundary between the free space $V_{free}$ and the unknown space $V_{unk}$.
The initial 3-D map $M$ is updated continuously with the information perceived by the depth sensor, until the exploration is completed. The final 3-D map $M$ approximates the environment by means of a 3-D grid of voxels $\{\hat{m}_i\}$, where the i^th voxel represents a cube of side length $s$ units with centroid at $\hat{m}_i$. Every $\hat{m}_i$ is labeled as belonging to either the free space $V_{free}$, the occupied space $V_{occ}$, or the unknown space $V_{unk}$: $\hat{m}_i \in V_{free} \cup V_{occ} \cup V_{unk},\ \forall i$. The voxels in the free space $V_{free}$ and the unknown space $V_{unk}$ are jointly used to decide the future path for the robot towards each frontier. Additionally, the voxels in the free space $V_{free}$ and the occupied space $V_{occ}$ ensure safe navigation. To explore the environment, the robot should repeatedly plan the path towards the frontiers.
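As an illustration of the frontier definition above, the following Python sketch extracts frontier voxels from a labeled voxel grid, taking a frontier voxel to be a free voxel with at least one unknown 6-neighbor. The dense numpy array is a simplifying assumption; the disclosed method operates on a 3-D octomap.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2  # per-voxel labels, as defined above

def frontier_voxels(grid: np.ndarray) -> np.ndarray:
    """Indices of free voxels that touch an unknown voxel (6-connectivity).

    Note: np.roll wraps around at the grid borders; a production version
    would pad the grid instead, but the wrap is ignored in this toy.
    """
    unknown = grid == UNKNOWN
    near_unknown = np.zeros_like(unknown)
    for axis in range(grid.ndim):
        for shift in (1, -1):
            near_unknown |= np.roll(unknown, shift, axis=axis)
    return np.argwhere((grid == FREE) & near_unknown)

# Toy map: one explored (free) slab next to unexplored space.
grid = np.full((4, 4, 4), UNKNOWN, dtype=int)
grid[0, :, :] = FREE
print(frontier_voxels(grid))  # the free slab borders unknown space
```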
At step 204 of the method 200, the one or more hardware processors 104 of the system 100 are configured to determine one or more field-view frontiers in a current field of view (FoV) of the robot, from the current pose of the robot. The one or more field-view frontiers (F_c) in a current field of view (FoV) of the robot are determined using the current depth measurement. The one or more field-view frontiers in a current field of view (FoV) are directly used to generate a command velocity without path planning and without processing the map, allowing the robot to perform faster exploration. FIG. 3 is an exemplary grey-scale image showing the unknown environment with field-view frontiers (F_c) in the current field of view of the robot, in accordance with some embodiments of the present disclosure.
At step 206 of the method 200, the one or more hardware processors 104 of the system 100 are configured to explore the one or more field-view frontiers (F_c) in the current field of view of the robot, determined at step 204 of the method 200, until no field-view frontiers in the current field of view of the robot are present for the exploration. FIG. 4 is an exemplary flow diagram for exploring the one or more field-view frontiers (F_c) in the current field of view of the robot, until no field-view frontiers in the current field of view of the robot are present, in accordance with some embodiments of the present disclosure.
As shown in FIG. 4, at step 206a, the command velocity of the robot, required to reach each of the one or more field-view frontiers (F_c) in the current field of view of the robot, is determined from the current position of the robot. The command velocity is used to traverse the robot from one position to another in the unknown environment. The command velocity $\hat{v}_i$ to reach a field-view frontier $\hat{m}_i \in F_c$ from the current position $\hat{m}_r$ of the robot (the voxel containing the center of mass of the robot) may be defined mathematically as in equation (1):

$$\hat{v}_i := K_p\,(\hat{m}_i - \hat{m}_r) \qquad (1)$$

where $K_p := v_{max}/r_{sensor}$, and $K_p$ ensures that the robot attains the maximum velocity $v_{max}$ only when the robot is at a distance of $r_{sensor}$ units from $\hat{m}_i$.
At step 206b, an optimal field-view frontier among the one or more field-view frontiers (F_c) in the current field of view of the robot is determined, based on the command velocity of the robot for each of the one or more field-view frontiers in the current field of view of the robot. The optimal field-view frontier is the field-view frontier that requires the minimum change in the current velocity $\hat{v}_c$ of the robot. The optimal field-view frontier selection criterion may be mathematically expressed as in equation (2):

$$\hat{m}_i^* \leftarrow \underset{\hat{m}_i \in F_c}{\arg\min}\ \lVert \hat{v}_i - \hat{v}_c \rVert \qquad (2)$$
At step 206c, the robot is navigated towards the optimal field-view frontier, from the current pose of the robot, for the exploration, using the command velocity corresponding to the optimal field-view frontier computed at step 206a. At step 206d, the one or more field-view frontiers in the current field of view of the robot are determined, from the optimal field-view frontier, using the current depth measurement, after the exploration of the optimal field-view frontier. At step 206e, the steps 206a through 206d are repeated until no field-view frontiers in the current field of view of the robot are present for the exploration. The initial 3-D map is updated continuously with the path traversals and depth measurements, while the robot is exploring the field-view frontiers in the current field of view of the robot, to obtain an intermediate 3-D map.
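A minimal numerical sketch of equations (1) and (2) follows, assuming example values for v_max and r_sensor; frontier and robot positions are represented as plain 3-D numpy vectors rather than voxels.

```python
import numpy as np

V_MAX = 1.0              # assumed maximum velocity (m/s)
R_SENSOR = 2.0           # assumed sensor range (m), as in the example above
K_P = V_MAX / R_SENSOR   # gain of equation (1)

def command_velocity(frontier: np.ndarray, robot_pos: np.ndarray) -> np.ndarray:
    """Equation (1): velocity command toward a frontier centroid."""
    return K_P * (frontier - robot_pos)

def optimal_frontier(frontiers, robot_pos, current_vel):
    """Equation (2): frontier needing the smallest change in velocity."""
    return min(frontiers, key=lambda m: np.linalg.norm(
        command_velocity(m, robot_pos) - current_vel))

robot_pos = np.array([0.0, 0.0, 1.0])
current_vel = np.array([0.3, 0.0, 0.0])
candidates = [np.array([1.5, 0.5, 1.0]), np.array([-1.0, 1.0, 1.0])]
best = optimal_frontier(candidates, robot_pos, current_vel)
print("optimal frontier:", best, "command:", command_velocity(best, robot_pos))
```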
Even though field-view frontiers may be present in the vicinity of the robot, in the case of narrow and cluttered spaces of the environment, the frontiers may frequently fall outside of the current FoV, resulting in $F_c = \emptyset$ (an empty set). At step 208 of the method 200, the one or more hardware processors 104 of the system 100 are configured to generate a random tree incrementally, starting from the current position of the robot, using a rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present for the exploration. The random tree includes one or more nodes generating one or more paths from the current position of the robot.
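The following Python sketch shows the standard incremental RRT growth step (sample, extend the nearest node, keep the new node if collision-free), flattened to 2-D for brevity; the disclosed method operates on the 3-D map, and is_free() is a hypothetical stand-in for the collision check against occupied voxels.

```python
import math
import random

def is_free(point) -> bool:
    # Stand-in for the collision check against occupied voxels of the map.
    return True

def grow_rrt(root, n_samples=50, step=0.5, bounds=(0.0, 10.0)):
    """Incrementally grow a tree: sample, extend the nearest node by one step."""
    nodes, parent = [root], {0: None}
    for _ in range(n_samples):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        nearest = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[nearest]
        d = math.dist((nx, ny), sample) or 1e-9
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if is_free(new):
            parent[len(nodes)] = nearest  # each node keeps a path back to the root
            nodes.append(new)
    return nodes, parent

nodes, parent = grow_rrt((5.0, 5.0))
print(len(nodes), "nodes; parent of node 1:", parent[1])
```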
At step 210 of the method 200, the one or more hardware processors 104 of the system 100 are configured to determine one or more local frontiers (F_l) in a local neighborhood of the robot, using the random tree generated at step 208 of the method 200 and the intermediate 3-D map obtained at step 206 of the method 200. The one or more local frontiers (F_l) in the local neighborhood of the robot are to be accessible from each node of the one or more nodes present in the random tree. The one or more local frontiers (F_l) in the local neighborhood of the robot are the frontiers that may be present in the close vicinity of the robot, but outside of the current FoV. The random tree is used to find the path $s^*$ leading to the one or more local frontiers (F_l) in the local neighborhood of the robot. FIG. 5 is an exemplary grey-scale image showing the unknown environment with local frontiers (F_l) in the local neighborhood of the robot, in accordance with some embodiments of the present disclosure.
At step 212 of the method 200, the one or more hardware processors 104 of the system 100 are configured to explore the one or more local frontiers (F_l) in the local neighborhood of the robot, determined at step 210 of the method 200, until no local frontiers in the local neighborhood of the robot are present for the exploration. FIG. 6A and FIG. 6B illustrate exemplary flow diagrams for exploring the one or more local frontiers (F_l) in the local neighborhood of the robot, until no local frontiers in the local neighborhood of the robot are present, in accordance with some embodiments of the present disclosure. As shown in FIG. 6A and FIG. 6B, at step 212a, an optimal node among the one or more nodes present in the random tree, is determined, from which a maximum number of local frontiers of the one or more local frontiers (F_l) in the local neighborhood of the robot are perceived. Let the random tree R having N_T nodes, is incrementally built with i^th node is defined as in equation 3:
\xi_i \triangleq \left[ \vec{m}_i \in V_{free},\; \theta_i^* \in [0, 2\pi],\; g(\vec{m}_i, \theta_i^*) \right]^T, \quad \forall\, i \in [1, N_T] ------------------------ (3)
where g(\vec{m}_i, \theta_i^*) represents the maximum number of local frontiers that may be perceived from the i-th node and \theta_i^* denotes the corresponding yaw direction. Mathematically, g(\vec{m}_i, \theta_i^*) and \theta_i^* may be represented as in equation 4 and equation 5 respectively:
g(\vec{m}_i, \theta_i^*) \triangleq \max_{\theta_i \in [0, 2\pi]} g(\vec{m}_i, \theta_i), \quad \forall\, i \in [1, N_T] ------------------------ (4)
\theta_i^* \triangleq \arg\max_{\theta_i} g(\vec{m}_i, \theta_i), \quad \forall\, i \in [1, N_T] --------------------------- (5)
To evaluate g(\vec{m}_i, \theta_i) for a given \theta_i, the unexplored voxels visible from (\vec{m}_i, \theta_i) are counted using the intermediate 3-D map and the parameters [\alpha_v, \alpha_h, r_{sensor}]. The resultant count is denoted as in equation 6:
g(\vec{m}_i, \theta_i) \triangleq \sum_{\phi = -\alpha_h/2}^{\alpha_h/2} \; \sum_{\psi = -\alpha_v/2}^{\alpha_v/2} \; \sum_{r = 0}^{d_{max} \text{ or obstacle}} h_i(r, \theta_i + \phi, \psi) ------------ (6)
where h_i(r, \theta_i + \phi, \psi) \triangleq \begin{cases} 1, & \text{if } v_i(r, \theta_i + \phi, \psi) \text{ is unknown} \\ 0, & \text{otherwise} \end{cases}
The v_i(r, \theta_i + \phi, \psi) represents the voxel in the frame of the depth sensor (assumed at \theta_i and aligned with the world frame) containing the spherical coordinate (r, \phi, \psi). Though the maximum range of the depth sensor is r_{sensor}, d_{max} < r_{sensor} is used to determine g(\vec{m}_i, \theta_i); the smaller d_{max} ensures a faster computation time. All the \xi_i satisfying g(\vec{m}_i, \theta_i^*) > g_{min} are accumulated in the set \Xi_l, which is mathematically expressed as in equation 7:
\Xi_l \triangleq \left\{ (\xi_i)_{i=1}^{N_T} : g(\vec{m}_i, \theta_i^*) > g_{min} \right\} --------------------------------------------------- (7)
where g_{min} represents the minimum number of frontiers that should be visible to qualify \xi_i as a promising node for the exploration. From \Xi_l, \xi^* is selected using equation 8:
\xi^* := \arg\min_{i \in [1, |\Xi_l|]} s_{r_i} -------------------------------------------------------- (8)
where s_{r_i} is the length of the path between \vec{m}_i and \vec{m}_r, which is extracted from the random tree R.
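The following Python sketch is a two-dimensional analog of equations (4) to (8): it estimates the gain by ray casting over discretized yaw directions, keeps nodes whose best gain exceeds g_min, and picks the node with the shortest tree path. The grid encoding, ray discretization, and helper names are illustrative assumptions; note that rays may revisit a cell, which the voxel-wise sum of equation (6) avoids.

import math

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def gain(grid, cell, yaw, fov=math.radians(90), d_max=6, n_rays=30):
    # 2-D analog of equation (6): count unknown cells visible from `cell`
    # along `yaw`, each ray stopping at d_max or at the first obstacle.
    count = 0
    for k in range(n_rays):
        phi = -fov / 2 + fov * k / (n_rays - 1)
        for r in range(1, d_max + 1):
            x = cell[0] + round(r * math.cos(yaw + phi))
            y = cell[1] + round(r * math.sin(yaw + phi))
            if not (0 <= x < len(grid) and 0 <= y < len(grid[0])):
                break
            if grid[x][y] == OCCUPIED:     # ray blocked by an obstacle
                break
            if grid[x][y] == UNKNOWN:      # h_i = 1 in equation (6)
                count += 1
    return count

def best_yaw(grid, cell, n_dirs=8):
    # Equations (4)-(5): maximise the gain over sampled yaw directions.
    return max((2 * math.pi * k / n_dirs for k in range(n_dirs)),
               key=lambda th: gain(grid, cell, th))

def select_optimal_node(grid, nodes, path_lengths, g_min=3):
    # Equations (7)-(8): among nodes whose best gain exceeds g_min,
    # choose the one with the shortest path s_{r_i} from the robot.
    promising = [i for i, m in enumerate(nodes)
                 if gain(grid, m, best_yaw(grid, m)) > g_min]
    return min(promising, key=lambda i: path_lengths[i], default=None)

# Example: 10 x 10 grid, all unknown except the robot's free cell.
grid = [[UNKNOWN] * 10 for _ in range(10)]
grid[5][5] = FREE
print(gain(grid, (5, 5), 0.0))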
At step 212b, the robot is navigated towards the optimal node, using the random tree, from the current pose of the robot, by using the command velocity. At step 212c, whether the one or more field-view frontiers (F_c) in the current field of view of the robot are present is determined while navigating the robot towards the optimal node, using the current depth measurement. If the one or more field-view frontiers (F_c) in the current field of view of the robot are present while navigating the robot towards the optimal node, then at step 212d, the one or more field-view frontiers (F_c) in the current field of view of the robot are explored, until no field-view frontiers in the current field of view of the robot are present, using a similar approach as described in steps 206a to 206e. While traversing, if at any point F_c ≠ Ø (not a null set), then the robot falls back to the one or more field-view frontiers (F_c) to drive the robot.
At step 212e, the robot is navigated to the optimal node for the exploration, using the command velocity, if the one or more field-view frontiers (F_c) in the current field of view of the robot are not present while navigating the robot towards the optimal node (at step 212d). Then, at step 212f, whether the one or more field-view frontiers (F_c) in the current field of view of the robot are present is determined, from the optimal node, using the current depth measurement. If the one or more field-view frontiers (F_c) in the current field of view of the robot are present from the optimal node, at step 212f, then at step 212g, the exploration of the one or more field-view frontiers (F_c) in the current field of view of the robot is performed, until no field-view frontiers in the current field of view of the robot are present, using a similar approach as described in steps 206a to 206e.
At step 212h, the random tree R is further generated incrementally, from the current position of the robot, using the rapidly exploring random tree (RRT) algorithm, after no field-view frontiers in the current field of view of the robot are present from the optimal node, at step 212g. At step 212i, the steps 212a through 212h are repeated until no local frontiers in the local neighborhood of the robot are present for the exploration. If F_l = Ø (a null set) after generating the random tree R up to N_T nodes, then no local frontiers in the local neighborhood of the robot are present. The intermediate 3-D map is updated continuously with the path traversals and depth measurements, while the robot is exploring the field-view frontiers in the current field of view of the robot and the one or more local frontiers in the local neighborhood of the robot, to obtain a successive 3-D map.
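A compact sketch of the fallback behavior in steps 212b to 212d is given below; the callables passed in (step_fn, fov_frontiers, explore_fov) are assumed interfaces standing in for the navigation and exploration routines described above, not identifiers from the specification.

def drive_to_node(path, step_fn, fov_frontiers, explore_fov):
    # Sketch of steps 212b-212d: follow the tree path to the optimal node,
    # but fall back to field-view exploration the moment F_c is non-empty.
    for waypoint in path:
        step_fn(waypoint)          # one motion step along the RRT path
        if fov_frontiers():        # step 212c: F_c checked while moving
            explore_fov()          # step 212d: steps 206a-206e
            return False           # optimal node not reached this pass
    return True                    # step 212e: node reached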
At step 214 of the method 200, the one or more hardware processors 104 of the system 100 are configured to generate a tailored random tree, from the current position of the robot, using the successive 3-D map obtained at step 212, after (i) no field-view frontiers in the current field of view of the robot, at step 206 of the method 200, and (ii) no local frontiers in the local neighborhood of the robot, at step 212 of the method 200, are present. A tailored RRT algorithm may be used to generate the tailored random tree, from the current position of the robot. The tailored random tree is used to determine a global frontier using the nodes present in the tailored random tree. The global frontier is neither the field-view frontier in the current FoV, nor the local frontier (when the RRT algorithm is unable to determine the local frontier). If no global frontier is found, i.e. F_g = Ø (a null set), then the exploration process is terminated. FIG. 7 is an exemplary grey-scale image showing the unknown environment with global frontiers F_g, in accordance with some embodiments of the present disclosure.
If F_g ≠ Ø (not a null set), then the tailored RRT algorithm is used to generate the tailored random tree, using the successive 3-D map. In the tailored RRT algorithm, the tailored random tree is initialized with the path P that spans most of the free space V_free generated during the exploration. Furthermore, if any frontier exists, then such a frontier may be at the boundary of the free space V_free. Hence, such initialization results in fast convergence to the global frontiers F_g.
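A minimal sketch of this seeding strategy is shown below, assuming the traversed path P is available as an ordered, non-empty list of waypoints; seeding every past waypoint as a tree node is one plausible reading of the initialization described above.

def init_tailored_tree(traversed_path):
    # Seed the tailored random tree with the already-traversed path P,
    # which spans most of V_free, instead of a single root node.
    # Assumes traversed_path is non-empty.
    nodes = list(traversed_path)
    parents = [None] + list(range(len(nodes) - 1))   # chain along P
    return nodes, parents

# grow_rrt (see the earlier sketch) can then extend this seeded tree, so
# new samples attach near the free-space boundary where frontiers lie.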
At step 216 of the method 200, the one or more hardware processors 104 of the system 100 are configured to explore the global frontier F_g determined at step 214 of the method 200, until no global frontiers are present for the exploration. FIG. 8A and FIG. 8B illustrate exemplary flow diagrams for exploring the global frontier F_g, until no global frontiers are present, in accordance with some embodiments of the present disclosure. As shown in FIG. 8A and FIG. 8B, at step 216b, whether the one or more field-view frontiers (F_c) in the current field of view of the robot are present is determined, from the global frontier, using the current depth measurement. If the one or more field-view frontiers (F_c) in the current field of view of the robot are present from the global frontier, at step 216b, then at step 216c, the one or more field-view frontiers (F_c) in the current field of view of the robot are explored, until no field-view frontiers in the current field of view of the robot are present, using a similar approach as described in steps 206a to 206e.
At step 216d, whether the one or more local frontiers F_l in the local neighborhood of the robot are present is determined, from the global frontier F_g, using the successive 3-D map, if the one or more field-view frontiers (F_c) in the current field of view of the robot are not present. At step 216e, if the one or more local frontiers F_l in the local neighborhood of the robot are present, at step 216d, from the global frontier F_g, then the one or more local frontiers F_l in the local neighborhood of the robot are explored, until no local frontiers in the local neighborhood of the robot are present, using a similar approach as described in steps 212a through 212i.
At step 216f, the next tailored random tree is generated to determine a next global frontier F_g, from the current pose of the robot, using the successive 3-D map, after (i) no field-view frontiers in the current field of view of the robot and (ii) no local frontiers in the local neighborhood of the robot, are present. If the next global frontier F_g is determined, at step 216f, then at step 216g, the steps 216a through 216g are repeated, until no global frontiers are present for the exploration. If no global frontier is found, i.e. F_g = Ø (a null set), at step 216f, then the exploration process is terminated. The successive 3-D map is updated continuously with the path traversals and depth measurements, while the robot is exploring the field-view frontiers (F_c) in the current field of view of the robot, the one or more local frontiers F_l in the local neighborhood of the robot, and the global frontiers F_g, to obtain a final 3-D map. The final 3-D map is the result of the exploration process and is used to check the accuracy of the exploration process. Further, the final 3-D map may be used in the future for further exploration of the environment.
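The overall three-tier behavior of the method 200 may be summarized by the following non-limiting sketch, in which the four callables are assumed interfaces: each frontier query reflects the current state of the map, and explore_one drives the robot to a frontier while updating the map.

def explore(fov_frontiers, local_frontiers, global_frontier, explore_one):
    # Tier 1 (step 206): frontiers inside the current FoV are preferred.
    # Tier 2 (steps 208-212): RRT-discovered local frontiers come next.
    # Tier 3 (steps 214-216): a tailored RRT supplies global frontiers.
    while True:
        if fs := fov_frontiers():
            explore_one(fs[0])
        elif fs := local_frontiers():
            explore_one(fs[0])
        elif (f := global_frontier()) is not None:
            explore_one(f)
        else:
            break   # F_c = F_l = F_g = empty set: final 3-D map obtained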
Each of (i) the one or more field-view frontiers (F_c) in the current field of view of the robot, (ii) the one or more local frontiers F_l in the local neighborhood of the robot, and (iii) the one or more global frontiers F_g, is a boundary between the free space V_free and the unknown space V_unk. The global set of frontiers is formed by combining (i) the one or more field-view frontiers (F_c) in the current field of view of the robot, explored at step 206 of the method 200, (ii) the one or more field-view frontiers (F_c) in the current field of view of the robot and the one or more local frontiers F_l in the local neighborhood of the robot, explored at step 212 of the method 200, and (iii) the one or more field-view frontiers (F_c) in the current field of view of the robot, the one or more local frontiers F_l in the local neighborhood of the robot, and the one or more global frontiers F_g, explored at step 216 of the method 200, after the exploration is terminated.
Time complexity of the field-view frontiers F_c (O(F_c)): The time complexity of the field-view frontiers F_c is directly proportional to the number of depth measurements in the depth sensor FoV. If there are n × m depth measurements in the current FoV, then O(F_c) is proportional to O(nm). Each depth measurement is queried in the map to compute F_c. The complexity of raising a query in the map is O(log(q)), where q = V/S^3. Hence, the time complexity of raising n × m queries may be given as O(nm log(q)). To ensure the frontiers in F_c are accessible from \vec{m}_r, a collision-free path from \vec{m}_r to \vec{m}_i ∈ F_c is required. For collision checking between \vec{m}_r and \vec{m}_i, a cylinder with radius d_safe and axis aligned to the line joining \vec{m}_r and \vec{m}_i is assumed. All the voxels inscribed by the cylinder are queried for collision checking. Now, the time complexity of querying all voxels inscribed by the cylinder is directly proportional to r_sensor and inversely proportional to the volume of the voxel, i.e. proportional to 1/S^3. Hence, the complexity for computing collision-free frontiers in F_c may be mathematically represented as in equation 9:
O(F_c) = O\!\left( \frac{nm\, r_{sensor}}{S^3} \log(q) \right) --------------------------------- (9)
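The cylinder query underlying equation 9 may be sketched as follows. Enumerating candidate voxel centres over the cylinder's bounding box and testing each one's distance to the segment axis is one straightforward, unoptimized realization (an assumption, not the claimed implementation); the candidate count grows with the segment length and with 1/S^3, which is the r_sensor/S^3 factor in equation 9.

import math

def cylinder_voxels(p0, p1, d_safe, S):
    # Enumerate voxel centres (voxel side S) lying within distance d_safe
    # of the segment p0-p1, i.e. the voxels inscribed by the cylinder.
    lo = [min(a, b) - d_safe for a, b in zip(p0, p1)]
    hi = [max(a, b) + d_safe for a, b in zip(p0, p1)]
    axis = tuple(b - a for a, b in zip(p0, p1))
    L2 = sum(c * c for c in axis) or 1.0
    def frange(a, b):
        return [a + k * S for k in range(int((b - a) / S) + 1)]
    out = []
    for x in frange(lo[0], hi[0]):
        for y in frange(lo[1], hi[1]):
            for z in frange(lo[2], hi[2]):
                w = (x - p0[0], y - p0[1], z - p0[2])
                # Project onto the segment, clamped to its endpoints.
                t = max(0.0, min(1.0, sum(wi * ai for wi, ai in zip(w, axis)) / L2))
                closest = tuple(p0[k] + t * axis[k] for k in range(3))
                if math.dist((x, y, z), closest) <= d_safe:
                    out.append((x, y, z))
    return out

# Example: 3 m segment, safety radius 0.3 m, voxel side 0.1 m.
print(len(cylinder_voxels((0, 0, 0), (3, 0, 0), 0.3, 0.1)))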
Time complexity of the local frontiers F_l (O(F_l)): The F_l computation exploits the merit of the finite-length RRT algorithm to find the frontiers. The time complexity of growing the random tree R with N_T nodes is O(N_T log(N_T)). Each node is assigned a gain g(\vec{m}_i, \theta_i^*). To compute g(\vec{m}_i, \theta_i^*), all the voxels in the FoV are queried by limiting r_sensor up to d_max. Hence, the complexity for computing g(\vec{m}_i, \theta_i^*) is given by O(N_FoV log(q)), where N_FoV = nm d_max / S represents the number of voxels in the FoV up to d_max, and log(q) denotes the complexity of each query. Now, the complexity for N_T nodes is given as O(N_FoV N_T log(q)). To make sure all the edges are collision-free, the occupancy of all voxels in the cylinder bounding an edge is queried. Hence, the time complexity for all N_T - 1 edges is given as O(((N_T - 1)/S^3) log(q)). Therefore, the total time complexity for computing F_l in a single iteration and planning the path to the frontier is given as in equation 10:
O(F_l) = O(N_T \log(N_T)) + O(N_{FoV}\, N_T \log(q)) + O\!\left( \frac{N_T - 1}{S^3} \log(q) \right) -------- (10)
Time complexity of the global frontiers F_g (O(F_g)): The computation of \vec{m}_i ∈ F_g requires processing of the complete map. Hence the time complexity is proportional to the number of voxels representing the map, i.e. O(q), times the complexity of querying each voxel, O(log(q)), where q = V/S^3. The Dijkstra algorithm may be used to plan the path towards the frontier in F_g; this has a time complexity of O(E + q log(q)), where E represents the total number of edges. On the contrary, the tailored RRT based algorithm is used to find the path, which has a complexity of O(N_T log(N_T)). In the worst case, the total number of nodes N_T may be equal to the total number of free voxels, i.e., N_free = V_free/S^3. Hence the maximum complexity for finding the path may be given as O(N_free log(N_free)), which is less than that of the Dijkstra algorithm, i.e., O(E + q log(q)).
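As a worked illustration with assumed numbers (not taken from the specification: V = 1000 m^3, V_free = 500 m^3, voxel side S = 0.1 m, a 26-connected voxel grid so that E ≈ 13q), the comparison may be made concrete:

\begin{align*}
q &= V / S^{3} = 10^{6}, \qquad N_{free} = V_{free} / S^{3} = 5 \times 10^{5} \\
\text{Dijkstra: } O(E + q \log q) &\approx 13q + q \log_{2} q \approx 3.3 \times 10^{7} \\
\text{tailored RRT: } O(N_{free} \log N_{free}) &\approx 5 \times 10^{5} \times 19 \approx 9.5 \times 10^{6}
\end{align*}

Under these assumptions, the tailored RRT bound is roughly a factor of three smaller, consistent with the comparison above.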