Abstract: A system and method for aiding a mobile robot to navigate an environment are provided. The method comprises implementing a sparse network representing the environment. The method further comprises receiving a target image of a section of the environment as seen at a target location; capturing one or more initial images of sections of the environment; determining a match of the captured initial images to the images represented as nodes in the implemented sparse network; identifying a queue of images of connected sections from the sparse network, connecting a section corresponding to the matched images to the target location; traversing the environment with the mobile robot and capturing intermediate images at regular intervals; determining a match for the captured intermediate images with a next image in the queue of images; and navigating the mobile robot to a position corresponding to the matched captured intermediate images till the mobile robot reaches the target location.
The present disclosure generally relates to autonomous guided vehicles, such
as a mobile robot, implemented to move in an environment, and particularly to a system and method for aiding the mobile robot to move in the work area without a need to localize the mobile robot in the environment.
BACKGROUND
[0002] Autonomous guided vehicles (AGVs), also known as mobile robots, are
increasingly being employed for transporting goods and materials from one place to another in constrained environments, such as a factory or a warehouse. For example, mobile robots are used in warehouse environments to assist with inventory management by transporting goods from one area of the warehouse to another. In the warehouse, the mobile robot may travel from a loading area to a dropping area based on a control system and without intervention from users. In a manufacturing plant, the mobile robots can transport items such as heavy vehicle components like engines, chassis, etc. along a route on a floor of the manufacturing plant to deliver the payload from one location to another or to allow various manufacturing operations to be performed thereon. Mobile robots may offer the ability to carry payloads too heavy for a person to carry and without the supervision of a person, while also offering the flexibility to be reconfigured to follow a different route or carry different types of payloads.
[0003] Most systems involving such mobile robots implement ground markers
placed on a floor, usually in the form of a matrix, to enable the mobile robots to follow a path defined using a combination of such ground markers. The mobile robot determines its position with respect to the floor based on the ground marker in the vicinity (specifically, directly underneath) thereof. In other examples, some systems may implement radio transmitters, like position transmitters installed in the environment and receivers in the mobile robot, to aid with the navigation. All such systems require an infrastructure to be set up, such as installing the ground markers and/or installing relevant sensors, which may be expensive and not always feasible. Moreover, in such systems, the mobile robot may sometimes deviate from the predefined path during its operation due to odometry error or similar factors, and may need to be recalibrated. In other examples, the mobile robots may implement techniques like SLAM (Simultaneous Localisation and Mapping) based navigation which lets the mobile robot build a map and localize itself in that map at the same time. However, the said SLAM technique estimates sequential movement, which includes some margin of error; and such error accumulates over time, causing substantial deviation from actual values.
[0004] In general, with conventional techniques, as described above, small
uncertainties in the environment or changes in the local environment lead to errors in localization of the mobile robot within a prescribed map/environment. This leads to subsequent errors in predicting the exact location of the robot, and the navigation stack is then unable to navigate the environment. Such a problem of an uncertain and dynamic environment leading to incorrect localization, and then to erroneous navigation, has been tackled in the past. Researchers have primarily focused on developing robust localization methods which can reject disturbances associated with uncertainty. However, a complete system that does not localize the robot in terms of coordinates and instead depends on navigation based on the perspective nature of the viewed images has not been focused upon.
[0005] Therefore, in light of the foregoing discussion, there exists a need to overcome
problems associated with conventional techniques and provide systems and/or methods for navigation of mobile robots in different environments, and specifically for assisting the mobile robot to navigate the environment without a need to localize itself.
SUMMARY
[0006] In an aspect of the present disclosure, a method for aiding a mobile robot to
navigate an environment is provided. The method comprises receiving images of sections of the environment. The method further comprises analysing the images to determine a scene change between two of the received images of the sections of the environment. The method further
comprises identifying two connected sections in the environment from the perspective of navigation from one position to another position therein, based on the determined scene change between the corresponding two images of the said sections of the environment. The method further comprises generating a sparse network with images of the identified connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment. The method further comprises storing the generated sparse network to be implemented by the mobile robot for navigating the environment.
[0007] In one or more embodiments, the method further comprises configuring a
robot device to traverse the environment to capture images, via an image capturing device, of the sections of the environment.
[0008] In one or more embodiments, the method further comprises receiving the
images from an operator device associated with the environment.
[0009] In another aspect of the present disclosure, a method for aiding a mobile robot
to navigate an environment is provided. The method comprises implementing a sparse network with images of connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment. The method further comprises receiving a target image of a section of the environment which forms at least a part of a view as captured by an image capturing device of the mobile robot when the mobile robot is positioned at a target location in the environment where the mobile robot is required to be navigated to. The method further comprises configuring the image capturing device of the mobile robot to capture one or more initial images of sections of the environment from a current position thereof. The method further comprises determining a match of at least one of the one or more captured initial images to one of the images represented as nodes in the implemented sparse network. The method further comprises identifying a queue of images of connected sections in the environment from the implemented sparse network, connecting section corresponding to the matched at least one of the one or more captured initial images to the target location in the environment corresponding to the target image. The method further comprises configuring the mobile robot to traverse the environment starting from the said current position, and the image capturing device therein to
capture intermediate images at regular intervals. The method further comprises determining a match for at least one of the captured intermediate images with a next image in the identified queue of images. The method further comprises navigating the mobile robot to a position corresponding to the matched at least one of the captured intermediate images till the mobile robot has navigated to the target location.
[0010] In one or more embodiments, navigating the mobile robot comprises
navigating the mobile robot in the environment while avoiding obstacles therein.
[0011] In one or more embodiments, determining the match between two images
comprises determining that image features of the two images being matched are mapped at least to a predefined threshold.
[0012] In yet another aspect of the present disclosure, a system for aiding a mobile
robot to navigate an environment is provided. The system comprises a processing arrangement. The processing arrangement is configured to receive images of sections of the environment. The processing arrangement is further configured to analyse the images to determine a scene change between two of the received images of the sections of the environment. The processing arrangement is further configured to identify two connected sections in the environment from the perspective of navigation from one position to another position therein, based on the determined scene change between the corresponding two images of the said sections of the environment. The processing arrangement is further configured to generate a sparse network with images of the identified connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment. The processing arrangement is further configured to store the generated sparse network in a database to be implemented by the mobile robot for navigating the environment.
[0013] In still another aspect of the present disclosure, a system for aiding a mobile
robot to navigate an environment is provided. The system comprises a database having a sparse network with images of connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment stored therein. The system further comprises a processing arrangement in signal communication with the
database. The processing arrangement is configured to receive a target image of a section of the environment which forms at least a part of a view as captured by an image capturing device of the mobile robot when the mobile robot is positioned at a target location in the environment where the mobile robot is required to be navigated to. The processing arrangement is further configured to configure the image capturing device of the mobile robot to capture one or more initial images of sections of the environment from a current position thereof. The processing arrangement is further configured to determine a match of at least one of the one or more captured initial images to one of the images represented as nodes in the implemented sparse network. The processing arrangement is further configured to identify a queue of images of connected sections in the environment from the implemented sparse network, connecting section corresponding to the matched at least one of the one or more captured initial images to the target location in the environment corresponding to the target image. The processing arrangement is further configured to configure the mobile robot to traverse the environment starting from the said current position, and the image capturing device therein to capture intermediate images at regular intervals. The processing arrangement is further configured to determine a match for at least one of the captured intermediate images with a next image in the identified queue of images. The processing arrangement is further configured to navigate the mobile robot to a position corresponding to the matched at least one of the captured intermediate images till the mobile robot has navigated to the target location.
[0014] In one or more embodiments, the mobile robot comprises a sensing
arrangement configured to determine obstacles in a path of the mobile robot, to aid the mobile robot in navigating the environment while avoiding obstacles therein.
[0015] In one or more embodiments, the system further comprises an image matching
module configured to: determine image features of the two images being matched; and confirm the match between the two images when the corresponding determined image features are mapped at least to a predefined threshold.
[0016] The foregoing summary is illustrative only and is not intended to be in any
way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE FIGURES
[0017] For a more complete understanding of example embodiments of the present
disclosure, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
[0018] FIG. 1 illustrates a schematic of an exemplary computing system that may
reside on and may be executed by a computer, and which may be connected to a network, in accordance with one or more embodiments of the present disclosure;
[0019] FIG. 2 illustrates a schematic of an exemplary processing arrangement, in
accordance with one or more embodiments of the present disclosure;
[0020] FIG. 3 illustrates a flowchart listing steps involved in a method for aiding a
mobile robot to navigate an environment by generating a sparse network therefor, in accordance with one or more embodiments of the present disclosure;
[0021] FIG. 4 illustrates a schematic of a system for aiding a mobile robot to navigate
an environment by generating a sparse network therefor, in accordance with one or more embodiments of the present disclosure;
[0022] FIG. 5 illustrates a flowchart listing steps involved in a method for aiding a
mobile robot to navigate an environment by implementing a sparse network therefor, in accordance with one or more embodiments of the present disclosure; and
[0023] FIG. 6 illustrates a schematic of a system for aiding a mobile robot to navigate
an environment by implementing a sparse network therefor, in accordance with one or more embodiments of the present disclosure.
DETAILED DESCRIPTION
[0024] In the following description, for purposes of explanation, numerous specific
details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure is not limited to these
specific details.
[0025] Reference in this specification to "one embodiment" or "an embodiment"
means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Moreover, various features are described which may be exhibited by some embodiments and not by others. Similarly, various requirements are described which may be requirements for some embodiments but not for other embodiments.
[0026] Furthermore, in the following detailed description of the present disclosure,
numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be understood that the present disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the present disclosure.
[0027] Embodiments described herein may be discussed in the general context of
computer-executable instructions residing on some form of computer-readable storage medium, such as program modules, executed by one or more computers or other devices. By way of example, and not limitation, computer-readable storage media may comprise non-transitory computer-readable storage media and communication media; non-transitory computer-readable media include all computer-readable media except for a transitory, propagating signal. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or distributed as desired in various embodiments.
[0028] Some portions of the detailed description that follows are presented and
discussed in terms of a process or method. Although steps and sequencing thereof are disclosed in figures herein describing the operations of this method, such steps and sequencing are exemplary. Embodiments are well suited to performing various other steps or variations of the steps recited in
the flowchart of the figure herein, and in a sequence other than that depicted and described herein. Some portions of the detailed descriptions that follow are presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present disclosure, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those utilizing physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as transactions, bits, values, elements, symbols, characters, samples, pixels, or the like.
[0029] In some implementations, any suitable computer usable or computer readable
medium (or media) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-usable, or computer-readable, storage medium (including a storage device associated with a computing device) may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fibre, a portable compact disc read-only memory (CD-ROM), an optical storage device, a digital versatile disk (DVD), a static random access memory (SRAM), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, a media such as those supporting the internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium could even be a suitable medium upon which the program is stored, scanned, compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of the present disclosure, a computer-usable or computer-readable, storage medium may be any tangible medium that can contain or store a program for use by or in connection with the instruction execution system, apparatus, or device.
[0030] In some implementations, a computer readable signal medium may include a
propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. In some implementations, such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. In some implementations, the computer readable program code may be transmitted using any appropriate medium, including but not limited to the internet, wireline, optical fibre cable, RF, etc. In some implementations, a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
[0031] In some implementations, computer program code for carrying out operations
of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Java®, Smalltalk, C++ or the like. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. However, the computer program code for carrying out operations of the present disclosure may also be written in conventional procedural programming languages, such as the "C" programming language, PASCAL, or similar programming languages, as well as in scripting languages such as JavaScript, PERL, or Python. In present implementations, the used language for training may be one of Python, C, C++, using open source libraries like Tensorflow™. Further, decoder in user device (as will be discussed) may use C, C++ or any processor specific ISA. Furthermore, assembly code inside C/C++ may be utilized for specific operation. Also, ASR (automatic speech recognition) and G2P decoder along with entire user system can be run in embedded Linux (any distribution), Android, iOS, Windows, or the like, without any limitations. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the internet using an Internet Service Provider). In some implementations,
electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGAs) or other hardware accelerators, micro-controller units (MCUs), or programmable logic arrays (PLAs) may execute the computer readable program instructions/code by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
[0032] In some implementations, the flowchart and block diagrams in the figures
illustrate the architecture, functionality, and operation of possible implementations of apparatus (systems), methods and computer program products according to various implementations of the present disclosure. Each block in the flowchart and/or block diagrams, and combinations of blocks in the flowchart and/or block diagrams, may represent a module, segment, or portion of code, which comprises one or more executable computer program instructions for implementing the specified logical function(s)/act(s). These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the computer program instructions, which may execute via the processor of the computer or other programmable data processing apparatus, create the ability to implement one or more of the functions/acts specified in the flowchart and/or block diagram block or blocks or combinations thereof. It should be noted that, in some implementations, the functions noted in the block(s) may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
[0033] In some implementations, these computer program instructions may also be
stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks or combinations thereof.
[0034] In some implementations, the computer program instructions may also be
loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed (not necessarily in a particular order) on the computer or other
programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts (not necessarily in a particular order) specified in the flowchart and/or block diagram block or blocks or combinations thereof.
[0035] Referring now to the example implementation of FIG. 1, there is shown a
computing system 100 that may reside on and may be executed by a computer (e.g., computer 12), which may be connected to a network (e.g., network 14) (e.g., the internet or a local area network). Examples of computer 12 may include, but are not limited to, a personal computer(s), a laptop computer(s), mobile computing device(s), a server computer, a series of server computers, a mainframe computer(s), or a computing cloud(s). In some implementations, each of the aforementioned may be generally described as a computing device. In certain implementations, a computing device may be a physical or virtual device. In many implementations, a computing device may be any device capable of performing operations, such as a dedicated processor, a portion of a processor, a virtual processor, a portion of a virtual processor, a portion of a virtual device, or a virtual device. In some implementations, a processor may be a physical processor or a virtual processor. In some implementations, a virtual processor may correspond to one or more parts of one or more physical processors. In some implementations, the instructions/logic may be distributed and executed across one or more processors, virtual or physical, to execute the instructions/logic. Computer 12 may execute an operating system, for example, but not limited to, Microsoft® Windows®; Mac® OS X®; Red Hat® Linux®, or a custom operating system. (Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States, other countries or both; Mac and OS X are registered trademarks of Apple Inc. in the United States, other countries or both; Red Hat is a registered trademark of Red Hat Corporation in the United States, other countries or both; and Linux is a registered trademark of Linus Torvalds in the United States, other countries or both).
[0036] In some implementations, the instruction sets and subroutines of computing
system 100, which may be stored on storage device, such as storage device 16, coupled to computer 12, may be executed by one or more processors (not shown) and one or more memory architectures included within computer 12. In some implementations, storage device 16 may include but is not limited to: a hard disk drive; a flash drive, a tape drive; an optical drive; a RAID array (or other array); a random-access memory (RAM); and a read-only memory (ROM). In some
implementations, network 14 may be connected to one or more secondary networks (e.g., network 18), examples of which may include but are not limited to: a local area network; a wide area network; or an intranet, for example.
[0037] In some implementations, computer 12 may include a data store, such as a
database (e.g., relational database, object-oriented database, triplestore database, etc.) and may be located within any suitable memory location, such as storage device 16 coupled to computer 12. In some implementations, data, metadata, information, etc. described throughout the present disclosure may be stored in the data store. In some implementations, computer 12 may utilize any known database management system such as, but not limited to, DB2, in order to provide multi-user access to one or more databases, such as the above noted relational database. In some implementations, the data store may also be a custom database, such as, for example, a flat file database or an XML database. In some implementations, any other form(s) of a data storage structure and/or organization may also be used. In some implementations, computing system 100 may be a component of the data store, a standalone application that interfaces with the above noted data store and/or an applet / application that is accessed via client applications 22, 24, 26, 28. In some implementations, the above noted data store may be, in whole or in part, distributed in a cloud computing topology. In this way, computer 12 and storage device 16 may refer to multiple devices, which may also be distributed throughout the network.
[0038] In some implementations, computer 12 may execute application 20 for aiding
a mobile robot to navigate an environment. In some implementations, computing system 100 and/or application 20 may be accessed via one or more of client applications 22, 24, 26, 28. In some implementations, computing system 100 may be a standalone application, or may be an applet / application / script / extension that may interact with and/or be executed within application 20, a component of application 20, and/or one or more of client applications 22, 24, 26, 28. In some implementations, application 20 may be a standalone application, or may be an applet / application / script / extension that may interact with and/or be executed within computing system 100, a component of computing system 100, and/or one or more of client applications 22, 24, 26, 28. In some implementations, one or more of client applications 22, 24, 26, 28 may be a standalone application, or may be an applet / application / script / extension that may interact with and/or be executed within and/or be a component of computing system 100 and/or application 20. Examples of client applications 22, 24, 26, 28 may include, but are not limited to, a standard and/or mobile
web browser, an email application (e.g., an email client application), a textual and/or a graphical user interface, a customized web browser, a plugin, an Application Programming Interface (API), or a custom application. The instruction sets and subroutines of client applications 22, 24, 26, 28, which may be stored on storage devices 30, 32, 34, 36, coupled to user devices 38, 40, 42, 44, may be executed by one or more processors and one or more memory architectures incorporated into user devices 38, 40, 42, 44.
[0039] In some implementations, one or more of storage devices 30, 32, 34, 36, may
include but are not limited to: hard disk drives; flash drives, tape drives; optical drives; RAID arrays; random access memories (RAM); and read-only memories (ROM). Examples of user devices 38, 40, 42, 44 (and/or computer 12) may include, but are not limited to, a personal computer (e.g., user device 38), a laptop computer (e.g., user device 40), a smart/data-enabled, cellular phone (e.g., user device 42), a notebook computer (e.g., user device 44), a tablet (not shown), a server (not shown), a television (not shown), a smart television (not shown), a media (e.g., video, photo, etc.) capturing device (not shown), and a dedicated network device (not shown). User devices 38, 40, 42, 44 may each execute an operating system, examples of which may include but are not limited to, Android®, Apple® iOS®, Mac® OS X®; Red Hat® Linux®, or a custom operating system.
[0040] In some implementations, one or more of client applications 22, 24, 26, 28
may be configured to effectuate some or all of the functionality of computing system 100 (and vice versa). Accordingly, in some implementations, computing system 100 may be a purely server-side application, a purely client-side application, or a hybrid server-side / client-side application that is cooperatively executed by one or more of client applications 22, 24, 26, 28 and/or computing system 100.
[0041] In some implementations, one or more of client applications 22, 24, 26, 28
may be configured to effectuate some or all of the functionality of application 20 (and vice versa). Accordingly, in some implementations, application 20 may be a purely server-side application, a purely client-side application, or a hybrid server-side / client-side application that is cooperatively executed by one or more of client applications 22, 24, 26, 28 and/or application 20. As one or more of client applications 22, 24, 26, 28, computing system 100, and application 20, taken singly or in any combination, may effectuate some or all of the same functionality, any description of
effectuating such functionality via one or more of client applications 22, 24, 26, 28, computing system 100, application 20, or combination thereof, and any described interaction(s) between one or more of client applications 22, 24, 26, 28, computing system 100, application 20, or combination thereof to effectuate such functionality, should be taken as an example only and not to limit the scope of the disclosure.
[0042] In some implementations, one or more of users 46, 48, 50, 52 may access
computer 12 and computing system 100 (e.g., using one or more of user devices 38, 40, 42, 44) directly through network 14 or through secondary network 18. Further, computer 12 may be connected to network 14 through secondary network 18, as illustrated with phantom link line 54. Computing system 100 may include one or more user interfaces, such as browsers and textual or graphical user interfaces, through which users 46, 48, 50, 52 may access computing system 100.
[0043] In some implementations, the various user devices may be directly or
indirectly coupled to network 14 (or network 18). For example, user device 38 is shown directly coupled to network 14 via a hardwired network connection. Further, user device 44 is shown directly coupled to network 18 via a hardwired network connection. User device 40 is shown wirelessly coupled to network 14 via wireless communication channel 56 established between user device 40 and wireless access point (i.e., WAP) 58, which is shown directly coupled to network 14. WAP 58 may be, for example, an IEEE 802.11a, 802.11b, 802.11g, Wi-Fi®, RFID, and/or Bluetooth™ (including Bluetooth™ Low Energy) device that is capable of establishing wireless communication channel 56 between user device 40 and WAP 58. User device 42 is shown wirelessly coupled to network 14 via wireless communication channel 60 established between user device 42 and cellular network / bridge 62, which is shown directly coupled to network 14.
[0045] In some implementations, some or all of the IEEE 802.11x specifications may use Ethernet protocol and carrier sense multiple access with collision avoidance (i.e., CSMA/CA) for path sharing. The various 802.11x specifications may use phase-shift keying (i.e., PSK) modulation or complementary code keying (i.e., CCK) modulation, for example. Bluetooth™ (including Bluetooth™ Low Energy) is a telecommunications industry specification that allows, e.g., mobile phones, computers, smart phones, and other electronic devices to be interconnected using a short-range wireless connection. Other forms of interconnection (e.g., Near Field Communication (NFC)) may also be used.
[0045] For the purposes of the present disclosure, the computing system 100 may
include a processing arrangement. Herein, FIG. 2 is a block diagram of an example of a processing arrangement 200 capable of implementing embodiments according to the present disclosure. The processing arrangement 200 is implemented for issuing commands for managing and controlling operations of a mobile robot; and in particular for aiding the mobile robot to navigate an environment (as will be described later in more detail). Herein, the environment may be a warehouse environment, a manufacturing plant and the like; in which the mobile robots are typically implemented. In one embodiment, the application 20 for aiding the mobile robot to navigate the environment as described above may be executed as a part of the processing arrangement 200 as described herein. Thereby, for example in case of a warehouse, the computing system 100 may be a broader system such as the warehouse management system (WMS) as known in the art, in which the processing arrangement 200 may be executed for aiding a mobile robot to navigate an environment. Hereinafter, the terms "computing system 100" and "processing arrangement 200" have been broadly interchangeably used to represent means for aiding a mobile robot to navigate an environment, without any limitations.
[0046] In the example of FIG. 2, the processing arrangement 200 includes a
processing unit 205 for running software applications (such as, the application 20 of FIG. 1) and optionally an operating system. Memory 210 stores applications and data for use by the processing unit 205. Storage 215 provides non-volatile storage for applications and data and may include fixed disk drives, removable disk drives, flash memory devices, and CD-ROM, DVD-ROM or other optical storage devices. An optional user input device 220 includes devices that communicate user inputs from one or more users to the processing arrangement 200 and may include keyboards, mice, joysticks, touch screens, etc. A communication or network interface 225 is provided which allows the processing arrangement 200 to communicate with other computer systems via an electronic communications network, including wired and/or wireless communication and including an Intranet or the Internet. In one embodiment, the processing arrangement 200 receives instructions and user inputs from a remote computer through communication interface 225. Communication interface 225 can comprise a transmitter and receiver for communicating with remote devices. An optional display device 250 may be provided which can be any device capable of displaying visual information in response to a signal from the processing arrangement 200. The components of the processing arrangement 200, including the processing unit 205, the memory
210, the data storage 215, the user input devices 220, the communication interface 225, and the display device 250, may be coupled via one or more data buses 260.
[0047] In the embodiment of FIG. 2, a graphics system 230 may be coupled with the
data bus 260 and the components of the processing arrangement 200. The graphics system 230 may include a physical graphics processing unit (GPU) 235 and graphics memory. The GPU 235 generates pixel data for output images from rendering commands. The physical GPU 235 can be configured as multiple virtual GPUs that may be used in parallel (concurrently) by a number of applications or processes executing in parallel. For example, mass scaling processes for rigid bodies or a variety of constraint solving processes may be run in parallel on the multiple virtual GPUs. Graphics memory may include a display memory 240 (e.g., a framebuffer) used for storing pixel data for each pixel of an output image. In another embodiment, the display memory 240 and/or additional memory 245 may be part of the memory 210 and may be shared with the processing unit 205. Alternatively, the display memory 240 and/or additional memory 245 can be one or more separate memories provided for the exclusive use of the graphics system 230. In another embodiment, graphics system 230 includes one or more additional physical GPUs 255, similar to the GPU 235. Each additional GPU 255 may be adapted to operate in parallel with the GPU 235. Each additional GPU 255 generates pixel data for output images from rendering commands. Each additional physical GPU 255 can be configured as multiple virtual GPUs that may be used in parallel (concurrently) by a number of applications or processes executing in parallel, e.g., processes that solve constraints. Each additional GPU 255 can operate in conjunction with the GPU 235, for example, to simultaneously generate pixel data for different portions of an output image, or to simultaneously generate pixel data for different output images. Each additional GPU 255 can be located on the same circuit board as the GPU 235, sharing a connection with the GPU 235 to the data bus 260, or each additional GPU 255 can be located on another circuit board separately coupled with the data bus 260. Each additional GPU 255 can also be integrated into the same module or chip package as the GPU 235. Each additional GPU 255 can have additional memory, similar to the display memory 240 and additional memory 245, or can share the memories 240 and 245 with the GPU 235. It is to be understood that the circuits and/or functionality of GPU as described herein could also be implemented in other types of processors, such as general-purpose or other special-purpose coprocessors, or within a CPU.
[0048] Referring to FIG. 3, illustrated is a flowchart listing steps involved in a
method 300 for aiding a mobile robot to navigate an environment by generating a sparse network therefor. The steps of the method 300 are implemented by a system, as illustrated in FIG. 4. In particular, FIG. 4 illustrates a schematic of a system 400 for aiding a mobile robot to navigate an environment, in accordance with one or more embodiments of the present disclosure. Herein, the system 400 is specifically implemented for providing a sparse network representation of a given environment which may then be utilized by the mobile robot for navigating the given environment. The system 400 implements the processing arrangement 200 as described in the preceding paragraphs for the said purpose. Herein, the embodiments of the present disclosure have been described with reference to the mobile robot in terms of the problem being solved, and with reference to the mobile robot as part of the disclosed solution. It may be appreciated that the environment may be an entire work area or part of the work area, e.g., in a warehouse environment (not shown) or the like. The mobile robot may be utilized for various operations in the environment, like transferring of goods, such as cartons, in the work area, which is typical, e.g., for the warehouse environment.
[0049] At step 302, the method 300 includes receiving images of sections of the
environment. Correspondingly, in the present system 400, as shown in FIG. 4, the processing arrangement 200 is configured to receive images of sections of the environment. Herein, the "sections of the environment" may be understood as portions or regions of the environment. In the present disclosure, the multiple images as received may correspond to different regions of the environment, which when considered collectively may generally cover the entire environment, or at least a navigable path in the environment. It may be appreciated that, in some examples, the received images may be pre-processed to selectively remove duplicate images or images with largely overlapping regions, to reduce an overall size of the image dataset as implemented.
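By way of a non-limiting illustration only, one possible pre-processing step for dropping near-duplicate images is sketched below in Python, assuming OpenCV and NumPy are available; the hash size and the Hamming-distance threshold are illustrative assumptions rather than prescribed values.

import cv2
import numpy as np

def average_hash(image_path, hash_size=8):
    # Reduce the image to a small grayscale thumbnail and compare each pixel
    # against the mean intensity to obtain a compact binary signature.
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    small = cv2.resize(gray, (hash_size, hash_size))
    return (small > small.mean()).flatten()

def deduplicate(image_paths, max_hamming=5):
    # Keep an image only if its signature differs sufficiently from the
    # signatures of all images already kept.
    kept, hashes = [], []
    for path in image_paths:
        h = average_hash(path)
        if all(np.count_nonzero(h != kh) > max_hamming for kh in hashes):
            kept.append(path)
            hashes.append(h)
    return kept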
[0050] In an embodiment of the present disclosure, the processing arrangement 200
may be configured to implement a robot device (generally represented by the reference numeral 410) to traverse the environment to capture images. The said robot device 410 may include an image capturing device 412 (as shown in FIG. 4) for the purpose of capturing images of the environment. The image capturing device 412 may be in the form of a CMOS sensor, like a camera as known in the art. The image capturing device 412 in the form of a camera, which is known for its low price, ease of use and ability to capture information in abundance, is suitable for vision-based mobile robot navigation as per the embodiments of the present disclosure. The image
capturing device 412 may have a certain field-of-view (FOV) and may accordingly capture images of sections of the environment. In some examples, the processing arrangement 200 may be configured to implement some known image processing techniques to correct artifacts in the captured images, warping of the captured images, and the like, as may be contemplated by a person skilled in the art.
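As a minimal sketch of such a correction step, the following Python snippet (assuming OpenCV, and assuming the intrinsic matrix and distortion coefficients of the image capturing device have been obtained from a prior calibration; the numeric values shown are placeholders) removes lens distortion from a captured frame.

import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients; in practice these would
# be obtained by calibrating the image capturing device beforehand.
camera_matrix = np.array([[700.0, 0.0, 320.0],
                          [0.0, 700.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.25, 0.10, 0.0, 0.0, 0.0])

def correct_image(raw_image):
    # Remove lens distortion (warping) from a captured frame.
    return cv2.undistort(raw_image, camera_matrix, dist_coeffs)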
[0051] In an alternate or additional embodiment, the system 400 may be adapted for
receiving the images from an operator device (generally represented by reference numeral 420) associated with the environment. The operator device 420 may be in the form of a mobile device, like a smartphone, being handled by a human operator, who may selectively capture images of the sections of the environment. Herein, the said selectively captured images may correspond to different regions of the environment, which, as per the said requirement, when considered collectively may generally cover the entire environment, or at least a navigable path in the environment. As used herein, the operator device includes, but is not limited to, a cell phone, such as Apple's iPhone®, other portable electronic devices, such as Apple's iPod Touches®, Apple's iPads®, and mobile devices based on Google's Android® operating system, and any other portable electronic device that includes software, firmware, hardware, or a combination thereof that is capable of at least receiving a wireless signal, decoding if needed, and exchanging information with the processing arrangement to send the captured images thereto.
[0052] At step 304, the method 300 includes analysing the images to determine a
scene change between two of the received images of the sections of the environment. Correspondingly, in the present system 400, the processing arrangement 200 is further configured to analyse the images to determine a scene change between two of the received images of the sections of the environment. Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing; and there are known techniques to achieve the same which exploit geometric, appearance and semantic information to determine which areas in two images may have changed. In some scenarios (e.g., when the cameras that produced the images have widely spaced optical centres, or when the scene consists of deformable/articulated objects), a non-global transformation may need to be estimated to determine corresponding points between two images, e.g., via implementing one or more of optical flow, tracking, object recognition and
pose estimation, or structure-from-motion algorithms. These techniques may be contemplated by a person skilled in the art and thus have not been described herein for the brevity of the present disclosure.
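By way of a non-limiting illustration, a scene change between two received images may be quantified as sketched below in Python, assuming OpenCV; ORB features and the match-ratio threshold are illustrative choices and not the only techniques contemplated herein.

import cv2

def scene_changed(image_a, image_b, min_match_ratio=0.3):
    # Detect and describe key-points in the two grayscale images.
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(image_a, None)
    kp_b, des_b = orb.detectAndCompute(image_b, None)
    if des_a is None or des_b is None:
        return True  # no detectable features is treated as a change
    # Match descriptors between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    # A low fraction of matched key-points suggests the scene has changed.
    ratio = len(matches) / max(len(kp_a), len(kp_b), 1)
    return ratio < min_match_ratio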
[0053] At step 306, the method 300 includes identifying two connected sections in
the environment from the perspective of navigation from one position to another position therein, based on the determined scene change between the corresponding two images of the said sections of the environment. Correspondingly, in the present system 400, the processing arrangement 200 is further configured to identify two connected sections in the environment from the perspective of navigation from one position to another position therein, based on the determined scene change between the corresponding two images of the said sections of the environment. Herein, the two connected sections in the environment may be determined based on the corresponding images with a group of connected pixels with generally similar properties. In a region-based approach, all pixels that correspond to a region of the environment are grouped together and are marked to indicate that they belong to one region (this process is sometimes called segmentation). Herein, pixels are assigned to regions using some criterion that distinguishes them from the rest of the image(s). In one or more examples, the criteria implemented for the segmentation process are value similarity and spatial proximity; i.e., two pixels may be assigned to the same region if they have similar intensity characteristics or if they are close to one another.
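A minimal sketch of such region grouping, assuming OpenCV in Python and an illustrative intensity threshold, is given below; any other segmentation criterion may equally be used.

import cv2

def segment_regions(gray_image, intensity_threshold=128):
    # Group pixels by value similarity: pixels brighter than the threshold
    # form the foreground of a binary image.
    _, binary = cv2.threshold(gray_image, intensity_threshold, 255,
                              cv2.THRESH_BINARY)
    # Group pixels by spatial proximity: spatially connected foreground
    # pixels are labelled as one region.
    num_regions, labels = cv2.connectedComponents(binary)
    return num_regions, labels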
[0054] At step 308, the method 300 includes generating a sparse network with images
of the identified connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment. Correspondingly, in the present system 400, the processing arrangement 200 is further configured to generate a sparse network (as represented by reference numeral 430) with images of the identified connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment. Herein, the sparse network 430 may represent a scene graph, which represents the physical environment in a sparse and semantic way and provides a framework for constructing such a graph. In particular, the sparse network 430 may represent the environment compactly by abstracting the environment as a graph, with the environment being considered as formed of multiple virtual spaces, and in which nodes of the graph may represent entries/exits of virtual spaces and connections (edges) may characterize the relations between the entries/exits of two connected (consecutive) virtual spaces in the environment. Herein, each of the nodes represents a discrete image feature of the environment. There are some known techniques for sparse graph generation using a set of images with identified connected sections, including a salient region detection approach involving image segmentation and pixel saliency, Gaussian blob calculation, Bayesian-based refinement, and the like. It may be appreciated that any suitable techniques may be implemented for the said purpose in consideration of the present embodiments without any limitations.
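By way of a non-limiting illustration, the sparse network 430 may be assembled as a graph as sketched below in Python, assuming the networkx library; the function name and the form of the input (pairs of image identifiers for sections identified as connected) are assumptions made for the purpose of the sketch.

import networkx as nx

def build_sparse_network(connected_pairs):
    # connected_pairs: iterable of (image_id_a, image_id_b) tuples, one per
    # pair of sections identified as connected from a navigation perspective.
    graph = nx.Graph()
    for image_a, image_b in connected_pairs:
        # Each node represents an image of a section; each edge records the
        # physical relationship between two connected sections.
        graph.add_edge(image_a, image_b)
    return graph

A queue of images connecting one section to another (as used later for navigation) may then be derived from such a graph, for example as a shortest path: nx.shortest_path(graph, source=matched_image_id, target=target_image_id).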
[0055] At step 310, the method 300 includes storing the generated sparse network to
be implemented by the mobile robot for navigating the environment. Correspondingly, in the present system 400, the processing arrangement 200 is further configured to store the generated sparse network 430 in a database (as represented by reference numeral 440) to be implemented by a mobile robot for navigating the environment. As used herein, the term database is used broadly to include any known or convenient means for storing data, whether centralized or distributed, relational or otherwise. It may be understood that as the proposed sparse network 430 depicts the environment as a sparse graph, the graph can cover an extensive range of physical spaces. Further, it may be possible that the mobile robot may encounter a broad range of environments or encounter new environments midway through an operation; and for this purpose, the sparse network 430 may offer a quick way for accessing and updating the environment models. Thus, it may be understood that the database 440 may be designed and/or configured to offer the scalability that may be needed to store such a scalable generated sparse network 430 therein.
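As a minimal sketch of such storage, assuming Python's standard pickle module (a file-backed store standing in for the database 440), the generated graph may be persisted and later reloaded as follows; the file name is an assumption.

import pickle

def store_sparse_network(graph, path="sparse_network.pkl"):
    # Serialise the generated sparse network so that it can be implemented
    # by the mobile robot when navigating the environment.
    with open(path, "wb") as handle:
        pickle.dump(graph, handle)

def load_sparse_network(path="sparse_network.pkl"):
    with open(path, "rb") as handle:
        return pickle.load(handle)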
[0056] The present disclosure further relates to implementation of the generated
sparse network 430 for the given environment to aid with navigation of the mobile robot in the same given environment. It may be appreciated that for the purposes of the present disclosure, the said mobile robot (as later represented by reference numeral 610) being aided for navigation may be the same as the robot device 410 which is said to have been implemented to traverse the environment to capture images, via the image capturing device 412, of the sections of the environment.
[0057] Referring to FIG. 5, illustrated is a flowchart listing steps involved in a
method 500 for aiding a mobile robot to navigate an environment by implementing the sparse
network 430 with images of connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment. The steps of the method 500 are implemented by a system, as illustrated in FIG. 6. In particular, FIG. 6 illustrates a schematic of a system 600 for aiding a mobile robot (generally represented by reference numeral 610) to navigate an environment by implementing the sparse network 430 therefor, in accordance with one or more embodiments of the present disclosure. The system 600 implements the processing arrangement 200 as described in the preceding paragraphs for the said purpose. The system 600 is specifically implemented to utilize the sparse network 430, as stored in the database 440, of a given environment by the mobile robot 610 for navigating the said given environment; with the processing arrangement 200 being in signal communication with the database 440 for accessing the sparse network 430. The embodiments with respect to the system 600 of the present disclosure have been described with reference to the mobile robot 610 in terms of the problem being solved, and with reference to the mobile robot 610 as part of the disclosed solution. As discussed before, it may be appreciated that the environment may be an entire work area or part of the work area, e.g., in a warehouse environment (not shown) or the like. As discussed, the mobile robot 610 may be utilized for various operations in the environment, like transferring of goods, such as cartons, in the work area, which is typical, e.g., for the warehouse environment.
[0058] At step 502, the method 500 includes receiving a target image of a section of
the environment which forms at least a part of a view as captured by an image capturing device of the mobile robot when the mobile robot is positioned at a target location in the environment where the mobile robot is required to be navigated to. Correspondingly, in the present system 600, as shown in FIG. 6, the processing arrangement 200 is configured to receive a target image of a section of the environment which forms at least a part of a view as captured by an image capturing device (as represented by reference numeral 612) of the mobile robot 610 when the mobile robot 610 is positioned at a target location in the environment where the mobile robot 610 is required to be navigated to. As discussed, herein, the "sections of the environment" may be understood as portions or regions of the environment. The mobile robot 610 may include the image capturing device 612 (as shown in FIG. 6) for the purpose of capturing images of the environment, to be received by the processing arrangement 200. Further, as discussed earlier, the image capturing
device 612 may be in the form of a CMOS sensor, like a camera as known in the art. In general, it may be preferred that the specifications of the image capturing device 612 may be the same as those of the image capturing device 412 for the purposes of the present disclosure, as would be contemplated based on the preceding description. Also, as may be appreciated, in case of the mobile robot 610 being the same as the robot device 410, the corresponding image capturing devices 412, 612 would be the same in any case.
[0059] In the present embodiments, the processing arrangement 200 may generally
receive the image of the target location in the environment, where the mobile robot 610 is required to be navigated to perform a next step or complete an operation assigned thereto. Such image of the target location may be received from a control server (as represented by reference numeral 620), such as the WMS (as described earlier) which is responsible for operations of the mobile robot 610 in the environment. Further, as may be understood, herein, the image of the target location may generally correspond to a section of image which may be captured by the image capturing device 612 of the mobile robot 610 when the mobile robot 610 is positioned at the target location. This way, as may be contemplated, by comparing the captured image with the received image of the target location, the processing arrangement 200 can confirm that the mobile robot 610 is indeed positioned at the defined target location in the environment.
[0060] At step 504, the method 500 includes configuring the image capturing device
of the mobile robot to capture one or more initial images of sections of the environment from a current position thereof. Correspondingly, in the present system 600, the processing arrangement 200 is further configured to configure the image capturing device 612 of the mobile robot 610 to capture one or more initial images of sections of the environment from a current position thereof. That is, the processing arrangement 200 may send commands to the image capturing device 612 of the mobile robot 610 to capture images in the FOV thereof.
[0061] At step 506, the method 500 includes determining a match of at least one of
the one or more captured initial images to one of the images represented as nodes in the implemented sparse network. Correspondingly, in the present system 600, the processing arrangement 200 is further configured to determine a match of at least one of the one or more captured initial images to one of the images represented as nodes in the implemented sparse network 430. In one or more embodiments of the present disclosure, determining the match
between two images comprises determining that image features of the two images being matched are mapped at least to a predefined threshold. Correspondingly, the present system 600 may include an image matching module (as represented by reference numeral 630). The image matching module 630 is configured to determine image features of the two images being matched; and confirm the match between the two images when the corresponding determined image features are mapped at least to a predefined threshold. In case none of the captured initial images matches with any of the images as part of the implemented sparse network 430, the processing arrangement 200 may command the image capturing device 612 of the mobile robot 610 to capture another set of initial images of sections, after changing its orientation and/or moving to a different location of the environment from the current position thereof.
[0062] For these purposes, the processing arrangement 200 may implement computer
vision based image matching algorithms which are widely used to recognize, manipulate and extract details from image data, to find a similarity or multiple similarities between a set of images and eventually match them. For instance, the SIFT (Scale Invariant Feature Transform) technique is a feature detection algorithm in computer vision which helps locate local features in an image, commonly known as the key-points of the image; these key-points, in turn, are scale and rotation invariant and can be used for various computer vision applications, like image matching and the like. In other examples, the classical image matching algorithms, like the Brute Force Matcher or the FLANN Based Feature Matcher applied on features extracted from the images either using the classical descriptors on the RGB images (like SURF, ORB, etc.) or the neural network based approaches like the Siamese Networks, may be implemented without departing from the scope and the spirit of the present disclosure. Further, in the present embodiments, in an example, the "threshold" may be defined based on the application area for the present system 600, with a generally relatively higher threshold defined for "critical" applications and a relatively lower threshold defined for "non-critical" applications.
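By way of a non-limiting sketch of one such classical pipeline, using the well-known OpenCV library with ORB features and a Brute Force Matcher, the image matching module 630 may count sufficiently close descriptor pairs and confirm a match when the count reaches the predefined threshold. The function name images_match and the numeric values below are illustrative assumptions only.

```python
import cv2

def images_match(img_a, img_b, threshold=40):
    """Confirm a match when at least `threshold` feature pairs are mapped between the images."""
    orb = cv2.ORB_create()
    _, des_a = orb.detectAndCompute(img_a, None)
    _, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False  # no detectable features in at least one image
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    # keep only sufficiently close descriptor pairs as "mapped" image features
    good = [m for m in matches if m.distance < 50]
    return len(good) >= threshold
```

In line with the above, the threshold passed to such a routine may be set higher for "critical" applications and lower for "non-critical" ones.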
[0063] Now, herein, as for the purposes of actual movement of the mobile robot 610,
the mobile robot 610 may include an odometry control arrangement (as represented by reference numeral 640) to control movement (navigation) thereof in the environment. As used herein, "odometry" refers to the use of data from motion sensors to estimate change in position over time. It may be appreciated that the odometry control arrangement 640 may be in the form of a controller which may be any processing device, system or part thereof that controls at least one operation of
the device. Such controller may be implemented in hardware, firmware or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Such controller may be a multi-core processor, a single core processor, or a combination of one or more multi-core processors and one or more single core processors. For example, the one or more processors may be embodied as one or more of various processing devices, such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing circuitry with or without an accompanying DSP, or various other processing devices including integrated circuits such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. Further, the memory may include one or more non-transitory computer-readable storage media that can be read or accessed by other components in the device. The memory may be any computer-readable storage media, including volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disc storage, which can be integrated in whole or in part with the device. In some examples, the memory may be implemented using a single physical device (e.g., optical, magnetic, organic or other memory or disc storage unit), while in other embodiments, the memory may be implemented using two or more physical devices without any limitations. Such odometry control arrangement 640 is well known in the art and thus has not been described any further herein for the brevity of the present disclosure.
[0064] At step 508, the method 500 includes identifying a queue of images of
connected sections in the environment from the implemented sparse network, connecting section corresponding to the matched at least one of the one or more captured initial images to the target location in the environment corresponding to the target image. Correspondingly, in the present system 600, the processing arrangement 200 is further configured to identify a queue of images of connected sections in the environment from the implemented sparse network 430, connecting section corresponding to the matched at least one of the one or more captured initial images to the target location in the environment corresponding to the target image. That is, once any one of the captured initial images is determined to match with at least one of the images as part of the utilized sparse network 430, the processing arrangement 200 may determine the next image which has connected section(s) to the matched image, and further the next image which has connected section(s) to the first determined next image, and so on, till the image with the connected section(s)
corresponding to the target image has been determined. In other words, once the goal has been defined, a search is performed on the sparse network 430 to derive a queue consisting of the nodes with the corresponding scenes (images) the mobile robot 610 needs to achieve ("see"). The queue suggests the set of images the mobile robot 610 needs to see before it can reach the goal environment. This process implementing computer vision algorithms may be contemplated by a person skilled in the art and thus has not been described any further herein.
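As a non-limiting sketch, assuming the illustrative SparseNetwork structure introduced earlier, such a queue may be identified by a breadth-first search over the nodes, returning the ordered list of node images from the matched initial section to the target section. The function name identify_queue is hypothetical, and any other graph search may equally be used.

```python
from collections import deque

def identify_queue(sparse_network, start_node, target_node):
    """Breadth-first search returning the queue of node images to be "seen"
    from the matched initial section to the target section."""
    previous = {start_node: None}
    frontier = deque([start_node])
    while frontier:
        current = frontier.popleft()
        if current == target_node:
            break
        for neighbour in sparse_network.edges[current]:
            if neighbour not in previous:
                previous[neighbour] = current
                frontier.append(neighbour)
    if target_node not in previous:
        return []  # the target section is not connected in the sparse network
    path, node = [], target_node
    while node is not None:
        path.append(sparse_network.nodes[node].image_path)
        node = previous[node]
    return list(reversed(path))  # ordered queue of images towards the target
```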
[0065] At step 510, the method 500 includes configuring the mobile robot to traverse
the environment starting from the said current position, and the image capturing device therein to capture intermediate images at regular intervals. Correspondingly, in the present system 600, the processing arrangement 200 is further configured to configure the mobile robot 610 to traverse the environment starting from the said current position, and the image capturing device 612 therein to capture intermediate images at regular intervals. For this purpose, the processing arrangement 200 may command the said odometry control arrangement 640 to start moving the mobile robot 610 from the current position, to some extent, towards the section corresponding to the first matched image. Meanwhile, the processing arrangement 200 may further command the image capturing device 612 to keep capturing images of the sections of the environment in the FOV thereof. This way the mobile robot 610 may initiate movement from the current position thereof, towards the target location as provided in the system 600.
[0066] At step 512, the method 500 includes determining a match for at least one of
the captured intermediate images with a next image in the identified queue of images. Correspondingly, in the present system 600, the processing arrangement 200 is further configured to determine a match for at least one of the captured intermediate images with a next image in the identified queue of images. That is, the processing arrangement 200 may keep checking the captured intermediate images for a match with the next image in the identified queue of images. Say the processing arrangement 200 determines that the first captured intermediate image does not match with the next image (i.e., the second image in the queue of images from the sparse network 430); the processing arrangement 200 may then check the second captured intermediate image. If, in such case, the second captured intermediate image does match with the said next image, the process moves to the next step. As discussed earlier, this process implementing computer vision algorithms may be contemplated by a person skilled in the art and thus has not been described any further herein.
[0067] At step 514, the method 500 includes navigating the mobile robot to a position
corresponding to the matched at least one of the captured intermediate images till the mobile robot has navigated to the target location. Correspondingly, in the present system 600, the processing arrangement 200 is further configured to navigate the mobile robot to a position corresponding to the matched at least one of the captured intermediate images till the mobile robot has navigated to the target location. That is, the processing arrangement 200 may keep checking for the next match for the subsequent captured intermediate images after the mobile robot 610 has started moving from the said initial current position; and the mobile robot 610 may move to a position corresponding to the image in the sparse network 430 matched with the said captured intermediate image. This step may be repeated a number of times till the mobile robot 610 reaches the target location in the environment, with the captured (intermediate) image thereat matching with the said target image of the section of the environment. In particular, the processing arrangement 200 may be configured to flag that a local intermediate image/environment is successfully reached. The predefined threshold stored at the mapping stage is used to flag achieving a node in terms of the environment reached. Subsequently, the mobile robot 610 tries to achieve the next intermediate local image until the goal pose is achieved. As discussed, for the purposes of actual movement of the mobile robot 610, the mobile robot 610 may implement the odometry control arrangement 640 to control its movement in the environment.
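Purely for illustration, steps 510 to 514 may be contemplated as the following loop, in which the mobile robot traverses towards each queued node image in turn until the captured view matches the target image. The interfaces robot, camera, step_towards_best_heading and capture are hypothetical placeholders for the odometry control arrangement 640 and the image capturing device 612, and match_fn may, for instance, be the images_match sketch given earlier.

```python
def navigate_to_target(robot, camera, queue_of_images, target_image, match_fn):
    """Traverse towards each queued node image in turn until the view captured
    at the target location matches the target image."""
    for next_image in queue_of_images + [target_image]:
        while True:
            robot.step_towards_best_heading()   # movement via the odometry control arrangement
            intermediate = camera.capture()     # intermediate image captured at a regular interval
            if match_fn(intermediate, next_image):
                break                           # flag: local intermediate image/environment reached
    return True                                 # target location reached
```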
[0068] In one or more embodiments of the present disclosure, navigating the mobile
robot 610 comprises navigating the mobile robot 610 in the environment while avoiding obstacles therein. Correspondingly, in the present system 600, the mobile robot 610 includes a sensing arrangement (represented by reference numeral 650, in FIG. 6) configured to determine obstacles in a path of the mobile robot 610, to aid the mobile robot 610 in navigating the environment while avoiding obstacles therein. Herein, the sensing arrangement 650 may include different sensing areas for sensing obstacles in a traveling direction of the mobile robot 610, and the information about the sensed obstacles may be utilized by the odometry control arrangement 640 to control movement thereof in the environment, such that the mobile robot 610 avoids the sensed obstacles therein. In general, given a local intermediate image (node) to be achieved, the mobile robot 610 directs itself in the direction with locally maximum matching features while avoiding the obstacles. Herein, the search direction may relate to the relative angular rotation between the intermediate local image and the image the mobile robot 610 sees.
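As a non-limiting sketch of this behaviour, the heading with the locally maximum number of matching features among the obstacle-free directions may be selected as follows, where candidate_views, obstacle_free and count_matches are hypothetical callables standing in for views from the image capturing device 612, the sensing arrangement 650 and the image matching module 630, respectively.

```python
def choose_heading(candidate_views, next_node_image, obstacle_free, count_matches):
    """Pick the obstacle-free heading whose view shares the most features with
    the next intermediate node image."""
    best_heading, best_score = None, -1
    for heading, view in candidate_views.items():   # e.g. relative angle -> captured view
        if not obstacle_free(heading):               # skip directions blocked per the sensing arrangement
            continue
        score = count_matches(view, next_node_image)
        if score > best_score:
            best_heading, best_score = heading, score
    return best_heading                              # None when every direction is blocked
```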
[0069] The present disclosure helps overcome the problem associated with
conventional navigation techniques, in which small uncertainties in the environment or changes in the local environment lead to errors in localization of the mobile robot within a prescribed map/environment. This leads to subsequent errors in predicting the exact location of the mobile robot, and the navigation stack is then unable to navigate the environment. The problem has motivated the embodiments of the present disclosure to mitigate the need of localizing the mobile robot in terms of Euclidean coordinates, and rather localize the mobile robot with respect to the images as captured thereby. The purpose of the present disclosure is to navigate the robot without a dense map and also without localising the robot in terms of the local Euclidean coordinates. The sparse navigation ensures the mobile robot reaches the local landmark images. A set of nodes connected to the goal allows a search to the goal environment. Further, representation of the goal environment in the form of the image viewed allows the mobile robot to reach the goal.
[0070] In particular, the systems and the methods of the present disclosure focus on
leveraging the environment images to help a mobile robot to navigate the given environment. The scene change phases of navigation (similar to exit points for rooms) are saved as nodes in a sparse graph. The goal location is fed to the mobile robot as a set of scenic images. A sequence of "to-be-achieved" nodes is then searched to achieve the goal environment. The mobile robot subsequently tries to match the local image with the image saved in the environment while avoiding the obstacles. This allows synthesis of a navigation stack where the mobile robot tries to achieve local goals by matching image scenes. The embodiments of the present disclosure do not require synthesis of a continuous map (as in traditional methods). Only a few selected features, like room exit points, can be saved as images. The present disclosure mitigates the use of Euclidean geometry by not using the local coordinates. Hence, with the present disclosure, there is no need to exactly localize the mobile robot in the environment. The set of images only characterise the local direction in which the mobile robot needs to move.
[0071] The present disclosure provides representation of a sparse map based on
landmark scenic images; and in particular sparse navigation based on the nodes storing the viewpoint landmark images. This requires the representation of the goal in terms of the scenic images that the robot "needs to see" when it has reached the goal, i.e., the target location. This is achieved by synthesis of a navigation stack where the mobile robot targets the local viewpoints. The teachings of the present disclosure may be implemented to aid navigation of the mobile robot
where traditional methods of localization fail. Alternatively, the embodiments of the present disclosure may aid the traditional SLAM (Simultaneous Localisation and Mapping) based methods to prescribe the direction of motion of the robot under imprecise knowledge of coordinate localization.
[0072] The foregoing descriptions of specific embodiments of the present disclosure
have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiment was chosen and described in order to best explain the principles of the present disclosure and its practical application, to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated.
WE CLAIM:-
1. A method for aiding a mobile robot to navigate an environment, comprising:
receiving images of sections of the environment;
analysing the images to determine a scene change between two of the received images of the sections of the environment;
identifying two connected sections in the environment from perspective of navigation from one position to another position therein, based on the determined scene change between corresponding two images of the said sections of the environment;
generating a sparse network with images of the identified connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment; and
storing the generated sparse network to be implemented by the mobile robot for navigating the environment.
2. The method as claimed in claim 1 further comprising configuring a robot device to traverse the environment to capture images, via an image capturing device, of the sections of the environment.
3. The method as claimed in claim 1 further comprising receiving the images from an operator device associated with the environment.
4. A method for aiding a mobile robot to navigate an environment by implementing a sparse network with images of connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment, the method comprising:
receiving a target image of a section of the environment which forms at least a part of a view as captured by an image capturing device of the mobile robot when the mobile robot is positioned at a target location in the environment where the mobile robot is required to be navigated to;
configuring the image capturing device of the mobile robot to capture one or more initial images of sections of the environment from a current position thereof;
determining a match of at least one of the one or more captured initial images to one of the images represented as nodes in the implemented sparse network;
identifying a queue of images of connected sections in the environment from the implemented sparse network, connecting section corresponding to the matched at least one of the one or more captured initial images to the target location in the environment corresponding to the target image;
configuring the mobile robot to traverse the environment starting from the said current position, and the image capturing device therein to capture intermediate images at regular intervals;
determining a match for at least one of the captured intermediate images with a next image in the identified queue of images; and
navigating the mobile robot to a position corresponding to the matched at least one of the captured intermediate images till the mobile robot has navigated to the target location.
5. The method as claimed in claim 4, wherein navigating the mobile robot comprises navigating the mobile robot in the environment while avoiding obstacles therein.
6. The method as claimed in claim 4, wherein determining the match between two images comprises determining image features of the two images being matched are mapped at least to a predefined threshold.
7. A system for aiding a mobile robot to navigate an environment, the system comprising a processing arrangement configured to:
receive images of sections of the environment;
analyse the images to determine a scene change between two of the received images of the sections of the environment;
identify two connected sections in the environment from perspective of navigation from one position to another position therein, based on the determined scene change between corresponding two images of the said sections of the environment;
generate a sparse network with images of the identified connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment; and
store the generated sparse network in a database to be implemented by the mobile robot for navigating the environment.
8. A system for aiding a mobile robot to navigate an environment, the system comprising a database having a sparse network with images of connected sections in the environment represented as nodes and comprising one or more connections formed between the nodes representing a physical relationship of the corresponding represented connected sections in the environment stored therein, and a processing arrangement in signal communication with the database and configured to:
receive a target image of a section of the environment which forms at least a part of a view as captured by an image capturing device of the mobile robot when the mobile robot is positioned at a target location in the environment where the mobile robot is required to be navigated to;
configure the image capturing device of the mobile robot to capture one or more initial images of sections of the environment from a current position thereof;
determine a match of at least one of the one or more captured initial images to one of the images represented as nodes in the implemented sparse network;
identify a queue of images of connected sections in the environment from the implemented sparse network, connecting section corresponding to the matched at least one of the one or more captured initial images to the target location in the environment corresponding to the target image;
configure the mobile robot to traverse the environment starting from the said current position, and the image capturing device therein to capture intermediate images at regular intervals;
determine a match for at least one of the captured intermediate images with a next image in the identified queue of images; and
navigate the mobile robot to a position corresponding to the matched at least one of the captured intermediate images till the mobile robot has navigated to the target location.
9. The system as claimed in claim 8, wherein the mobile robot comprises a sensing arrangement configured to determine obstacles in a path of the mobile robot, to aid the mobile robot in navigating the environment while avoiding obstacles therein.
10. The system as claimed in claim 8 further comprising an image matching module configured to:
determine image features of the two images being matched; and confirm the match between two images when the corresponding determined image features are mapped at least to a predefined threshold.