Abstract: Disclosed herein is a method and system for localizing a vehicle in a classified vector map. The method comprises receiving classified vector map data for an environment within a Field of View (FOV) of the vehicle. Based on the classified vector map data, occupancy data of one or more big structures with a wide vertical surface in the FOV of the vehicle is generated. The method comprises identifying scan data of the one or more big structures in the FOV of one or more sensors configured in the vehicle, based on reflection lines comprising one or more reflection points. Based on the occupancy data and the scan data, the method comprises localizing the vehicle in the classified vector map. The present disclosure uses classified vector map data in which the one or more big structures are accurately annotated, which helps in accurate localization of the vehicle in the classified vector map. FIG. 1
Claims:
1. A method for localizing a vehicle in a classified vector map, the method comprising:
receiving, by a vehicle localization system, a classified vector map data for an environment within a Field of View (FOV) of the vehicle;
generating, by the vehicle localization system, an occupancy data of one or more big structures with wide vertical surface in the FOV of the vehicle based on the classified vector map data;
identifying, by the vehicle localization system, a scan data of the one or more big structures in the FOV of one or more sensors configured in the vehicle, based on reflection lines comprising one or more reflection points, wherein the one or more reflection points are formed using the one or more sensors, wherein the one or more reflection points correspond to surface of the one or more big structures; and
localizing, by the vehicle localization system, the vehicle in the classified vector map based on the occupancy data and the scan data.
2. The method as claimed in claim 1, wherein generating the occupancy data of the one or more big structures comprises:
identifying data points for each of one or more structures in the FOV of the vehicle based on annotation data associated with each of the one or more structures in the classified vector map data;
determining the data points in each of the one or more structures that align in a straight line and span greater than a first predefined length;
forming a hypothetical straight line upon connecting the determined data points, wherein the hypothetical straight line forms surface of the one or more big structures; and
generating the occupancy data of the one or more big structures by projecting the hypothetical straight line based on a predefined resolution factor and a coordinate value of each sub point, from a start sub point to an end sub point, in the straight line corresponding to the surface of the one or more big structures.
3. The method as claimed in claim 2, wherein the coordinate value of each sub point in the hypothetical straight line forming the surface of the one or more big structures is obtained from a vector data point representing the surface of the corresponding big structures.
4. The method as claimed in claim 1, wherein identifying the scan data of the one or more big structures comprises:
identifying one or more reflection lines comprising one or more reflection points corresponding to one or more structures in the FOV of the sensor in the vehicle;
selecting the one or more reflection lines of greater than the first predefined length among the one or more reflection lines corresponding to each of the one or more structures;
determining an orientation value and a centroid value of each of the selected one or more reflection lines;
grouping the selected one or more reflection lines based on the similarity of centroid and orientation, wherein the selected one or more reflection lines with similar orientation value and centroid value are identified as the one or more big structures; and
identifying the scan data of the one or more big structures by selecting a predefined number of the selected reflection lines within each group having same orientation value and centroid value and leveling ‘z’ values of the corresponding reflection points of the selected reflection lines.
5. The method as claimed in claim 1 further comprises generating a reference navigation path from a current vehicle position to a destination position upon localizing the vehicle in the classified vector map, wherein the reference navigation path is generated based on current environment data of the vehicle and speed of the vehicle, wherein the current environment data comprises at least one of data related to terrain and obstacles in the reference navigation path.
6. The method as claimed in claim 5 further comprises:
identifying a current velocity for the vehicle to navigate in the reference navigation path based on a predefined velocity detection technique; and
providing the current velocity to the vehicle for navigating in the reference navigation path.
7. A vehicle localization system for localizing a vehicle in a classified vector map, the vehicle localization system comprising:
a processor; and
a memory communicatively coupled to the processor, wherein the memory stores processor-executable instructions, which, on execution, cause the processor to:
receive a classified vector map data for an environment within a Field of View (FOV) of the vehicle;
generate an occupancy data of one or more big structures with wide vertical surface in the FOV of the vehicle based on the classified vector map data;
identify a scan data of the one or more big structures in the FOV of one or more sensors configured in the vehicle, based on reflection lines comprising one or more reflection points, wherein the one or more reflection points are formed using the one or more sensors, wherein the one or more reflection points correspond to surface of the one or more big structures; and
localize the vehicle in the classified vector map based on the occupancy data and the scan data.
8. The vehicle localization system as claimed in claim 7, wherein to generate the occupancy data of the one or more big structures the instructions cause the processor to:
identify data points for each of one or more structures in the FOV of the vehicle based on annotation data associated with each of the one or more structures in the classified vector map data;
determine the data points in each of the one or more structures that align in a straight line and span greater than a first predefined length;
form a hypothetical straight line upon connecting the determined data points, wherein the hypothetical straight line forms surface of the one or more big structures; and
generate the occupancy data of the one or more big structures by projecting the hypothetical straight line based on a predefined resolution factor and a coordinate value of each sub point, from a start sub point to an end sub point, in the straight line corresponding to the surface of the one or more big structures.
9. The vehicle localization system as claimed in claim 8, wherein the processor obtains the coordinate value of each sub point in the hypothetical straight line forming the surface of the one or more big structures from a vector data point representing the surface of the corresponding big structures.
10. The vehicle localization system as claimed in claim 7, wherein to identify the scan data of the one or more big structures the instructions cause the processor to:
identify one or more reflection lines comprising one or more reflection points corresponding to one or more structures in the FOV of the sensor in the vehicle;
select the one or more reflection lines of greater than the first predefined length among the one or more reflection lines corresponding to each of the one or more structures;
determine an orientation value and a centroid value of each of the selected one or more reflection lines;
group the selected one or more reflection lines based on the similarity of centroid and orientation, wherein the selected one or more reflection lines with similar orientation value and centroid value are identified as the one or more big structures; and
identify the scan data of the one or more big structures by selecting a predefined number of the selected reflection lines within each group having same orientation value and centroid value and leveling ‘z’ values of the corresponding reflection points of the selected reflection lines.
Description:
TECHNICAL FIELD
The present subject matter is generally related to autonomous vehicles and more particularly, but not exclusively, to a method and system for localizing a vehicle in a classified vector map.
BACKGROUND
Nowadays, navigating an Autonomous Vehicle (AV) or driverless vehicle is becoming an important requirement in different application areas. AVs are vehicles that are capable of sensing the environment around them and navigating without any human intervention. An AV senses the environment with the help of sensors, such as Light Detection and Ranging (LIDAR) sensors, configured in the AV. AVs may also use a Global Positioning System (GPS), computer vision systems, and the like for navigation purposes.
Generally, AVs make use of a planned navigation path for navigating from a source point to a destination point. While following the planned navigation path on a map, AVs need to localize themselves on the map. The map used for navigation path planning may be a vector map with road or lane information. However, to localize the vehicle in the map, a localization technique may require a detailed raster map. The localization technique may use the detailed raster map along with LIDAR or laser scan data to determine the position of the vehicle on the map.
Raster map generation needs special preparation and apparatus. Moreover, the raster map is huge in size and hence requires substantial storage and deployment infrastructure, in addition to significant cost and effort.
The information disclosed in this background of the disclosure section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person skilled in the art.
SUMMARY
Disclosed herein is a method for localizing a vehicle in a classified vector map. The method comprises receiving, by a vehicle localization system, classified vector map data for an environment within a Field of View (FOV) of the vehicle. Further, the method comprises generating occupancy data of one or more big structures with a wide vertical surface in the FOV of the vehicle based on the classified vector map data. The method further comprises identifying scan data of the one or more big structures in the FOV of one or more sensors configured in the vehicle, based on reflection lines comprising one or more reflection points. The one or more reflection points are formed using the one or more sensors and correspond to a surface of the one or more big structures. Based on the occupancy data and the scan data, the method comprises localizing the vehicle in the classified vector map.
Further, the present disclosure discloses a vehicle localization system for localizing a vehicle in a classified vector map. The vehicle localization system comprises a processor and a memory. The memory is communicatively coupled to the processor and stores processor-executable instructions, which, on execution, cause the processor to receive classified vector map data for an environment within a Field of View (FOV) of the vehicle. The processor generates occupancy data of one or more big structures with a wide vertical surface in the FOV of the vehicle based on the classified vector map data. The processor also identifies scan data of the one or more big structures in the FOV of one or more sensors configured in the vehicle, based on reflection lines comprising one or more reflection points. The one or more reflection points are formed using the one or more sensors and correspond to a surface of the one or more big structures. Thereafter, the processor localizes the vehicle in the classified vector map based on the occupancy data and the scan data.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, explain the disclosed principles. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the figures to reference like features and components. Some embodiments of systems and/or methods in accordance with embodiments of the present subject matter are now described, by way of example only, and with reference to the accompanying figures, in which:
Fig.1 shows an exemplary architecture for localizing a vehicle in a classified vector map in accordance with some embodiments of the present disclosure;
Fig.2a shows a block diagram of a vehicle localization system in accordance with some embodiments of the present disclosure;
Fig.2b shows a top view of an exemplary classified vector map for an environment in the FOV of a vehicle in accordance with some embodiments of the present disclosure;
Fig.2c shows a representation of data points on the big structures in accordance with some exemplary embodiments of the present disclosure;
Fig.2d shows a top view of the occupancy data generated for big structures in accordance with some exemplary embodiments of the present disclosure;
Fig.2e shows an exemplary representation of the environment within the FOV of a sensor in the vehicle in accordance with some embodiments of the present disclosure;
Fig.2f shows reflection lines of one or more structures in accordance with some embodiments of the present disclosure;
Fig.2g shows a graph representing a reflection line for determining an orientation value and a centroid value in accordance with some embodiments of the present disclosure;
Fig.2h shows a top view of the scan data identified for big structures in accordance with some embodiments of the present disclosure;
Fig.3 shows a flowchart illustrating a method of localizing a vehicle in a classified vector map in accordance with some embodiments of the present disclosure; and
Fig.4 illustrates a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative systems embodying the principles of the present subject matter. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and executed by a computer or processor, whether such computer or processor is explicitly shown.
DETAILED DESCRIPTION
In the present document, the word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment or implementation of the present subject matter described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
While the disclosure is susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described in detail below. It should be understood, however, that it is not intended to limit the disclosure to the specific forms disclosed, but on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the scope of the disclosure.
The terms “comprises”, “comprising”, “includes”, “including” or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a setup, device, or method that comprises a list of components or steps does not include only those components or steps but may include other components or steps not expressly listed or inherent to such setup or device or method. In other words, one or more elements in a system or apparatus preceded by “comprises… a” does not, without more constraints, preclude the existence of other elements or additional elements in the system or method.
The present disclosure relates to a method and system for localizing a vehicle in a classified vector map. The system may receive classified vector map data for an environment within a Field of View (FOV) of the vehicle. The classified vector map data comprises data of one or more structures, such as symbols used to represent the structures, based on attributes of the structures. The symbols are annotated for each structure in the environment. Upon receiving the classified vector map data, the system may generate occupancy data for one or more big structures with a wide vertical surface in the FOV of the vehicle, based on the classified vector map data. A wide vertical surface is one whose width and height are above a predefined threshold value; accordingly, the one or more structures may be considered big structures if their height and width are above the predefined threshold value. The occupancy data provides a 2-Dimensional (2D) map of only the big structures around the vehicle. Thereafter, the system may identify scan data of the one or more big structures. The scan data provides information about the same big structures around the vehicle, captured by LIDAR sensors configured in the vehicle. Based on the occupancy data and the scan data, the system localizes the vehicle in the classified vector map. Once the vehicle is localized, it may navigate along the planned navigation path. In this manner, the present disclosure uses the classified vector map data to accurately localize the vehicle in the classified vector map.
Fig.1 shows an exemplary architecture for localizing a vehicle in a classified vector map in accordance with some embodiments of the present disclosure.
The architecture 100 may include a vehicle 101, a vehicle localization system 105 and a database 103. The database 103 may be configured to store classified vector maps. In some embodiments, the vehicle localization system 105 may be configured within the vehicle 101 as shown in the Fig.1. In some other embodiments, the vehicle localization system 105 may be remotely associated with the vehicle 101, via a wireless communication network (not shown). In some embodiments, the vehicle 101 may be an autonomous vehicle 101 or a non-autonomous vehicle 101. As an example, the vehicle 101 may be a bike, a car, a truck, a bus and the like.
The vehicle localization system 105 may include a processor 107, an Input/Output (I/O) interface 109 and a memory 111. The I/O interface 109 may be configured to receive the classified vector map data from the database 103. The classified vector map data may include data of one or more structures along with symbols used to represent the structures. The symbols may be annotated based on attributes of the structures. As an example, if the structure is a “road”, line data, which is one form of the classified vector map data, may be used to represent the features of the road. The symbols used in this scenario may be dashed or dotted lines or a combination of colours. Upon receiving the classified vector map, the processor 107 may generate occupancy data for one or more big structures in the Field of View (FOV) of the vehicle 101 using the classified vector map data. The one or more big structures are the structures with height and width greater than a predefined threshold value. As an example, the predefined threshold value may be 1 meter. The one or more big structures are identified based on the classified vector map data. The occupancy data provides a 2D map of only the big structures around the vehicle 101. The processor 107 also identifies scan data of the one or more big structures. The scan data provides information about the same big structures around the vehicle 101, captured by LIDAR sensors configured in the vehicle 101. Based on the occupancy data and the scan data, the processor 107 localizes the vehicle 101 in the classified vector map.
In an embodiment, once the vehicle 101 is localized in the classified vector map, the processor 107 generates a reference navigation path for the vehicle 101 to navigate from a current vehicle 101 position to a destination position. The reference navigation path may be generated based on current environment data of the vehicle 101 and the speed of the vehicle 101. As an example, the current environment data may include, but is not limited to, data related to terrain and obstacles in the reference navigation path. Further, the processor 107 may also provide a current velocity to the vehicle 101 for navigating along the reference navigation path. The current velocity may be identified based on a predefined velocity detection technique.
Fig.2a shows block diagram of a vehicle localization system in accordance with some embodiments of the present disclosure.
In some implementations, the vehicle localization system 105 may include data and modules. As an example, the data is stored in a memory 111 configured in the vehicle localization system 105 as shown in the Fig.2a. In one embodiment, the data may include a classified vector map data 201, occupancy data 203, scan data 205, reference navigation path data 207 and other data 209. The modules illustrated in Fig.2a are described herein in detail.
In some embodiments, the data may be stored in the memory 111 in form of various data structures. Additionally, the data can be organized using data models, such as relational or hierarchical data models. The other data 209 may store data, including temporary data and temporary files, generated by the modules for performing the various functions of the vehicle localization system 105.
In some embodiments, the data stored in the memory 111 may be processed by the modules of the vehicle localization system 105. The modules may be stored within the memory 111. In an example, the modules, communicatively coupled to the processor 107 configured in the vehicle localization system 105, may also be present outside the memory 111, as shown in Fig.2a, and implemented as hardware. As used herein, the term modules may refer to an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
In some embodiments, the modules may include, for example, a receiving module 211, an occupancy data generation module 213, a scan data identification module 215, a vehicle localization module 217, a navigation module 219 and other modules 221. The other modules 221 may be used to perform various miscellaneous functionalities of the vehicle localization system 105. It will be appreciated that such aforementioned modules may be represented as a single module or a combination of different modules.
Furthermore, a person of ordinary skill in the art will appreciate that, in an implementation, the one or more modules may be stored in the memory 111, without limiting the scope of the disclosure. The said modules, when configured with the functionality defined in the present disclosure, result in novel hardware.
In an embodiment, the receiving module 211 may be configured to receive the classified vector map data 201 from the database 103 associated with the vehicle localization system 105. The classified vector map data 201 may include data of one or more structures in the FOV of the vehicle 101. The receiving module 211 may retrieve the classified vector map for the captured environment of the vehicle 101 from the database 103. The environment may be captured using an environment capturing unit, such as a LIDAR, associated with the other modules 221, and the captured environment data may be stored as environment data. The classified vector map data 201 may provide information on structures in the FOV of the vehicle 101. The information may include, but is not limited to, a type of the structure, a symbol associated with the structure, and the like.
In an embodiment, the occupancy data generation module 213 may be configured to generate occupancy data of one or more big structures with a wide vertical surface in the FOV of the vehicle 101, based on the classified vector map data 201. The occupancy data is a 2-Dimensional (2D) map of only the big structures around the vehicle 101. An exemplary top view of a classified vector map 230 for the environment surrounding the vehicle 101, within the FOV of the vehicle 101, is shown in Fig.2b.
As shown in Fig.2b, the classified vector map 230 may depict one or more structures such as buildings (233, 235, 237) and a road 231. In an embodiment, the occupancy data generation module 213 may identify data points for each of the one or more structures in the FOV of the vehicle 101, based on annotation data associated with each of the one or more structures in the classified vector map data 201. As an example, if the structure is a building, then the annotation data may include geometric point values describing the building.
As an example, and for the purpose of illustration, only three structures are considered in the FOV of the vehicle 101. The three structures may include three buildings: ‘building 1’ 233, ‘building 2’ 235 and ‘building 3’ 237. The ‘building 1’ 233 and ‘building 2’ 235 may be on the left side of the vehicle 101 and ‘building 3’ 237 may be on the right side of the vehicle 101. The data points may be identified for each of the buildings. The identified data points may also be referred to as vector data points.
Once the data points are identified, the occupancy data generation module 213 determines the data points in each of the one or more structures that align in a straight line and span greater than a first predefined length. In this scenario, the data points determined for each building, which form a straight line spanning the first predefined length, are as shown in Fig.2c. The first predefined length may be ‘1 meter’. As shown in Fig.2c, only the data points of ‘building 1’ 233 and ‘building 3’ 237 form a straight line while spanning a distance greater than the first predefined length, which is ‘1 meter’ in the current example. A minimal sketch of this collinearity check is shown below.
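The following sketch illustrates one way the collinearity-and-span check described above could be realized, assuming the annotation data yields ordered 2D vector data points per structure. The function name, the tolerance, and the line-fit test (a PCA residual via SVD) are illustrative assumptions, not the prescribed implementation.

import numpy as np

def find_wall_segments(points, min_length=1.0, tol=0.05):
    """Find runs of consecutive vector data points that align in a straight
    line (within `tol` metres of a fitted line) and span more than
    `min_length` metres. `points` is an (N, 2) array of annotated data
    points of one structure, ordered along its outline, in map metres."""
    points = np.asarray(points, dtype=float)
    segments, start = [], 0
    for end in range(2, len(points) + 1):
        run = points[start:end]
        # The smallest singular value of the centred run measures how far
        # the points deviate from a perfect straight line (PCA residual).
        deviation = np.linalg.svd(run - run.mean(axis=0), compute_uv=False)[-1]
        if len(run) > 2 and deviation > tol:
            # The straight run ends at the previous point; keep it if it
            # spans more than the first predefined length.
            if np.linalg.norm(points[end - 2] - points[start]) > min_length:
                segments.append((points[start], points[end - 2]))
            start = end - 2
    if len(points) - start >= 2 and np.linalg.norm(points[-1] - points[start]) > min_length:
        segments.append((points[start], points[-1]))
    return segments

Under this sketch, only structures yielding at least one such segment (e.g., ‘building 1’ 233 and ‘building 3’ 237 above) would be treated as big structures.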
Thereafter, the occupancy data generation module 213 forms a hypothetical straight line, which forms the surface of the one or more big structures. The hypothetical straight line formed for the buildings is as shown in Fig.2d. Once the hypothetical straight lines are formed, the occupancy data generation module 213 generates the occupancy data 203 of the one or more big structures as shown in Fig.2d, which is stored in the memory 111. The occupancy data 203 is generated by projecting the hypothetical straight line of only the one or more big structures. The hypothetical straight lines are projected based on a predefined resolution factor and the coordinate value of each sub point, from the start sub point to the end sub point in the straight line, corresponding to the surface of the one or more big structures. As shown in Fig.2d, the hypothetical straight lines comprise one or more sub points from a start sub point 238 to an end sub point 239. As an example, the coordinate value of the start sub point may be ‘(12.34, 67.23)’ in meters and the coordinate value of the end sub point may be ‘(13.32, 74.56)’ in meters. As an example, the predefined resolution factor may be ‘20 pixels per meter’. Based on these values, the coordinate values of each sub point connecting the start sub point and the end sub point in the straight line may be identified.
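A minimal sketch of this projection step follows, assuming the occupancy data 203 is a 2D grid at the predefined resolution factor; the grid dimensions and the function name are illustrative assumptions.

import numpy as np

def rasterize_wall(grid, start, end, resolution=20):
    """Project one hypothetical straight line (a big-structure surface) into
    a 2D occupancy grid by generating sub points between the start and end
    sub points at the predefined resolution factor (pixels per metre)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    n_sub = max(int(np.linalg.norm(end - start) * resolution), 1)
    for t in np.linspace(0.0, 1.0, n_sub + 1):
        x, y = start + t * (end - start)                     # sub point in metres
        row, col = int(y * resolution), int(x * resolution)  # grid indices
        if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
            grid[row, col] = 1                               # mark cell occupied
    return grid

# Example using the sub points and resolution factor from the description:
occupancy = np.zeros((2000, 2000), dtype=np.uint8)  # 100 m x 100 m at 20 px/m
rasterize_wall(occupancy, start=(12.34, 67.23), end=(13.32, 74.56))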
In an embodiment, the scan data identification module 215 may be configured to identify scan data 205 of the one or more big structures. The scan data 205 may provide information about the same big structures around the vehicle 101, captured by one or more sensors configured in the vehicle 101. As an example, the sensor may be a LIDAR sensor 250 configured in the vehicle 101. In an embodiment, a sensor such as a LIDAR may be configured in the vehicle 101 to measure the distance, from the sensor 250, of one or more structures in the FOV of the sensor 250 by illuminating the one or more structures with laser light. Once the one or more structures are illuminated by the laser light, the light is reflected in the form of reflection lines corresponding to the one or more structures. The reflection lines comprise one or more reflection points. An exemplary representation of the environment within the FOV of the LIDAR sensor 250 in the vehicle 101 is shown in Fig.2e. Fig.2e shows a representation of the vehicle 101 on a road 231 comprising one or more structures, such as buildings and street lamps, on the left side and right side of the vehicle 101. As an example, there may be two buildings, ‘building 1’ 233 and ‘building 2’ 235, and three street lamps, ‘street lamp 1’ 243, ‘street lamp 2’ 245, and ‘street lamp 3’ 247, on the left side of the vehicle 101. Similarly, there may be two buildings, ‘building 3’ 237 and ‘building 4’ 241, and one street lamp, ‘street lamp 4’ 249, on the right side of the vehicle 101. Each of the one or more structures reflects the laser light, which forms reflection lines as shown in Fig.2e. Once the reflection lines are formed, the scan data identification module 215 may identify one or more reflection lines that span a length greater than the first predefined length. As an example, the first predefined length may be ‘1 meter’. In this example, only the reflection line corresponding to ‘building 2’ 235 on the left side of the vehicle 101 and the reflection line corresponding to ‘building 3’ 237 on the right side of the vehicle 101 may span greater than the first predefined length. Such reflection lines are selected, as shown in Fig.2f. Thereafter, the scan data identification module 215 determines an orientation value and a centroid value of each of the selected one or more reflection lines. The orientation value may be identified by plotting the line on a graph as shown in Fig.2g.
As shown in Fig.2g, and for the purpose of illustration, only one of the reflection lines is plotted on the graph. Each reflection line may comprise x, y, and z values. A similar methodology may be adopted for all the reflection lines to identify their orientation and centroid values. For calculation of the orientation value, any two points in the reflection line may be considered, such as (x1, y1) and (x2, y2), wherein the z values of the line are not considered. Based on these points, the orientation value may be determined using Equation 1: orientation = tan⁻¹((y1 − y2)/(x1 − x2)). As an example, the centroid value may be identified by averaging the (x, y) values of all points in the reflection line being plotted on the graph.
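A short sketch of this computation is given below, assuming each reflection line is an array of (x, y, z) reflection points. Using atan2 in place of the raw arctangent of Equation 1 is an assumption that avoids division by zero for vertical lines; the function name is illustrative.

import math
import numpy as np

def orientation_and_centroid(line_points):
    """Compute the orientation (Equation 1) and centroid of one reflection
    line. `line_points` is an (N, 3) array of reflection points (x, y, z);
    the z values are ignored, as in the description."""
    pts = np.asarray(line_points, dtype=float)
    (x1, y1), (x2, y2) = pts[0, :2], pts[-1, :2]  # any two points on the line
    # atan2 implements tan^-1((y1 - y2)/(x1 - x2)) while also handling
    # vertical lines where x1 == x2.
    orientation = math.atan2(y1 - y2, x1 - x2)
    centroid = pts[:, :2].mean(axis=0)            # average of the (x, y) values
    return orientation, centroid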
Once the centroid value and the orientation value are identified for each of the selected lines, the scan data identification module 215 identifies one or more lines with similar centroid and orientation values. The lines with similar centroid and orientation values may be grouped, although these lines may have different ‘z’ values. In an embodiment, the grouping may be performed when the obtained number of reflection lines corresponding to the one or more structures is greater than a predefined number of reflection lines. Fig.2f shows the groups of lines with similar centroid and orientation values. As shown in Fig.2f, the reflection lines corresponding to ‘building 2’ 235 have similar orientation and centroid values and are hence grouped as ‘group 1’. Similarly, the reflection lines corresponding to ‘building 3’ 237 have similar orientation and centroid values and are hence grouped as ‘group 2’. Such lines with similar orientation and centroid values are identified as big structures. In this scenario, ‘building 2’ 235 and ‘building 3’ 237 may be identified as big structures since the reflection lines corresponding to these buildings have similar orientation and centroid values. Thereafter, the scan data identification module 215 may identify the scan data 205 of the one or more big structures. The scan data 205 may be identified by selecting a predefined number of the selected reflection lines within each group having the same orientation and centroid values, and transforming the orientation value and the centroid value to a common plane. The transformation may be done by leveling the “z” value in the current plane to the “z” value of the common plane. As an example, the “z” values of the reflection lines selected from ‘group 1’ and the “z” values of the reflection lines selected from ‘group 2’ may be transformed to a common, predefined “z” value, so that all the selected reflection lines from both groups lie in the same plane. Fig.2h shows a 2D top view representation of the scan data 205 identified for the corresponding big structures.
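The grouping and z-leveling described above could be sketched as follows. The similarity tolerances, the predefined number of lines per group, and the common z value are illustrative assumptions.

import numpy as np

def group_and_level(lines, angle_tol=0.1, dist_tol=0.5,
                    lines_per_group=5, z_common=0.0):
    """Group selected reflection lines with similar orientation and centroid
    values, then level the z values of a predefined number of lines per
    group to a common plane. Each item of `lines` is a tuple
    (orientation, centroid, points) with `points` an (N, 3) array."""
    groups = []
    for orientation, centroid, points in lines:
        for group in groups:
            ref_orientation, ref_centroid, _ = group[0]
            if (abs(orientation - ref_orientation) < angle_tol and
                    np.linalg.norm(np.subtract(centroid, ref_centroid)) < dist_tol):
                group.append((orientation, centroid, points))
                break
        else:                       # no similar group found: start a new one
            groups.append([(orientation, centroid, points)])

    scan_data = []
    for group in groups:            # each group corresponds to one big structure
        for _, _, points in group[:lines_per_group]:
            pts = np.array(points, dtype=float)
            pts[:, 2] = z_common    # level the z values to the common plane
            scan_data.append(pts)
    return scan_data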
In an embodiment, the vehicle localization module 217 may be configured to localize the vehicle 101 in the classified vector map. The vehicle localization module 217 may localize the vehicle 101 in the classified vector map based on the occupancy data 203 and the scan data 205, both of which provide information about the one or more big structures in the FOV of the vehicle 101.
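The disclosure does not prescribe a particular matching algorithm for this step. Purely as an assumption, the sketch below illustrates one common possibility: a correlative match that scores candidate vehicle poses by how many levelled scan points land on occupied cells of the occupancy grid.

import numpy as np

def localize(occupancy, scan_points, candidate_poses, resolution=20):
    """Score candidate vehicle poses by counting how many levelled scan
    points fall on occupied cells of the occupancy grid, returning the best
    (x, y, theta) hypothesis. `scan_points` is an (N, 2) array in the
    vehicle frame; `candidate_poses` is an iterable of (x, y, theta)."""
    scan_points = np.asarray(scan_points, dtype=float)
    best_pose, best_score = None, -1
    for x, y, theta in candidate_poses:
        c, s = np.cos(theta), np.sin(theta)
        # Transform the scan points from the vehicle frame to the map frame.
        mx = x + c * scan_points[:, 0] - s * scan_points[:, 1]
        my = y + s * scan_points[:, 0] + c * scan_points[:, 1]
        rows = (my * resolution).astype(int)
        cols = (mx * resolution).astype(int)
        valid = ((rows >= 0) & (rows < occupancy.shape[0]) &
                 (cols >= 0) & (cols < occupancy.shape[1]))
        score = int(occupancy[rows[valid], cols[valid]].sum())
        if score > best_score:
            best_pose, best_score = (x, y, theta), score
    return best_pose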
In an embodiment, the navigation module 219 may be configured to navigate the vehicle 101 from a source point to a destination point using a reference navigation path, once the vehicle 101 is localized in the classified vector map. The reference navigation path may be generated based on current environment data of the vehicle 101 and the speed of the vehicle 101. As an example, the current environment data may include, but is not limited to, data related to terrain and obstacles in the reference navigation path. Further, the navigation module 219 may also provide a current velocity to the vehicle 101 for navigating along the reference navigation path. The generated reference navigation path may be stored as reference navigation path data 207. The current velocity may be identified based on a predefined velocity detection technique.
Fig.3 shows a flowchart illustrating a method of localizing a vehicle in a classified vector map in accordance with some embodiments of the present disclosure.
As illustrated in Fig.3, the method 300 includes one or more blocks illustrating a method of localizing a vehicle 101 in a classified vector map. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, and functions, which perform specific functions or implement specific abstract data types.
The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method. Additionally, individual blocks may be deleted from the methods without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof.
At block 301, the method 300 may include receiving classified vector map data 201 for an environment within a Field of View (FOV) of the vehicle 101. The classified vector map data 201 comprises data of one or more structures, such as symbols used to represent the structures based on attributes of the structures.
At block 303, the method 300 may include generating occupancy data 203 of one or more big structures with a wide vertical surface in the FOV of the vehicle 101 based on the classified vector map data 201. A wide vertical surface is one whose width and height are above a predefined threshold value. As an example, the predefined threshold value may be 1 meter. The one or more structures may be considered big structures if their height and width are above the predefined threshold value. The occupancy data 203 is a 2D map of the one or more big structures around the vehicle 101.
At block 305, the method 300 may include identifying scan data 205 of the one or more big structures in the FOV of one or more sensors configured in the vehicle 101, based on reflection lines comprising one or more reflection points. The one or more reflection points may be formed using the one or more sensors and correspond to a surface of the one or more big structures.
At block 307, the method 300 may include localizing the vehicle 101 in the classified vector map. The vehicle 101 may be localized in the classified vector map based on the occupancy data 203 and the scan data 205.
In an embodiment, once the vehicle 101 is localized, the method may include navigating the vehicle 101 from a source point to a destination point along a reference navigation path.
Computer System
Fig.4 illustrates a block diagram of an exemplary computer system 400 for implementing embodiments consistent with the present disclosure. In an embodiment, the computer system 400 may be the vehicle localization system 105, which is used for localizing a vehicle in a classified vector map. The computer system 400 may include a central processing unit (“CPU” or “processor”) 402. The processor 402 may comprise at least one data processor for executing program components for executing user or system-generated business processes. The processor 402 may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc.
The processor 402 may be disposed in communication with one or more input/output (I/O) devices (411 and 412) via I/O interface 401. The I/O interface 401 may employ communication protocols/methods such as, without limitation, audio, analog, digital, stereo, IEEE-1394, serial bus, Universal Serial Bus (USB), infrared, PS/2, BNC, coaxial, component, composite, Digital Visual Interface (DVI), high-definition multimedia interface (HDMI), Radio Frequency (RF) antennas, S-Video, Video Graphics Array (VGA), IEEE 802.11a/b/g/n/x, Bluetooth, cellular (e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+), Global System For Mobile Communications (GSM), Long-Term Evolution (LTE) or the like), etc. Using the I/O interface 401, the computer system 400 may communicate with one or more I/O devices 411 and 412. In some implementations, the I/O interface 401 may be used to connect to a database 103 to receive classified vector map data.
In some embodiments, the processor 402 may be disposed in communication with a communication network 409 via a network interface 403. The network interface 403 may communicate with the communication network 409. The network interface 403 may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), Transmission Control Protocol/Internet Protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc.
The communication network 409 can be implemented as one of the several types of networks, such as intranet or Local Area Network (LAN) and such within the organization. The communication network 409 may either be a dedicated network or a shared network, which represents an association of several types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), etc., to communicate with each other. Further, the communication network 409 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, etc.
In some embodiments, the processor 402 may be disposed in communication with a memory 405 (e.g., RAM 413, ROM 414, etc. as shown in FIG. 4) via a storage interface 404. The storage interface 404 may connect to memory 405 including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as Serial Advanced Technology Attachment (SATA), Integrated Drive Electronics (IDE), IEEE-1394, Universal Serial Bus (USB), fiber channel, Small Computer Systems Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs (RAID), solid-state memory devices, solid-state drives, etc.
The memory 405 may store a collection of program or database components, including, without limitation, user/application data 406, an operating system 407, a web browser 408, a mail client 415, a mail server 416, a web server 417 and the like. In some embodiments, the computer system 400 may store user/application data 406, such as the data, variables, records, etc. as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle® or Sybase®.
The operating system 407 may facilitate resource management and operation of the computer system 400. Examples of operating systems include, without limitation, APPLE MACINTOSH® OS X, UNIX®, UNIX-like system distributions (e.g., BERKELEY SOFTWARE DISTRIBUTION™ (BSD), FREEBSD™, NETBSD™, OPENBSD™, etc.), LINUX DISTRIBUTIONS™ (e.g., RED HAT™, UBUNTU™, KUBUNTU™, etc.), IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8, 10 etc.), APPLE® IOS™, GOOGLE® ANDROID™, BLACKBERRY® OS, or the like. A user interface may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, user interfaces may provide computer interaction interface elements on a display system operatively connected to the computer system 400, such as cursors, icons, check boxes, menus, windows, widgets, etc. Graphical User Interfaces (GUIs) may be employed, including, without limitation, APPLE MACINTOSH® operating systems, IBM™ OS/2, MICROSOFT™ WINDOWS™ (XP™, VISTA™/7/8, 10 etc.), UNIX® X-Windows, web interface libraries (e.g., AJAX™, DHTML™, ADOBE® FLASH™, JAVASCRIPT™, JAVA™, etc.), or the like.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., non-transitory. Examples include Random Access Memory (RAM), Read-Only Memory (ROM), volatile memory, nonvolatile memory, hard drives, Compact Disc (CD) ROMs, Digital Video Discs (DVDs), flash drives, disks, and any other known physical storage media.
Advantages of the embodiment of the present disclosure are illustrated herein.
In an embodiment, the present disclosure provides a method and system to localize a vehicle in a classified vector map.
In an embodiment, the present disclosure uses a classified vector map, which is openly available vector map data, and hence there is no need to generate a raster map, which would require huge storage and deployment infrastructure.
In an embodiment, the present disclosure uses the classified vector map, which is annotated accurately, and hence helps in accurate localization of the vehicle in the classified vector map.
In an embodiment, the present disclosure localizes the vehicle based on annotation data of big structures around the vehicle and hence helps in accurate localization of the vehicle.
The terms "an embodiment", "embodiment", "embodiments", "the embodiment", "the embodiments", "one or more embodiments", "some embodiments", and "one embodiment" mean "one or more (but not all) embodiments of the invention(s)" unless expressly specified otherwise.
The terms "including", "comprising", “having” and variations thereof mean "including but not limited to", unless expressly specified otherwise. The enumerated listing of items does not imply that any or all the items are mutually exclusive, unless expressly specified otherwise.
The terms "a", "an" and "the" mean "one or more", unless expressly specified otherwise.
A description of an embodiment with several components in communication with each other does not imply that all such components are required. On the contrary, a variety of optional components are described to illustrate the wide variety of possible embodiments of the invention.
When a single device or article is described herein, it will be clear that more than one device/article (whether or not they cooperate) may be used in place of a single device/article. Similarly, where more than one device or article is described herein (whether or not they cooperate), it will be clear that a single device/article may be used in place of the more than one device or article, or a different number of devices/articles may be used instead of the shown number of devices or programs. The functionality and/or the features of a device may be alternatively embodied by one or more other devices which are not explicitly described as having such functionality/features. Thus, other embodiments of the invention need not include the device itself.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the embodiments of the present invention are intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Referral Numerals:
Reference Number Description
100 Architecture
101 Vehicle
103 Database
105 Vehicle localization system
107 Processor
109 I/O Interface
111 Memory
201 Classified vector map data
203 Occupancy data
205 Scan data
207 Reference navigation path data
209 Other data
211 Receiving module
213 Occupancy data generation module
215 Scan data identification module
217 Vehicle localization module
219 Navigation module
221 Other modules
230 Classified vector map
231 Road
232 FOV of vehicle
233 Building 1
235 Building 2
237 Building 3
238 Start Sub Point
239 End Sub Point
241 Building 4
243 Street Lamp 1
245 Street Lamp 2
247 Street Lamp 3
249 Street Lamp 4
250 LIDAR sensor
400 Exemplary computer system
401 I/O Interface of the exemplary computer system
402 Processor of the exemplary computer system
403 Network interface
404 Storage interface
405 Memory of the exemplary computer system
406 User /Application
407 Operating system
408 Web browser
409 Communication network
411 Input devices
412 Output devices
413 RAM
414 ROM
415 Mail Client
416 Mail Server
417 Web Server