Lidar And Radar Based Tracking And Mapping System And Method Thereof

Abstract: A system implemented in a vehicle for tracking and mapping of one or more objects to identify free space is disclosed. The system has an input unit having lidar sensors and radar sensors that sense objects in a region surrounding the vehicle, and a processing unit that: receives data from lidar sensors and radar sensors and maps the data in corresponding grid maps of corresponding sensors; tracks objects in regions corresponding to the sensors and performs estimation for objects not sensed by any of the sensors; fuses the grid maps by converting them from sensor frame to vehicle frame to generate a fused grid map; and integrates the fused grid map with any or a combination of track management and scan matching to perform classification of the one or more objects into static objects or dynamic objects and identification of free space in the fused grid map.

Patent Information

Filing Date: 14 June 2019
Publication Number: 29/2019
Publication Type: INA
Invention Field: PHYSICS
Email: info@khuranaandkhurana.com
Grant Date: 2023-08-01

Applicants

KPIT Technologies Limited
Plot -17, Rajiv Gandhi Infotech Park, MIDC-SEZ, Phase-III, Maan, Hinjawadi, Taluka-Mulshi, Pune 411057, Maharashtra, India.

Inventors

1. DAS, Soumyo
KPIT Technologies Ltd., Plot -17, Rajiv Gandhi Infotech Park, MIDC-SEZ, Phase-III, Maan, Hinjawadi, Taluka-Mulshi, Pune 411057, Maharashtra, India.
2. DEY, Rastri
KPIT Technologies Ltd., Plot -17, Rajiv Gandhi Infotech Park, MIDC-SEZ, Phase-III, Maan, Hinjawadi, Taluka-Mulshi, Pune 411057, Maharashtra, India.

Specification

Claims:

1. A system implemented in a vehicle for tracking of one or more objects to identify free space, said system comprising: an input unit comprising one or more lidar sensors and one or more radar sensors to sense the surroundings of the vehicle, wherein each of the one or more lidar sensors and one or more radar sensors senses the one or more objects in a corresponding region; and a processing unit comprising a processor coupled with a memory, the memory storing instructions executable by the processor to: receive lidar data from the one or more lidar sensors and radar data from the one or more radar sensors and map the received lidar data and the received radar data in corresponding one or more grid maps of the one or more lidar sensors and the one or more radar sensors; track the one or more objects in one or more regions corresponding to the one or more lidar sensors and the one or more radar sensors and perform state estimation for the one or more objects that are not sensed by any of the one or more lidar sensors and the one or more radar sensors; and fuse the one or more grid maps of the one or more lidar sensors and the one or more radar sensors by converting said one or more grid maps from sensor frame to vehicle frame to generate a fused grid map, wherein the fused grid map is integrated with any or a combination of track management and scan matching to perform classification of the one or more objects into static objects or dynamic objects and identification of free space in the fused grid map.

2. The system of claim 1, wherein the one or more lidar sensors and the one or more radar sensors are configured on the surface of the vehicle to sense the objects in corresponding one or more majorly non-overlapping regions to capture a 360 degree view around the vehicle.

3. The system of claim 1, wherein the processor eliminates one or more data points pertaining to ground, from each grid map, by computing a surface normal using at least three data points selected from the lidar data, and wherein the at least three data points are spaced at a distance less than a pre-defined threshold from each other.

4. The system of claim 3, wherein the processor eliminates the one or more data points pertaining to the ground by computing the height of each data point from the ground and considering the target distance-height of the lidar sensor with the computed surface normal.

5. The system of claim 1, wherein when the one or more objects are tracked in the one or more regions, the processor performs track initialization and management based on: a. track initialization and management to ensure that the track is maintained while at least one object of the one or more objects transitions from the region of a first sensor to the region of a second sensor, wherein the first sensor and the second sensor are selected from the one or more lidar sensors and the one or more radar sensors; b. weighted fusion based velocity estimation of the tracked one or more objects based on lidar and radar tracking time; and c. occlusion identification based on the one or more objects sensed by the one or more radar sensors.

6. The system of claim 1, wherein the processor further synthesizes an environment to create an environment map, and wherein the environment map is memorized to be used for performing the classification of the one or more objects for identification of free space in the fused grid map.
7. The system of claim 1, wherein when at least one object of the one or more objects is a pedestrian, the at least one object is classified using: a. size of a point cloud pertaining to the pedestrian, obtained from the lidar data, with respect to the longitudinal and lateral distance from the vehicle and the zone of the point cloud; b. structure and availability of the point cloud in one or more channels of the one or more lidar sensors; c. a deterministic velocity vector of the point cloud indicating the velocity vector of the pedestrian; and d. history of the trajectory of the point cloud.

8. The system of claim 1, wherein the processor reconstructs and maps one or more cluster points, obtained from the lidar data, on one or more data points obtained from the radar data for mapping of the one or more objects on the fused grid map to form the complete surroundings around the host vehicle.

9. A method, carried out according to instructions stored in a computer implemented in a vehicle for tracking of one or more objects to identify free space, comprising: receiving lidar data from one or more lidar sensors and radar data from one or more radar sensors and mapping the received lidar data and the received radar data in a grid, wherein each of the one or more lidar sensors and one or more radar sensors senses the one or more objects in a corresponding region; tracking the one or more objects in one or more regions corresponding to the one or more lidar sensors and the one or more radar sensors and performing state estimation for the one or more objects that are not sensed by any of the one or more lidar sensors and the one or more radar sensors; and fusing the one or more grid maps of the one or more lidar sensors and the one or more radar sensors by converting said one or more grid maps from sensor frame to vehicle frame to generate a fused grid map, wherein the fused grid map is integrated with any or a combination of track management and scan matching to perform classification of the one or more static or dynamic objects and identification of free space in the fused grid map.

Description:

FIELD OF DISCLOSURE

The present disclosure relates to vehicle navigation systems. More particularly, it relates to a system for tracking various objects around a vehicle.

BACKGROUND OF THE DISCLOSURE

A reliable target detection and tracking system is a key element in vehicle automation. Tracking systems use numerous sensors such as radar sensors and LIDAR (Light Detection and Ranging, interchangeably termed lidar herein) sensors for tracking targets or objects that are important for manoeuvring of the host vehicle. While tracking an object moving from one sensing zone to another, radar sensing provides minimal data points of the target (object), while lidar sensing provides a point cloud with background noise and ground reflection. Hence an optimal track management strategy and free space detection for a target that is moving with high velocity and manoeuvring is a problem due to background objects, probable clutter and false positives. Further, various existing systems are not able to provide 360 degree target tracking and mapping even while using high computing power, thereby compromising accuracy. Another problem in existing approaches is the synchronization and classification of data points captured by lidar sensors and radar sensors.
There is therefore a need in the art for a system and a method that overcome the above-mentioned and other disadvantages of existing approaches for target tracking and free space detection.

OBJECTS OF THE INVENTION

Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed below.

It is an object of the present disclosure to provide a system that integrates track management with grid mapping to enable 360 degree target tracking and mapping.

It is an object of the present disclosure to provide a system that uses less computation power and is more responsive.

It is an object of the present disclosure to provide a system that tracks surrounding objects with greater accuracy than camera based systems.

It is an object of the present disclosure to provide a system that eliminates ground data and the consequent errors due to rough or turbulent ground surfaces.

It is an object of the present disclosure to provide a system that helps in surround view creation and in tracking within the non-sensing region of any of the sensors (blind zone area tracking).

It is an object of the present disclosure to provide a system that identifies various occlusions with improved accuracy.

It is an object of the present disclosure to provide a system that improves zone or track initialization over conventional averaging techniques.

It is an object of the present disclosure to provide a system with improved segregation of static and dynamic targets.

It is an object of the present disclosure to provide a system with an improved approach for pedestrian classification.

It is an object of the present disclosure to provide a system that enhances the ability to scan the complex environment of a crowded city and the unpredictable movement of surrounding traffic vehicles and pedestrians.

It is an object of the present disclosure to provide a system that tracks the non-linear and highly manoeuvring movement of targets and provides detailed information on free space availability for host vehicle navigation.

It is an object of the present disclosure to provide a system with a greater detection range than camera based systems.

SUMMARY

This summary is provided to introduce simplified concepts of a lidar and radar based tracking system and method thereof, which are further described below in the Detailed Description. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended for use in determining or limiting the scope of the claimed subject matter.
An aspect of the present disclosure provides a system implemented in a vehicle for tracking of one or more objects to identify free space, said system comprising: an input unit comprising one or more lidar sensors and one or more radar sensors to sense the surroundings of the vehicle, wherein each of the one or more lidar sensors and one or more radar sensors senses the one or more objects in a corresponding region; and a processing unit comprising a processor coupled with a memory, the memory storing instructions executable by the processor to: receive lidar data from the one or more lidar sensors and radar data from the one or more radar sensors and map the received lidar data and the received radar data in corresponding one or more grid maps of the one or more lidar sensors and the one or more radar sensors; track the one or more objects in one or more regions corresponding to the one or more lidar sensors and the one or more radar sensors and perform state estimation for the one or more objects that are not sensed by any of the one or more lidar sensors and the one or more radar sensors; and fuse the one or more grid maps of the one or more lidar sensors and the one or more radar sensors by converting said one or more grid maps from sensor frame to vehicle frame to generate a fused grid map, wherein the fused grid map is integrated with any or a combination of track management and scan matching to perform classification of the one or more objects into static objects or dynamic objects and identification of free space in the fused grid map.

In an embodiment, the one or more lidar sensors and the one or more radar sensors are configured on the surface of the vehicle to sense the objects in corresponding one or more majorly non-overlapping regions to capture a 360 degree view around the vehicle.

In an embodiment, the processor eliminates one or more data points pertaining to ground, from each grid map, by computing a surface normal using at least three data points selected from the lidar data, wherein the at least three data points are spaced at a distance less than a pre-defined threshold from each other.

In an embodiment, the processor eliminates the one or more data points pertaining to the ground by computing the height of each data point from the ground and considering the target distance-height of the lidar sensor with the computed surface normal.

In an embodiment, when the one or more objects are tracked in the one or more regions, the processor performs track initialization based on: target information, tracking time, sensor type (lidar or radar) and the like, to ensure that the track is created properly, which necessitates the track management; weighted fusion based velocity estimation of the tracked one or more objects based on lidar and radar tracking time; and occlusion identification based on the one or more objects sensed by the one or more radar sensors.

In an embodiment, the processor further synthesizes an environment to create an environment map, and the environment map is memorized to be used for performing the classification of the one or more objects and thereby determining availability of free space.
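By way of illustration of the sensor-frame to vehicle-frame grid fusion described above, a minimal Python sketch is given below. The cell size, grid dimensions, mounting poses and the cell-wise max rule used to merge the per-sensor grids are assumptions made only for this example; the specification does not prescribe them.

```python
import numpy as np

# Hypothetical parameters: 20 cm cells on a 60 m x 60 m grid centred on the vehicle.
CELL_SIZE = 0.2
GRID_DIM = 300            # 300 x 300 cells
ORIGIN = GRID_DIM // 2    # vehicle at the grid centre


def sensor_to_vehicle(points_xy, mount_x, mount_y, mount_yaw):
    """Rotate/translate sensor-frame points (N x 2, metres) into the vehicle frame."""
    c, s = np.cos(mount_yaw), np.sin(mount_yaw)
    rot = np.array([[c, -s], [s, c]])
    return points_xy @ rot.T + np.array([mount_x, mount_y])


def rasterise(points_xy_vehicle):
    """Mark occupied cells in a fresh per-sensor grid (1 = occupied)."""
    grid = np.zeros((GRID_DIM, GRID_DIM), dtype=np.uint8)
    idx = np.floor(points_xy_vehicle / CELL_SIZE).astype(int) + ORIGIN
    valid = np.all((idx >= 0) & (idx < GRID_DIM), axis=1)
    grid[idx[valid, 1], idx[valid, 0]] = 1
    return grid


def fuse(grids):
    """Cell-wise max (OR) fusion of the per-sensor grids into one vehicle-frame map."""
    return np.maximum.reduce(grids)


if __name__ == "__main__":
    # Hypothetical detections: one object seen by a front lidar, one by a side radar.
    front_lidar_pts = np.array([[10.0, 0.5], [10.2, 0.6]])   # sensor frame, metres
    side_radar_pts = np.array([[5.0, 0.0]])
    lidar_grid = rasterise(sensor_to_vehicle(front_lidar_pts, 3.7, 0.0, 0.0))
    radar_grid = rasterise(sensor_to_vehicle(side_radar_pts, 2.0, 0.9, np.pi / 2))
    fused = fuse([lidar_grid, radar_grid])
    print("occupied cells:", int(fused.sum()))
```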
In an embodiment, when at least one object of the one or more objects is a pedestrian, the at least one object is classified using: size of a point cloud pertaining to the pedestrian, obtained from the lidar data, with respect to the longitudinal and lateral distance from the vehicle and the zone of the point cloud; structure and availability of the point cloud in one or more channels of the one or more lidar sensors; a deterministic velocity vector of the point cloud indicating the velocity vector of the pedestrian; and history of the trajectory of the point cloud.

In an embodiment, the processor reconstructs and maps one or more cluster points, obtained from the lidar data, on one or more data points obtained from the radar data for mapping of the one or more objects on the fused grid to form the complete surroundings around the host vehicle.

Another aspect of the present disclosure relates to a method carried out according to instructions stored in a computer implemented in a vehicle for tracking of one or more objects to identify free space, comprising: receiving lidar data from one or more lidar sensors and radar data from one or more radar sensors and mapping the received lidar data and the received radar data in a grid, wherein each of the one or more lidar sensors and one or more radar sensors senses the one or more objects in a corresponding region; tracking the one or more objects in one or more regions corresponding to the one or more lidar sensors and the one or more radar sensors and performing state estimation for the one or more objects that are not sensed by any of the one or more lidar sensors and the one or more radar sensors; and fusing the one or more grid maps of the one or more lidar sensors and the one or more radar sensors by converting said one or more grid maps from sensor frame to vehicle frame to generate a fused grid map, wherein the fused grid map is integrated with any or a combination of track management and scan matching to perform classification of the one or more static or dynamic objects and identification of free space.

Various objects, features, aspects and advantages of the present disclosure will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like features.

Within the scope of this application it is expressly envisaged that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. Features described in connection with one embodiment are applicable to all embodiments, unless such features are incompatible.

BRIEF DESCRIPTION OF DRAWINGS

The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. The diagrams are for illustration only and are thus not a limitation of the present disclosure, wherein:

FIG. 1A illustrates the overall working of the lidar and radar based tracking system in accordance with an exemplary embodiment of the present disclosure.

FIG. 1B illustrates the architecture of the system in accordance with an exemplary embodiment of the present disclosure.
FIG. 2 illustrates exemplary modules of a processing unit in accordance with an embodiment of the present disclosure.

FIG. 3 illustrates a grid based 360 degree surround view system in accordance with an exemplary embodiment of the present disclosure.

FIG. 4 illustrates environment ground data elimination based on surface normal plane computation and height of a point from the ground in accordance with an exemplary embodiment of the present disclosure.

FIG. 5A illustrates grid fusion in accordance with an exemplary embodiment of the present disclosure.

FIG. 5B illustrates a representation of environment synthesis and memorization in accordance with an exemplary embodiment of the present disclosure.

FIG. 6 illustrates joint track management and scan matching for dynamic target classification in accordance with an exemplary embodiment of the present disclosure.

FIG. 7A illustrates point cloud distribution for a pedestrian in accordance with an exemplary embodiment of the present disclosure.

FIG. 7B illustrates re-mapping of a lidar cluster to radar feedback and a tracked object to establish efficiency of the whole grid in accordance with an exemplary embodiment of the present disclosure.

FIG. 8 illustrates a method of performing lidar and radar based tracking in accordance with an exemplary embodiment of the present disclosure.

DETAILED DESCRIPTION

The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims.

In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.

Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, and firmware and/or by human operators.

Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.

If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise. Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. These exemplary embodiments are provided only for illustrative purposes and so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. The invention disclosed may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Various modifications will be readily apparent to persons skilled in the art. The general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the invention. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure). Also, the terminology and phraseology used is for the purpose of describing exemplary embodiments and should not be considered limiting. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications and equivalents consistent with the principles and features disclosed. For purpose of clarity, details relating to technical material that is known in the technical fields related to the invention have not been described in detail so as not to unnecessarily obscure the present invention. Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named element. Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. 
The term “machine-readable storage medium” or “computer-readable storage medium” includes, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), and magneto-optical disks, semiconductor memories, such as ROMs, PROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other type of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).A machine-readable medium may include a non-transitory medium in which data may be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, memory or memory devices. A computer-program product may include code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc. Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks. Systems depicted in some of the figures may be provided in various configurations. In some embodiments, the systems may be configured as a distributed system where one or more components of the system are distributed across one or more networks in a cloud computing system. Each of the appended claims defines a separate invention, which for infringement purposes is recognized as including equivalents to the various elements or limitations specified in the claims. Depending on the context, all references below to the "invention" may in some cases refer to certain specific embodiments only. In other cases it will be recognized that references to the "invention" will refer to subject matter recited in one or more, but not necessarily all, of the claims. All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention. Various terms as used herein are shown below. 
To the extent a term used in a claim is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.

The present disclosure relates to a system for tracking various objects around a vehicle. More particularly, it relates to a lidar and radar based tracking system that uses sensor data fusion for tracking of objects and free space detection around the vehicle.

An aspect of the present disclosure provides a system implemented in a vehicle for tracking of one or more objects to identify free space, said system comprising: an input unit comprising one or more lidar sensors and one or more radar sensors to sense the surroundings of the host vehicle, wherein each of the one or more lidar sensors and one or more radar sensors senses the one or more objects in a corresponding region; and a processing unit comprising a processor coupled with a memory, the memory storing instructions executable by the processor to: receive lidar data from the one or more lidar sensors and radar data from the one or more radar sensors and map the received lidar data and the received radar data in corresponding one or more grid maps of the one or more lidar sensors and the one or more radar sensors; track the one or more objects in one or more regions corresponding to the one or more lidar sensors and the one or more radar sensors and perform state estimation for the one or more objects that are not sensed by any of the one or more lidar sensors and the one or more radar sensors; and fuse the one or more grid maps of the one or more lidar sensors and the one or more radar sensors by converting said one or more grid maps from sensor frame to vehicle frame to generate a fused grid map, wherein the fused grid map is integrated with any or a combination of track management and scan matching to perform classification of the one or more objects into static objects or dynamic objects and identification of free space.

In an embodiment, the one or more lidar sensors and the one or more radar sensors are configured on the surface of the vehicle to sense the objects in corresponding one or more majorly non-overlapping regions to capture a 360 degree view around the host vehicle.

In an embodiment, the processor eliminates one or more data points pertaining to ground, from each grid map, by computing a surface normal plane using at least three data points selected from the lidar data, wherein the at least three data points are spaced at a distance less than a pre-defined threshold from each other.

In an embodiment, the processor eliminates the one or more data points pertaining to the ground by computing the height of each data point from the ground and considering the target distance-height of the lidar sensor with the computed surface normal plane.

In an embodiment, when the one or more objects are tracked in the one or more regions, the processor performs track initialization based on: target information, track history and the sensor type associated with track initialization, to ensure that the track is maintained properly; weighted fusion based velocity estimation of the tracked one or more objects based on lidar and radar tracking time; and occlusion identification based on the one or more objects sensed by the one or more radar sensors in addition to occlusion identified by the lidar.
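The weighted fusion based velocity estimation mentioned above can be illustrated with a small Python sketch. The weighting rule shown here, which uses each sensor's continuous tracking time as its weight, is an assumption chosen for the example; the specification states only that the fusion is weighted and based on lidar and radar tracking time.

```python
def fused_velocity(v_lidar, t_lidar, v_radar, t_radar):
    """Weighted fusion of lidar and radar velocity estimates.

    v_lidar, v_radar : (vx, vy) estimates from the two trackers, in m/s
    t_lidar, t_radar : continuous tracking time of the object by each sensor, in s
    The tracking times are used as weights (an assumed rule), so the sensor that
    has observed the target longer dominates the fused estimate.
    """
    total = t_lidar + t_radar
    if total == 0.0:
        return (0.0, 0.0)
    w_l, w_r = t_lidar / total, t_radar / total
    return (w_l * v_lidar[0] + w_r * v_radar[0],
            w_l * v_lidar[1] + w_r * v_radar[1])


# Example: the radar has tracked the target for 2.0 s, the lidar only for 0.5 s,
# so the fused velocity leans towards the radar estimate.
print(fused_velocity((4.8, 0.1), 0.5, (5.2, 0.0), 2.0))
```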
In an embodiment, the processor further synthesizes an environment to create an environment map, and the environment map is memorized and used for performing the classification of the one or more objects for identification of free space in the fused grid map.

In an embodiment, when at least one object of the one or more objects is a pedestrian, the at least one object is classified using: size of a point cloud pertaining to the pedestrian, obtained from the lidar data, with respect to the relative longitudinal and lateral distance from the vehicle and the zone of the point cloud; structure and availability of the point cloud in one or more channels of the one or more lidar sensors; a deterministic velocity vector of the point cloud indicating the velocity vector of the pedestrian; and history of the trajectory of the point cloud.

In an embodiment, the processor reconstructs and maps one or more cluster points, obtained from the lidar data, on one or more data points obtained from the radar data for mapping of the one or more objects on the fused grid map to form the complete surroundings around the host vehicle.

Another aspect of the present disclosure relates to a method carried out according to instructions stored in a computer implemented in a vehicle for tracking of one or more objects to identify free space, comprising: receiving lidar data from one or more lidar sensors and radar data from one or more radar sensors and mapping the received lidar data and the received radar data in a grid, wherein each of the one or more lidar sensors and one or more radar sensors senses the one or more objects in a corresponding region; tracking the one or more objects in one or more regions corresponding to the one or more lidar sensors and the one or more radar sensors and performing state estimation for the one or more objects that are not sensed by any of the one or more lidar sensors and the one or more radar sensors; and fusing the one or more grid maps of the one or more lidar sensors and the one or more radar sensors by converting said one or more grid maps from sensor frame to vehicle frame to generate a fused grid map, wherein the fused grid map is integrated with any or a combination of track management and scan matching to perform classification of the one or more static or dynamic objects and identification of free space in the fused grid map.

FIG. 1A illustrates the overall working of the lidar and radar based tracking system and FIG. 1B illustrates the architecture of the system in accordance with an exemplary embodiment of the present disclosure. In an aspect, the lidar and radar based tracking system (interchangeably termed system 100 herein) includes an input unit 102, a processing unit 104 and an output unit 114. The input unit 102 has one or more lidar sensors (interchangeably termed lidars herein) and one or more radar sensors (interchangeably termed radars herein) to sense the surroundings of a vehicle. Blocks 152 and 156 form the 360 degree SVTS (surround view tracking system) 154, such that each of the one or more lidar sensors and one or more radar sensors senses the one or more objects in a corresponding region. The sensors are configured on the surface of the vehicle to sense the objects in corresponding one or more majorly non-overlapping regions to capture a 360 degree view around the vehicle. The processing unit 104 receives the data from the input unit 102. At block 158, segmentation, clustering and feature extraction are performed, where the lidar data point cloud is converted to a Cartesian co-ordinate system.
Further, features such as the dimensions, extremes and corners of the targets are identified using robust segment fitting and probabilistic dimension derivation. At block 160, environment ground data is eliminated from the lidar data based on the height of the data points with respect to the ground and surface normal computation. Thereafter, at step 106, the processing unit 104 maps the received lidar data and the received radar data in corresponding one or more grid maps of the one or more lidar sensors and the one or more radar sensors using zone track management 164 and time synchronization 162.

At step 108, the processing unit 104 tracks the one or more objects in one or more regions corresponding to the one or more lidar sensors and the one or more radar sensors and performs state estimation for the one or more objects that are not sensed by any of the one or more lidar sensors and the one or more radar sensors. In an embodiment, track and state estimation in the respective regions may be achieved by zone tracking confidence establishment, which is an integral part of centralized track management. Zone tracking confidence establishment is useful for track management in the non-sensing region (the region not covered by any perception sensor). It includes techniques such as non-sensing region identification, zone classification and region based tracking, estimation technique selection, tracking time and sensing confidence. The zone track management 164 provides feedback to the segmentation, clustering and feature extraction block 158, which further reduces the computation burden by scanning the area adjacent to an existing tracked object for clustering and thereby improves the clustering. Other clusters for new objects are segmented based on nearest neighbour mapping and segmentation.

At block 162, lidar and radar data synchronization is performed. The sensed data from the lidar sensors (after segmentation, clustering and feature extraction 158 and environment ground data elimination 160) and the radar sensors are time synchronized based on a sequential approach. Further, track management and prediction updates may be performed based on the information available from the sensors. In the context of the present example, adapted initialization for surround view tracking is performed at block 166 for integrated fusion of radar, lidar and vehicle sensors for a zone. The one or more objects are tracked in the one or more regions by performing track initialization, which further uses lidar and radar track management for tracking of the one or more objects to obtain local radar and local lidar tracks, as explained below with reference to the track initialization module 212.

At step 110, the processing unit 104 fuses the one or more grid maps of the one or more lidar sensors and the one or more radar sensors by converting said one or more grid maps from sensor frame to vehicle frame to generate a fused grid map at block 176 (using inputs from blocks 182, 178 and 180). At step 112, the processing unit 104 integrates the fused grid map with any or a combination of track management and scan matching for dynamic target classification 168, for classification of the one or more objects into static objects or dynamic objects and identification of free space in the fused grid map. The output of block 168 may be used for pedestrian classification in the pedestrian point model 172.
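The pedestrian point model can be illustrated with a rule-based Python sketch that combines the four cues recited in the claims (cluster size relative to distance, channel availability, a deterministic velocity vector and trajectory history). All field names and thresholds below are illustrative assumptions; the specification does not give numeric values.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Cluster:
    """A lidar cluster candidate; all fields are illustrative."""
    width_m: float                 # lateral extent of the point cloud
    length_m: float                # longitudinal extent of the point cloud
    range_m: float                 # distance of the cluster from the host vehicle
    channels_hit: int              # number of lidar channels containing returns
    velocity: Tuple[float, float]  # deterministic velocity vector, m/s
    history: List[Tuple[float, float]] = field(default_factory=list)  # past positions


def is_pedestrian(c: Cluster) -> bool:
    """Heuristic pedestrian test; thresholds are assumptions, not from the patent."""
    # a) point-cloud size consistent with a person, relaxed with range
    size_ok = c.width_m < 1.0 + 0.01 * c.range_m and c.length_m < 1.0 + 0.01 * c.range_m
    # b) returns present in enough vertical channels to rule out clutter
    structure_ok = c.channels_hit >= 2
    # c) speed within a typical walking/jogging range
    speed = (c.velocity[0] ** 2 + c.velocity[1] ** 2) ** 0.5
    speed_ok = speed < 3.0
    # d) trajectory history long enough to trust the track
    history_ok = len(c.history) >= 3
    return size_ok and structure_ok and speed_ok and history_ok


print(is_pedestrian(Cluster(0.6, 0.4, 12.0, 3, (1.1, 0.2),
                            [(11.9, 0.1), (11.8, 0.2), (11.7, 0.3)])))
```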
In an aspect, when the one or more objects are tracked in the one or more regions, the processing unit 104 performs track initialization based on: target information, track history and the sensor type involved in track initialization; weighted fusion based velocity estimation of the tracked one or more objects based on lidar and radar tracking time; and occlusion identification based on the one or more objects sensed by the one or more radar sensors in addition to occlusion identified by the lidar sensors.

According to an embodiment, the system 100 integrates sensed signals from radar and lidar pre-processed data. The technique uses initialization of a track based on post-processing of the sensed signals from the radar and lidar sensors, and identification and classification of target features from cluster signals derived from the lidar and radar sensed signals. The system 100 includes multi-target track management and sensor data fusion comprising synchronization, track initialization, centralized track management, a fused grid map and target classification in the grid. Furthermore, the system 100 determines the availability of free space. In an embodiment, environment ground data is eliminated (at block 160) based on surface normal computation. At block 174, based on grid fusion 176 integrated with track management, the availability of free space is determined. The output unit 114 may be a display device or any other audio-visual device that indicates the detected free space to the user.

According to an embodiment, the system 100 uses an out-of-sequence strategy for cascaded track management. The strategy involves updating the sensor fusion with signals received from multiple sensors at varied time intervals. The out-of-sequence strategy deals with the problem of signals being received from different sensors at different times. It decides whether to rely on sensor fusion or on an individual sensor, and thereby when the state and covariance are updated. At block 162, the signals, which are the outcomes of different sensors and are received at different intervals, are synchronized for data fusion and validation. The discrepancy in signal receiving timing is resolved by the following strategy: the signal from each sensor is handled by a time synchronization mechanism where the synchronization is based on data points from the front lidar. The front lidar and rear lidar are synchronized during installation, and data from the other side sensors is mapped with respect to the front lidar time frame, i.e. the data of the other sensors is processed in multiples of the sensing time frame of the front lidar. As the front lidar executes at 0.08 s, the processing delay of the other side sensors will be significantly less.

FIG. 2 illustrates exemplary modules of a processing unit in accordance with an embodiment of the present disclosure. The present disclosure elaborates upon a system implemented in a host vehicle for tracking of one or more objects to identify free space. As elaborated with reference to FIG. 1A above, the system comprises an input unit 102 that provides lidar data and radar data to a processing unit 104. In an aspect, the processing unit 104 may comprise one or more processor(s) 202. The one or more processor(s) 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions.
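A minimal sketch of the time synchronization strategy described above, in which the data of all other sensors is referenced to the 0.08 s cycle of the front lidar, is given below. The bucketing of measurements into front-lidar frames is an assumed realization for illustration only.

```python
FRONT_LIDAR_PERIOD_S = 0.08  # front-lidar cycle time quoted in the description


def frame_index(timestamp_s: float, t0_s: float) -> int:
    """Map a measurement timestamp onto the front-lidar frame grid.

    Each measurement is assigned to the front-lidar cycle in which it arrived,
    so that fusion always runs on data grouped per front-lidar frame
    (an assumed bucketing scheme).
    """
    return int((timestamp_s - t0_s) // FRONT_LIDAR_PERIOD_S)


def group_by_frame(measurements, t0_s=0.0):
    """measurements: list of (sensor_name, timestamp_s, payload) tuples."""
    frames = {}
    for sensor, ts, payload in measurements:
        frames.setdefault(frame_index(ts, t0_s), []).append((sensor, payload))
    return frames


# Example: radar data falling into the front-lidar frames 0 and 1.
data = [("front_lidar", 0.00, "scan0"), ("left_radar", 0.03, "r0"),
        ("front_lidar", 0.08, "scan1"), ("right_radar", 0.10, "r1")]
print(group_by_frame(data))
```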
Among other capabilities, the one or more processor(s) 202 are configured to fetch and execute computer-readable instructions stored in a memory 206 of the processing unit 104. The memory 206 may store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory 206 may comprise any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.

The processing unit 104 may also comprise an interface(s) 204. The interface(s) 204 may comprise a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 204 may facilitate communication of the processing unit 104 with various devices coupled to the processing unit 104, such as the input unit 102 and the output unit 114. The interface(s) 204 may also provide a communication pathway for one or more components of the processing unit 104. Examples of such components include, but are not limited to, the processing engine(s) 208 and data 222.

The processing engine(s) 208 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) 208 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) 208 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) 208. In such examples, the processing unit 104 may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the processing unit 104 and the processing resource. In other examples, the processing engine(s) 208 may be implemented by electronic circuitry. The data 222 may comprise data that is either stored or generated as a result of functionalities implemented by any of the components of the processing engine(s) 208.

In an exemplary embodiment, the processing engine(s) 208 may comprise a ground data elimination module 210, a track initialization module 212, a track management and state estimation module 214, a map fusion module 216, a fused map and track management integration module 218 (also referred to as integration module 218 hereinafter) and other modules 220. It would be appreciated that the modules described are only exemplary and any other module or sub-module may be included as part of the system 100 or the processing unit 104. These modules may also be merged or divided into super-modules or sub-modules as may be configured.

Ground Data Elimination Module 210

In an aspect, the ground data elimination module 210 receives lidar data from the one or more lidar sensors and radar data from the one or more radar sensors and maps the received lidar data and the received radar data in corresponding one or more grid maps of the one or more lidar sensors and the one or more radar sensors.
As illustrated in FIG. 1A, the input unit 102 provides lidar data and radar data to the processing unit 104 for use by the ground data elimination module 210 as described above. Referring to FIG. 3, a grid based 360 degree surround view system is established using multiple lidar and radar sensors. In an example, lidar sensors with a 180 degree beam angle are mounted at the front and rear of the vehicle, whereas two radar sensors with a 45 degree beam angle are mounted on the sides of the vehicle. Thus, the input unit 102 has one or more lidar sensors and one or more radar sensors to sense the surroundings of the vehicle such that each of the one or more lidar sensors and one or more radar sensors senses the one or more objects in a corresponding region. The one or more lidar sensors and the one or more radar sensors are configured on the surface of the vehicle to sense the objects in corresponding one or more majorly non-overlapping regions to capture a 360 degree view around the vehicle.

In an embodiment, the ground data elimination module 210 eliminates one or more data points pertaining to ground, from each grid map, by computing a surface normal plane using at least three data points selected from the lidar data. The at least three data points are spaced at a distance less than a pre-defined threshold from each other. Further, elimination of the one or more data points pertaining to the ground is performed by computing the height of each data point from the ground and considering the target distance-height of the lidar sensor with the computed surface normal plane. Thus, ground data may be eliminated based on an integrated approach of mathematically computing the height of the individual points from the ground, considering the target distance-height of the lidar sensor together with the normal of the plane created from the data points, the surface normal plane being computed using at least three data points selected from the lidar data. In an embodiment, the ground data elimination module 210 performs environment ground data elimination based on the height of ground data points and surface normal computation, as illustrated in FIG. 4.

In the context of the present example, high-level sensor fusion is performed based on the movement of target objects. Firstly, background subtraction is performed based on objects outside the grid that are not tracked. Considering a lidar sensor mounted at the vehicle front or rear at a height H, in the right-angled triangle OP1Q:

OP1 = H / sin(θ1) ... (1)

If P1 is a ground point, the measured range R1 should be approximately equal to OP1. Similarly, for a non-ground point (for instance P2), in the right-angled triangle OQR:

OR = H / sin(θ2) ... (2)

In such a case, the measured range R2 for the point P2 is smaller than OR, that is, R2 < OR.
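The geometric ground test above, together with the surface-normal and height checks recited in the claims, can be illustrated with the following Python sketch. The mounting height, tolerances, spacing threshold and the way the tests are combined are illustrative assumptions; only the range comparison against H/sin(θ) and the use of a normal computed from three nearby points are taken from the description.

```python
import numpy as np

SENSOR_HEIGHT_M = 1.0   # assumed lidar mounting height H above the ground
RANGE_TOL = 0.05        # assumed tolerance on the H/sin(theta) range test
HEIGHT_TOL_M = 0.15     # assumed tolerance on a point's height above the ground


def is_ground_by_range(measured_range, elevation_angle_rad):
    """Range test of equations (1) and (2): a ground return measures roughly H / sin(theta)."""
    expected = SENSOR_HEIGHT_M / np.sin(abs(elevation_angle_rad))
    return measured_range >= (1.0 - RANGE_TOL) * expected


def surface_normal(p1, p2, p3):
    """Unit normal of the plane through three nearby lidar points (vehicle frame, z up)."""
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)


def is_ground_point(p1, p2, p3, max_spacing_m=0.5):
    """Combined test: the three points must be close together (the pre-defined spacing
    threshold of the claims), their plane must be near-horizontal, and the point itself
    must lie close to the ground (z roughly -H below the sensor)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    close = (np.linalg.norm(p1 - p2) < max_spacing_m and
             np.linalg.norm(p1 - p3) < max_spacing_m and
             np.linalg.norm(p2 - p3) < max_spacing_m)
    normal_vertical = abs(surface_normal(p1, p2, p3)[2]) > 0.9   # near-vertical normal
    height_ok = abs(p1[2] + SENSOR_HEIGHT_M) < HEIGHT_TOL_M      # point near ground level
    return close and normal_vertical and height_ok


if __name__ == "__main__":
    # A flat patch roughly 1 m below the sensor is classified as ground.
    a = np.array([5.0, 0.0, -1.0])
    b = np.array([5.2, 0.1, -1.0])
    c = np.array([5.1, -0.1, -0.99])
    print(is_ground_point(a, b, c))                    # True
    print(is_ground_by_range(4.9, np.deg2rad(11.5)))   # H/sin(theta) ~ 5.0 m, so True
```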

Documents

Application Documents

# Name Date
1 201921023759-STATEMENT OF UNDERTAKING (FORM 3) [14-06-2019(online)].pdf 2019-06-14
2 201921023759-REQUEST FOR EXAMINATION (FORM-18) [14-06-2019(online)].pdf 2019-06-14
3 201921023759-REQUEST FOR EARLY PUBLICATION(FORM-9) [14-06-2019(online)].pdf 2019-06-14
4 201921023759-FORM-9 [14-06-2019(online)].pdf 2019-06-14
5 201921023759-FORM 18 [14-06-2019(online)].pdf 2019-06-14
6 201921023759-FORM 1 [14-06-2019(online)].pdf 2019-06-14
7 201921023759-DRAWINGS [14-06-2019(online)].pdf 2019-06-14
8 201921023759-DECLARATION OF INVENTORSHIP (FORM 5) [14-06-2019(online)].pdf 2019-06-14
9 201921023759-COMPLETE SPECIFICATION [14-06-2019(online)].pdf 2019-06-14
10 201921023759-FORM-26 [25-06-2019(online)].pdf 2019-06-25
11 201921023759-Proof of Right (MANDATORY) [01-07-2019(online)].pdf 2019-07-01
12 Abstract1.jpg 2019-07-08
13 201921023759-FORM-26 [08-07-2019(online)].pdf 2019-07-08
14 201921023759-ORIGINAL UR 6(1A) FORM 26-170719.pdf 2019-08-16
15 201921023759-Request Letter-Correspondence [19-11-2019(online)].pdf 2019-11-19
16 201921023759-CORRESPONDENCE(IPO)-(CERTIFIED COPY OF WIPO DAS)-(19-11-2019).pdf 2019-11-19
17 201921023759-ORIGINAL UR 6(1A) FORM 1-080719.pdf 2019-12-05
18 201921023759-Form 3-100120.pdf 2020-01-11
19 201921023759-FER.pdf 2021-10-19
20 201921023759-OTHERS [26-10-2021(online)].pdf 2021-10-26
21 201921023759-Information under section 8(2) [26-10-2021(online)].pdf 2021-10-26
22 201921023759-FORM 3 [26-10-2021(online)].pdf 2021-10-26
23 201921023759-FER_SER_REPLY [26-10-2021(online)].pdf 2021-10-26
24 201921023759-DRAWING [26-10-2021(online)].pdf 2021-10-26
25 201921023759-CORRESPONDENCE [26-10-2021(online)].pdf 2021-10-26
26 201921023759-CLAIMS [26-10-2021(online)].pdf 2021-10-26
27 201921023759-ABSTRACT [26-10-2021(online)].pdf 2021-10-26
28 201921023759-FORM 3 [03-03-2022(online)].pdf 2022-03-03
29 201921023759-PatentCertificate01-08-2023.pdf 2023-08-01
30 201921023759-IntimationOfGrant01-08-2023.pdf 2023-08-01

Search Strategy

1 search_strategy_8E_24-05-2021.pdf

ERegister / Renewals

3rd: 10 Oct 2023 (from 14/06/2021 to 14/06/2022)

4th: 10 Oct 2023 (from 14/06/2022 to 14/06/2023)

5th: 10 Oct 2023 (from 14/06/2023 to 14/06/2024)

6th: 10 Oct 2023 (from 14/06/2024 to 14/06/2025)