Abstract: A method and system for guiding an autonomous vehicle by extracting the drivable road region is provided. The method involves capturing the road region ahead of the autonomous vehicle using a plurality of sensors, the road region being captured as a three-dimensional point cloud. Thereafter, a plurality of images of the road ahead of the autonomous vehicle is captured using a camera. The captured road region and the plurality of images are mapped and compared, the mapping comparing the point cloud with a plurality of pixels in the images. Based on the mapping, training data is dynamically updated to incorporate current road conditions and a drivable road region is predicted. Finally, based on the drivable region, the autonomous vehicle is controlled and guided along the road. FIG. 2
Claims:
WE CLAIM
1. A method for guiding an autonomous vehicle, the method comprising:
capturing a road region ahead of the autonomous vehicle using a plurality of sensors, wherein the road region is captured as a three-dimensional point cloud;
capturing a plurality of images of the road ahead of the autonomous vehicle using a camera;
mapping the road region to the plurality of images, wherein the three-dimensional point cloud is mapped to a plurality of pixels within the plurality of images;
predicting a drivable region of the road based on the mapping and a training data, wherein the training data comprises dynamically updated information about the road; and
controlling the autonomous vehicle based on the predicted drivable region of the road.
2. The method of claim 1, wherein at least one sensor of the plurality of sensors is a Light Detection and Ranging (LiDAR) sensor.
3. The method of claim 1, wherein at least one sensor of the plurality of sensors is an Inertial Measurement Unit (IMU).
4. The method of claim 1, wherein the three-dimensional point cloud is based on a slope of the road.
5. The method of claim 1, wherein mapping the road region to the plurality of images further comprises:
converting rectangular coordinates of the three-dimensional point cloud to spherical coordinates;
obtaining pixels per degree information from the plurality of pixels of the plurality of images; and
comparing boundaries of the spherical coordinates with the pixels per degree information to obtain a free region in the plurality of images, wherein the free region is associated with the drivable region of the road.
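A minimal sketch of the mapping recited in claim 5, assuming a camera with known horizontal and vertical fields of view; pixels per degree is taken as image resolution divided by field of view, and the function names, sign conventions, and linear angle-to-pixel model are illustrative assumptions rather than anything fixed by the claims:

```python
import numpy as np

def cartesian_to_spherical(points):
    """Convert N x 3 rectangular LiDAR points (x, y, z) into spherical
    coordinates (range, azimuth, elevation), with angles in degrees."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    azimuth = np.degrees(np.arctan2(y, x))      # horizontal angle off the x-axis
    elevation = np.degrees(np.arcsin(z / r))    # vertical angle off the x-y plane
    return np.stack([r, azimuth, elevation], axis=1)

def angles_to_pixels(azimuth, elevation, img_w, img_h, fov_h, fov_v):
    """Map spherical angles to pixel indices using the camera's
    pixels-per-degree resolution (resolution / field of view)."""
    ppd_h = img_w / fov_h                       # horizontal pixels per degree
    ppd_v = img_h / fov_v                       # vertical pixels per degree
    u = np.round(img_w / 2 - azimuth * ppd_h).astype(int)    # column index
    v = np.round(img_h / 2 - elevation * ppd_v).astype(int)  # row index
    return u, v
```

Under these assumptions, projected points that fall inside the image bounds delimit the candidate free region whose boundaries are compared against the pixels-per-degree grid.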
6. The method of claim 1, wherein predicting the drivable region of the road further comprises:
creating a Gaussian Mixture Model (GMM) based on the mapping, wherein the GMM describes at least one of a spatial color condition and a lighting condition of the road;
updating the training data dynamically using the GMM, wherein the updating is based on a deviation of the GMM from past training data; and
extracting the drivable region of the road based on the GMM and the updated training data.
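A minimal sketch of the GMM step of claim 6, using scikit-learn's GaussianMixture over the RGB values of the mapped free-region pixels; the deviation test is approximated here by mean negative log-likelihood, a stand-in for whatever deviation measure the specification intends, and all names and thresholds are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_road_gmm(free_region_rgb, n_components=3):
    """Fit a GMM over the RGB values of the mapped free-region pixels,
    modeling the road's spatial color and lighting conditions."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    return gmm.fit(free_region_rgb)             # free_region_rgb: N x 3 array

def update_training_data(gmm, past_pixels, new_pixels, deviation_thresh=5.0):
    """Grow the training set only when the current GMM deviates from the
    past training data (illustrative criterion)."""
    deviation = -gmm.score(past_pixels)         # how poorly the model explains old data
    if deviation > deviation_thresh:            # road appearance has changed
        past_pixels = np.vstack([past_pixels, new_pixels])
    return past_pixels
```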
7. The method of claim 6, wherein extracting the drivable region of the road further comprises:
predicting a free region of the road based on the updated training data;
limiting the GMM with a predetermined optimum threshold for the probability of the free region; and
identifying the drivable region based on the GMM.
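A minimal sketch of the extraction of claim 7: each image pixel is scored under the fitted GMM, and the drivable region keeps only the pixels whose likelihood clears the predetermined threshold (the threshold value shown is illustrative):

```python
def extract_drivable_mask(gmm, image, log_prob_thresh=-12.0):
    """Return a boolean H x W mask keeping only pixels whose
    log-likelihood under the road GMM clears the threshold."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3).astype(float)
    scores = gmm.score_samples(pixels)          # per-pixel log-likelihood
    return (scores > log_prob_thresh).reshape(h, w)
```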
8. The method of claim 1, wherein controlling the vehicle further comprises maneuvering the vehicle when the predicted drivable region of the road is above a threshold value, wherein the threshold value defines an available extent of the drivable region of the road through which the autonomous vehicle can drive.
9. The method of claim 1, wherein controlling the vehicle further comprises halting the vehicle when the predicted drivable region of the road is below a threshold value, wherein the threshold value defines an available extent of the drivable region of the road below which the autonomous vehicle is restricted from driving due to an obstacle.
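A minimal sketch of the control rule of claims 8 and 9, comparing the fraction of drivable pixels against a single threshold to choose between maneuvering and halting; the threshold and action labels are illustrative:

```python
def control_decision(drivable_mask, area_thresh=0.30):
    """Maneuver when enough of the view ahead is drivable; halt
    otherwise (e.g. when an obstacle blocks the road)."""
    drivable_fraction = drivable_mask.mean()    # share of drivable pixels
    return "maneuver" if drivable_fraction >= area_thresh else "halt"
```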
10. A system for guiding an autonomous vehicle, the system comprising:
a plurality of sensors configured to capture a road region ahead of the autonomous vehicle, wherein the road region is captured as a three-dimensional point cloud;
a camera for capturing a plurality of images of the road ahead of the autonomous vehicle;
a processor coupled to the plurality of sensors and the camera;
a memory communicatively coupled to the processor and having processor instructions stored thereon, which, on execution, cause the processor to:
map the road region to the plurality of images, wherein the three-dimensional point cloud is mapped to a plurality of pixels within the plurality of images; and
predict a drivable region of the road based on the mapping and a training data, wherein the training data comprises dynamically updated information about the road; and
a vehicle control system configured to control the autonomous vehicle based on the predicted drivable region of the road.
11. The system of claim 10, wherein at least one sensor of the plurality of sensors is a Light Detection and Ranging (LiDAR) sensor.
12. The system of claim 10, wherein at least one sensor of the plurality of sensors is an Inertial Measurement Unit (IMU).
13. The system of claim 10, wherein the processor instructions further cause the processor to:
convert rectangular coordinates of the three-dimensional point cloud to spherical coordinates;
obtain pixels per degree information from the plurality of pixels of the plurality of images; and
compare boundaries of the spherical coordinates with the pixels per degree information to obtain a free region in the plurality of images, wherein the free region is associated with the drivable region of the road.
14. The system of claim 10, wherein the processor instructions further cause the processor to:
create a Gaussian Mixture Model (GMM) based on the mapping, wherein the GMM describes at least one of a spatial color condition and a lighting condition of the road;
update the training data dynamically using the GMM, wherein the update is based on a deviation of the GMM from past training data; and
extract the drivable region of the road based on the GMM and the updated training data.
15. The system of claim 14, wherein the processor instructions further cause the processor to:
predict a free region of the road based on the updated training data;
limit the GMM with a predetermined optimum threshold for the probability of the free region; and
identify the drivable region based on the GMM.
16. A vehicle control device for guiding an autonomous vehicle, the vehicle control device comprising:
a plurality of sensors configured to capture a road region ahead of the autonomous vehicle, wherein the road region is captured as a three-dimensional point cloud;
a camera for capturing a plurality of images of the road ahead of the autonomous vehicle;
a processor communicatively coupled to the plurality of sensors and the camera; and
a memory communicatively coupled to the processor and having processor instructions stored thereon, which, on execution, cause the processor to:
map the road region to the plurality of images, wherein the three-dimensional point cloud is mapped to a plurality of pixels within the plurality of images;
predict a drivable region of the road based on the mapping and a training data, wherein the training data comprises dynamically updated information about the road; and
control the autonomous vehicle based on the predicted drivable region of the road.
17. The vehicle control device of claim 16, wherein the processor instructions further cause the processor to:
convert rectangular coordinates of the three-dimensional point cloud to spherical coordinates;
obtain pixels per degree information from the plurality of pixels of the plurality of images; and
compare boundaries of the spherical coordinates with the pixels per degree information to obtain a free region in the plurality of images, wherein the free region is associated with the drivable region of the road.
18. The vehicle control device of claim 16, wherein the processor instructions further cause the processor to:
create a Gaussian Mixture Model (GMM) based on the mapping, wherein the GMM describes at least one of a spatial color condition and a lighting condition of the road;
update the training data dynamically using the GMM, wherein the update is based on a deviation of the GMM from past training data; and
extract the drivable region of the road based on the GMM and the updated training data.
19. The vehicle control device of claim 18, wherein the processor instructions further cause the processor to:
predict a free region of the road based on the updated training data;
limit the GMM with a predetermined optimum threshold for the probability of the free region; and
identify the drivable region based on the GMM.
Dated this 18th day of August, 2017
Swetha SN
Of K&S Partners
Agent for the Applicant
Description:
TECHNICAL FIELD
This disclosure relates generally to guiding autonomous vehicles, and more particularly to a method, system, and device for guiding autonomous vehicles based on dynamic extraction of a road region.