Specification
Claims:
We claim:
1. A method (400) for navigating vehicles based on road conditions determined in real-time, the method (400) comprising:
receiving (402), by a navigation device (202), a first dataset comprising an image of a section of a road within a Field of View (FOV) of a camera attached to a vehicle and a second dataset associated with the road;
detecting (404), by the navigation device (202), edges and a vanishing point in the image based on the first dataset using an image processing technique;
correcting (406), by the navigation device (202), road perspectivity in the image based on at least one of the edges, the vanishing point, or the FOV;
determining (408), by the navigation device (202), surface anomalies in the road based on a set of parameters, based on the second dataset and a Time of Flight (ToF) technique, wherein the set of parameters comprises volume associated with the surface anomalies;
creating (410), by the navigation device (202), a digital elevation model for the image based on the road perspectivity, the surface anomalies, and an output of processing the image by a Convolutional Neural Network (CNN), wherein the CNN is trained to identify road conditions from a plurality of test images; and
assigning (412) in real-time, by the navigation device (202), a value from a predefined value range to each of a plurality of grids in the image based on the digital elevation model to generate a digital elevation image, wherein each value in the predefined value range represents condition of the road in an associated grid from the plurality of grids, and wherein the digital elevation image is used to navigate the vehicle.
2. The method of claim 1, further comprising:
capturing the first dataset and the second dataset using at least one of a plurality of sensors attached to the vehicle, wherein the plurality of sensors comprises at least one of a camera, a Kinect-like sensor, a distance sensor, or an Infrared (IR) sensor; and
determining a relationship between the first dataset and the second dataset by mapping Two Dimensional (2D) points in a 2D image with Three Dimensional (3D) points in the real world.
3. The method (400) of claim 1, wherein detecting the vanishing point comprises extending (504a) at least two of the edges till an intersection point of the at least two of the edges is identified.
4. The method of claim 1, further comprising generating (804) a topographic map for the road based on the value assigned to each of the plurality of grids, wherein the topographic map represents the road conditions.
5. The method (400) of claim 1, further comprising:
navigating the vehicle based on the digital elevation image comprising the plurality of grids having values assigned from the predefined value range, by:
determining (1102) a plurality of vehicle attributes associated with the vehicle, wherein the plurality of vehicle attributes comprises wheel width of the vehicle, ground clearance height of the vehicle, a center of gravity (CG) of the vehicle, wheel base of the vehicle, and number of wheels of the vehicle;
identifying (1104) a potential path from a plurality of paths for navigating the vehicle on the road based on the value assigned to each of the plurality of grids in the digital elevation image and the plurality of vehicle attributes associated with the vehicle;
determining (1106) a confidence score associated with the potential path based on a condition of the potential path and a vehicle action while navigating the potential path; and
generating (1108) at least one warning when the confidence score associated with the potential path is less than a predefined threshold value.
6. A system (200) for navigating vehicles based on road conditions determined in real-time, the system (200) comprising:
a processor (206); and
a memory (204) communicatively coupled to the processor (206), wherein the memory (204) stores processor instructions, which, on execution, cause the processor (206) to:
receive (402) a first dataset comprising an image of a section of a road within a Field of View (FOV) of a camera attached to a vehicle and a second dataset associated with the road;
detect (404) edges and a vanishing point in the image based on the first dataset using an image processing technique;
correct (406) road perspectivity in the image based on at least one of the edges, the vanishing point, or the FOV;
determine (408) surface anomalies in the road based on a set of parameters, based on the second dataset and a Time of Flight (ToF) technique, wherein the set of parameters comprises volume associated with the surface anomalies;
create (410) a digital elevation model for the image based on the road perspectivity, the surface anomalies, and an output of processing the image by a Convolutional Neural Network (CNN), wherein the CNN is trained to identify road conditions from a plurality of test images; and
assign (412), in real-time, a value from a predefined value range to each of a plurality of grids in the image based on the digital elevation model to generate a digital elevation image, wherein each value in the predefined value range represents condition of the road in an associated grid from the plurality of grids, and wherein the digital elevation image is used to navigate the vehicle.
7. The system (200) of claim 6, wherein the processor instructions further cause the processor (206) to:
capture the first dataset and the second dataset using at least one of a plurality of sensors attached to the vehicle, wherein the plurality of sensors comprises at least one of a camera, a Kinect-like sensor, a distance sensor, or an Infrared (IR) sensor; and
determine a relationship between the first dataset and the second dataset by mapping Two Dimensional (2D) points in a 2D image with Three Dimensional (3D) points in the real world.
8. The system (200) of claim 6, wherein the processor instructions further cause the processor (206) to detect the vanishing point by extending (504a) at least two of the edges till an intersection point of the at least two of the edges is identified.
9. The system (200) of claim 6, wherein the processor instructions further cause the processor (206) to generate (804) a topographic map for the road based on the value assigned to each of the plurality of grids, wherein the topographic map represents the road conditions.
10. The system (200) of claim 6, wherein the processor instructions further cause the processor (206) to:
navigate the vehicle based on the digital elevation image comprising the plurality of grids having values assigned from the predefined value range, by:
determining (1102) a plurality of vehicle attributes associated with the vehicle, wherein the plurality of vehicle attributes comprises wheel width of the vehicle, ground clearance height of the vehicle, a center of gravity (CG) of the vehicle, wheel base of the vehicle, and number of wheels of the vehicle;
identifying (1104) a potential path from a plurality of paths for navigating the vehicle on the road based on the value assigned to each of the plurality of grids in the digital elevation image and the plurality of vehicle attributes associated with the vehicle;
determining (1106) a confidence score associated with the potential path based on a condition of the potential path and a vehicle action while navigating the potential path; and
generating (1108) at least one warning when the confidence score associated with the potential path is less than a predefined threshold value.
Dated this 6th Day of October, 2020
Swetha S N
IN/PA-2123
Of K & S Partners
Agent for the Applicant
Description:
FORM 2
THE PATENTS ACT 1970
[39 OF 1970]
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[See section 10; Rule 13]
Title: METHOD AND SYSTEM FOR NAVIGATING VEHICLES BASED ON ROAD CONDITIONS DETERMINED IN REAL-TIME
Name and Address of the Applicant: WIPRO LIMITED, Doddakannelli, Sarjapur Road, Bangalore 560035, Karnataka, India
Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.
DESCRIPTION
Technical Field
The present invention relates to road condition detection systems. In particular, the present invention relates to a method and system for navigating vehicles based on road conditions determined in real-time.
Background
Over time, roads may develop numerous aberrations, for example, potholes and humps, which may damage various vehicle components or the vehicle as a whole when the vehicle encounters them. These aberrations may be caused by heavy rains, oil spills, and heavy vehicle movement. In addition to the vehicle getting damaged, the vehicle may also end up colliding with other vehicles while avoiding a pothole or a hump. This may lead to injuries to drivers, pedestrians, or passengers. Further, roads usually include speed breakers that may have inconsistent dimensions. Traversing these speed breakers, which are often not clearly visible, may also lead to accidents and damage to vehicles.
Moreover, in poor light conditions or at night, these problems may be further aggravated. Such road conditions (for example, potholes) may also pose aggravated risks during rains, when a road gets filled with water. In such situations, it may become difficult for a driver or even an autonomous vehicle to determine the depth of potholes or otherwise assess the condition of the road, and thus to accurately and efficiently determine a desired driving path or speed for the vehicle.
Various conventional systems and methods are available for navigating vehicles based on road conditions. However, the conventional systems and methods do not determine the vehicle path in real-time, while the vehicle is moving, based on current road conditions.
SUMMARY
In one embodiment, a method for navigating vehicles based on road conditions determined in real-time is disclosed. The method may include receiving a first dataset including an image of a section of a road within a Field of View (FOV) of a camera attached to a vehicle and a second dataset associated with the road. The method may further include detecting edges and a vanishing point in the image based on the first dataset using an image processing technique. The method may further include correcting road perspectivity in the image based on at least one of the edges, the vanishing point, and the FOV. The method may further include determining surface anomalies in the road based on a set of parameters, based on the second dataset and a Time of Flight (ToF) technique. It should be noted that the set of parameters may include volume associated with the surface anomalies. The method may further include creating a digital elevation model for the image based on the road perspectivity, the surface anomalies, and an output of processing the image by a Convolutional Neural Network (CNN). The CNN may be trained to identify road conditions from a plurality of test images. The method may further include assigning, in real-time, a value from a predefined value range to each of a plurality of grids in the image based on the digital elevation model to generate a digital elevation image. It should be noted that each value in the predefined value range may represent the condition of the road in an associated grid from the plurality of grids, and the digital elevation image may be used to navigate the vehicle.
In another embodiment, a system for navigating vehicles based on road conditions determined in real-time is disclosed. The system includes a processor and a memory communicatively coupled to the processor, wherein the memory stores processor instructions, which, on execution, cause the processor to receive a first dataset including an image of a section of a road within a FOV of a camera attached to a vehicle and a second dataset associated with the road. The processor instructions further cause the processor to detect edges and a vanishing point in the image based on the first dataset using an image processing technique. The processor instructions further cause the processor to correct road perspectivity in the image based on at least one of the edges, the vanishing point, and the FOV. The processor instructions further cause the processor to determine surface anomalies in the road based on a set of parameters, based on the second dataset and a ToF technique. The set of parameters may include volume associated with the surface anomalies. The processor instructions further cause the processor to create a digital elevation model for the image based on the road perspectivity, the surface anomalies, and an output of processing the image by a CNN. The CNN may be trained to identify road conditions from a plurality of test images. The processor instructions further cause the processor to assign, in real-time, a value from a predefined value range to each of a plurality of grids in the image based on the digital elevation model to generate a digital elevation image. It should be noted that each value in the predefined value range may represent the condition of the road in an associated grid from the plurality of grids, and the digital elevation image may be used to navigate the vehicle.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles.
FIG. 1 is an exemplary environment in which various embodiments may be employed.
FIG. 2 is a block diagram of a system for navigating vehicles based on road conditions determined in real-time, in accordance with an embodiment.
FIG. 3 is a functional block diagram of various modules within a memory of a navigation device configured to navigate vehicles based on road conditions determined in real-time, in accordance with an embodiment.
FIG. 4 is a flowchart of a method for navigating vehicles based on road conditions determined in real-time, in accordance with an embodiment.
FIG. 5 is a flowchart of a method for detecting edges and a vanishing point in an image, in accordance with an embodiment.
FIG. 6 is an exemplary image of a section of a road within a Field of View (FOV) of a camera and detected edges of the road, in accordance with an exemplary embodiment.
FIGs. 7A-7C illustrate road perspectivity correction for an image captured by a camera installed on a vehicle, in accordance with an exemplary embodiment.
FIG. 8 is a flowchart of a method for generating topographic maps for a road, in accordance with an embodiment.
FIGs. 9A-9C depict digital elevation images and a topographic map for various road sections of a road, in accordance with an exemplary embodiment.
FIG. 10 depicts determination of road conditions in real-time, in accordance with an embodiment.
FIG. 11 is a flowchart of a method for navigating vehicles based on road conditions determined in real-time, in accordance with another embodiment.
FIGs. 12A-12D are exemplary representations for determining a plurality of vehicle attributes associated with a vehicle, in accordance with an exemplary embodiment.
FIGs. 13A-13D are exemplary representations for identifying a potential path from a plurality of paths for navigating a vehicle on the road, in accordance with an embodiment.
DETAILED DESCRIPTION
Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope and spirit being indicated by the following claims. Additional illustrative embodiments are listed below.
An exemplary environment 100 in which various embodiments may be employed is illustrated in FIG. 1. The environment 100 depicts a section of a road. The environment 100 may further include vehicles 102 and 104 moving on the road in the same direction. In some embodiments, there may also be other vehicles moving on the same road in the same or opposite direction as the vehicles 102 and 104. Further, the vehicles 102 and 104 may be user-driven vehicles or autonomous vehicles. The section of the road may include surface anomalies or aberrations, for example, potholes 106a and 106b, and a speed breaker 108. Other sections of the road may also include different surface irregularities, such as a hump, an object, debris, or water.
Consider a scenario where the vehicle 102 is moving on a straight path at a high speed. The ride quality of the vehicle 102 on the road may be affected by the pothole 106a, as it may be difficult for the vehicle 102 or for the driver of the vehicle 102 to determine the depth of the pothole 106a, especially when it is filled with water. The vehicle 102 may either have to abruptly change its path or, in case the vehicle 102 is moving at a high speed, may have to cross the pothole 106a. In the latter case, the pothole 106a may have an impact on the vehicle 102. For example, depending on the depth of the pothole 106a and the speed of the vehicle 102, the pothole 106a may damage parts of the vehicle 102 when the vehicle 102 hits it. In the former case, as a result of the abrupt change in path, the vehicle 102 may end up colliding with another vehicle, or the passengers within the vehicle 102 may get hurt.
Further, consider another scenario where the vehicle 104 is moving at a high speed on the road towards the speed breaker 108. Depending on the surface and height of the speed breaker 108, the vehicle 104 or the passengers inside the vehicle 104 may be adversely impacted. To avoid this situation, the vehicle 104 may either need to abruptly decrease its speed or may have to come to a grinding halt.
The above-discussed situations may be avoided by employing a navigation device in the vehicles 102 and 104. The navigation device may determine the road condition and may accordingly suggest a path and speed for navigating the vehicles 102 and 104. The navigation device is further explained in detail in conjunction with FIG. 2 to FIG. 13.
Referring now to FIG. 2, a block diagram of a system 200 for navigating vehicles based on road conditions determined in real-time is illustrated, in accordance with an embodiment. The system 200 may include a navigation device 202 that may determine the road condition and may accordingly provide recommendations to a vehicle (for example, the vehicles 102 and 104). In particular, the navigation device 202 may recommend a potential path and a speed for navigating the vehicle on the potential path. The navigation device 202 may be integrated within the vehicle or may be located remotely from the vehicle. Examples of the navigation device 202 may include, but are not limited to, a desktop, a laptop, a notebook, a netbook, a tablet, a smartphone, a mobile phone, an application server, or the like.
The navigation device 202 may include a memory 204, a processor 206, and a display 208. The memory 204 and the processor 206 of the navigation device 202 may perform various functions including receiving an image of a section of a road, detecting edges and a vanishing point in the image, correcting road perspectivity in the image, determining surface anomalies in the road, creating a digital elevation model and topographic maps, identifying a potential path, providing recommendations, and generating warnings. The memory 204 may store instructions that, when executed by the processor 206, may cause the processor 206 to navigate the vehicle 102 based on road conditions determined in real-time. The memory 204 may be a non-volatile memory or a volatile memory. Examples of non-volatile memory may include, but are not limited to, a flash memory, a Read Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), and an Electrically EPROM (EEPROM) memory. Examples of volatile memory may include, but are not limited to, Dynamic Random Access Memory (DRAM) and Static Random-Access Memory (SRAM).
The display 208 may further include a user interface 210. A user or an administrator may interact with the navigation device 202, and vice versa, through the user interface 210. By way of an example, the display 208 may be used to display results of the analysis performed by the navigation device 202 (i.e., a potential path and speed for navigating the vehicle) to the user. By way of another example, the user interface 210 may be used by the user to provide inputs to the navigation device 202.
As will be described in greater detail in conjunction with FIG. 2 to FIG. 13, in order to determine road conditions for navigating the vehicle 102, the navigation device 202 may extract a plurality of vehicle attributes from a server 212, which may further include a database 214.
An image of a section of a road and a set of parameters associated with surface anomalies may also be received by the navigation device 202 from one or more of a plurality of sensors 216 placed at various locations within the vehicle. Examples of the sensors 216 may include, but are not limited to, a camera, a Kinect-like sensor, a distance sensor, an Infrared (IR) sensor, and the like. The plurality of sensors 216 may be communicatively coupled to the navigation device 202 via a network 218. The network 218 may be a wired or a wireless network, and examples may include, but are not limited to, the Internet, Bluetooth, Near Field Communication (NFC), Wireless Local Area Network (WLAN), Wi-Fi, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), and General Packet Radio Service (GPRS).
Referring now to FIG. 3, a block diagram of various modules within the memory 204 of the navigation device 202 configured to navigate vehicles based on road conditions determined in real-time is illustrated, in accordance with an embodiment. The memory 204 may include various modules for performing multiple operations in order to navigate the vehicles. These modules may determine road conditions in real-time and subsequently provide a potential path to navigate a vehicle based on the determined road conditions. Initially, the navigation device 202 may receive a camera image 302 (i.e., an image of a section of a road within the Field of View (FOV) of a camera, captured by the camera) and sensor data 304. The modules within the memory 204 for determining the road conditions may include a road edge detection module 306, a perspectivity correction module 308, a surface anomaly detection module 310, a digital elevation model creating module 312, and a topographic map generation module 314.
The road edge detection module 306 may receive the camera image 302, which is the image of a section of the road. Further, the road edge detection module 306 may be configured to detect edges and a vanishing point in the camera image 302. To this end, the road edge detection module 306 may use an image processing technique, for example, but not limited to, Local Adaptive Soft Voting (LASV). The vanishing point is the intersection point of the road edges. In other words, when two lines in the image converge to a single point, that single point may be referred to as the vanishing point. Therefore, the road edge detection module 306 may first detect edges in the camera image 302 and may then extend the edges till an intersection point of the edges is identified. Further, the road edge detection module 306 may be communicatively connected to the perspectivity correction module 308. This is further explained in detail in conjunction with FIG. 5 and FIG. 6.
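By way of a non-limiting illustration, the sketch below detects two dominant road edges and extends them to their intersection to locate the vanishing point. It substitutes OpenCV's Canny and probabilistic Hough transforms for the LASV technique named above; the function name and thresholds are assumptions for illustration only.

```python
# A minimal sketch of edge and vanishing-point detection, assuming an
# OpenCV pipeline (Canny + Hough) rather than the LASV technique.
import cv2
import numpy as np

def detect_vanishing_point(image_bgr):
    """Detect two dominant road edges and return their intersection point."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None or len(lines) < 2:
        return None
    # Keep the two longest line segments as candidate road edges.
    lines = sorted(lines[:, 0],
                   key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]),
                   reverse=True)[:2]
    (x1, y1, x2, y2), (x3, y3, x4, y4) = lines
    # Extend both segments to full lines and solve for their intersection.
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-6:  # parallel edges: no vanishing point
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return int(px), int(py)
```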
The perspectivity correction module 308 may be configured to receive the results generated by the road edge detection module 306. Further, the perspectivity correction module 308 may correct road perspectivity in the camera image 302 based on at least one of the edges, the vanishing point, and the FOV. The perspectivity correction module 308 may correct the road perspectivity vertically as well as horizontally. This is further explained in detail in conjunction with FIGs. 7A to 7C.
The surface anomaly detection module 310 may be configured to receive the sensor data 304 and may determine surface anomalies or irregularities in the road based on an analysis of the sensor data 304. In an embodiment, the surface anomalies or irregularities in the road may be determined based on a set of parameters, such as, but not limited to, the volume associated with the surface anomalies. To determine the surface anomalies, the surface anomaly detection module 310 may use a Time of Flight (ToF) technique. This is further explained in detail in conjunction with FIG. 4.
The digital elevation model creating module 312 may be operatively coupled to the perspectivity correction module 308 and the surface anomaly detection module 310. The digital elevation model creating module 312 may create a digital elevation model for the image based on the road perspectivity, the surface anomalies, and an output of processing the camera image 302 by a Convolutional Neural Network (CNN). This is further explained in detail in conjunction with FIG. 10.
The topographic map generation module 314 may receive the digital elevation model from the digital elevation model creating module 312. Further, in some embodiments, a digital elevation image may be generated based on the digital elevation model. To generate the digital elevation image, the topographic map generation module 314 may assign a value from a predefined value range to each of a plurality of grids in the camera image 302. Each value in the predefined value range may represent a specific condition of the road in an associated grid from the plurality of grids. A topographic map for the road may be generated by the topographic map generation module 314 based on the value assigned to each of the plurality of grids in the digital elevation image. The topographic map may thus represent road conditions and may be used for safely navigating a vehicle on the road.
Further, the memory 204 of the navigation device 202 may receive vehicle information 316 in order to identify a potential path for navigation of the vehicle. To this end, the memory 204 may further include a path selection module 318, a confidence score determining module 320, and a warning generator 322. The vehicle information 316 may be received by the path selection module 318, which may further include an attributes determining module 318a. In order to find the potential path, the attributes determining module 318a may determine a plurality of vehicle attributes associated with the vehicle using the vehicle information 316. The plurality of vehicle attributes may include, but is not limited to, wheel width of the vehicle, ground clearance height of the vehicle, a Center of Gravity (CG) of the vehicle, wheelbase of the vehicle, and number of wheels of the vehicle. In short, the path selection module 318 may identify a potential path from a plurality of paths for navigating the vehicle on the road based on the topographic map and the plurality of vehicle attributes associated with the vehicle, as illustrated in the sketch below.
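The following is a minimal, hypothetical sketch of how the grid values and the vehicle attributes might jointly screen candidate paths; the class, function names, and metres-per-grid-unit scale are assumptions, not values taken from this specification.

```python
# A hedged sketch of path screening against the digital elevation image:
# a candidate path (a sequence of grid indices) is kept only if every grid
# it crosses stays within limits derived from the vehicle attributes.
from dataclasses import dataclass

@dataclass
class VehicleAttributes:
    wheel_width_m: float
    ground_clearance_m: float
    wheel_base_m: float
    num_wheels: int

def path_is_feasible(path_grids, elevation_image, attrs, metres_per_unit=0.02):
    """Return True if no grid on the path exceeds the vehicle's limits.

    elevation_image maps (row, col) -> value in the -10..+10 range, where
    negative values encode depth and positive values encode height. This
    sketch checks only ground clearance; a fuller check would also trace
    each wheel track using the wheel width and wheel base.
    """
    max_units = attrs.ground_clearance_m / metres_per_unit
    for row, col in path_grids:
        value = elevation_image[row][col]
        # A bump taller than the ground clearance, or a pit deeper than it,
        # makes the grid impassable for this vehicle.
        if abs(value) > max_units:
            return False
    return True
```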
The warning generator 322 may be communicatively coupled to the path selection module 318 via the confidence score determining module 320. The confidence score determining module 320 may determine a confidence score associated with the potential path. The confidence score may be determined based on a vehicle action while navigating the potential path and condition of the potential path. Further, a warning may be generated by the warning generator 322, when the confidence score associated with the potential path is less than a predefined threshold value. This is further explained in detail in conjunction with FIG. 11.
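An illustrative scoring rule is sketched below, assuming the confidence score combines the worst grid condition along the potential path with a penalty for the vehicle action needed to follow it; the weights and the threshold are placeholders, not values from this specification.

```python
# Illustrative confidence scoring and warning generation for a potential path.
def path_confidence(path_values, action_penalty, worst_value=10.0):
    # Path condition: 1.0 for a perfectly flat path, falling as the worst
    # grid value (depth or height) on the path approaches the range limit.
    condition = 1.0 - max(abs(v) for v in path_values) / worst_value
    return max(0.0, condition - action_penalty)

def maybe_warn(score, threshold=0.5):
    """Generate a warning when the confidence score is below the threshold."""
    if score < threshold:
        print(f"WARNING: low path confidence ({score:.2f} < {threshold})")

maybe_warn(path_confidence([0, -2, -7, 1], action_penalty=0.2))  # warns: 0.10
```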
Referring now to FIG. 4, a method for navigating vehicles based on road conditions determined in real-time is depicted via a flowchart 400, in accordance with an embodiment. Each step of the flowchart 400 may be performed by one or more of the modules 306-314 within the memory 204. At step 402, a first dataset and a second dataset may be received. The first dataset may include an image of a section of a road within a Field of View (FOV) of a camera attached to a vehicle, and the second dataset may be associated with the road. At step 404, edges and a vanishing point in the image may be detected based on the first dataset. It may be noted that an image processing technique may be used to detect the edges and the vanishing point. This is further explained in conjunction with FIG. 5 and FIG. 6.
At step 406, road perspectivity may be corrected in the image. It may be noted that at least one of the edges, the vanishing point, and the FOV may be considered to correct the road perspectivity. Road perspectivity correction is further explained in conjunction with FIGs. 7A-7C. At step 408, surface anomalies in the road may be determined. The surface anomalies may also correspond to surface irregularities. To determine the surface anomalies, a set of parameters and the second dataset may be considered. Further, the ToF technique may be used to determine the surface irregularities. It should be noted that the set of parameters includes volume associated with the surface anomalies.
In some embodiments, distance information may be captured by a sensor using the ToF technique. The distance may be calculated based on the time taken by infrared light irradiated from the sensor to reflect and return to the sensor. The sensor may be selected from, but may not be limited to, a Kinect-like sensor, a distance sensor, and an Infrared (IR) sensor. The sensor may convert the irradiated infrared light into a pulse wave. Further, coordinates of a point of reflection of the infrared light may be obtained to generate a Three-Dimensional (3-D) point cloud, which is a point cloud in a 3-D rectangular coordinate system, such as (x, y, z). Based on the 3-D point cloud, it may be possible to detect surfaces and people on the road. Moreover, the 3-D point cloud may be colored, and thus image processing may be performed on colored 3-D point clouds. Open source libraries may be used for the image processing.
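A minimal sketch of the ToF computation described above is given below: the distance follows from the round-trip time of the emitted infrared pulse, and each return yields one (x, y, z) sample of the 3-D point cloud. The function names and the angular parametrization are assumptions rather than a specific device API.

```python
# ToF ranging sketch: distance = c * t / 2, since the pulse travels out and back.
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_time_s):
    """Distance to the reflecting point from the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

def tof_point(round_trip_time_s, azimuth_rad, elevation_rad):
    """Convert one ToF return into an (x, y, z) point-cloud sample."""
    d = tof_distance(round_trip_time_s)
    x = d * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = d * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = d * math.sin(elevation_rad)
    return x, y, z
```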
At step 410, a digital elevation model may be created for the image based on the road perspectivity, the surface anomalies, and an output of processing the image through a Convolutional Neural Network (CNN), which is a deep learning network. It should be noted that the CNN may be trained to identify road conditions from a plurality of test images. By way of an example, the CNN may be trained to classify potholes and non-potholes by analyzing an image. In some embodiments, various pre-trained neural network models, such as, but not limited to, ResNet50, ResNet152, ResNet50V2, ResNet152V2, InceptionV3, InceptionResNetV2, and DenseNet201 may also be used. In some embodiments, in order to create the digital elevation model, a relationship between the first dataset and the second dataset may be determined. To determine the relationship, Two Dimensional (2D) points in a 2D image may be mapped with 3D points in the real world.
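By way of illustration, a pothole/non-pothole classifier of the kind described above may be built by transfer learning on one of the named pre-trained backbones. The sketch below uses Keras with a frozen ResNet50; the head architecture and hyperparameters are placeholders.

```python
# A hedged sketch of a pothole vs. non-pothole classifier using transfer
# learning on a pre-trained ResNet50 backbone.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pothole_classifier(input_shape=(224, 224, 3)):
    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the backbone; train only the new head
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # pothole vs. non-pothole
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```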
Thereafter, at step 412, a value may be assigned to each of a plurality of grids in the image from a predefined value range. The predefined value range, for example, may be from -10 to +10. In this case, the negative sign may indicate depth, while the positive sign may indicate height. Thus, each value in the predefined value range represents the condition of the road in an associated grid from the plurality of grids. The value may be assigned based on the digital elevation model in order to generate a digital elevation image. Further, the generated digital elevation image may be used to navigate the vehicle. This is further explained in detail in conjunction with FIG. 8 and FIG. 9.
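An illustrative mapping from per-grid elevation values (for example, in metres, from the digital elevation model) to the -10 to +10 range is sketched below; the clipping bound is an assumption.

```python
# Scale per-grid elevations to integers in [-10, 10]; sign encodes depth/height.
import numpy as np

def elevation_to_grid_values(grid_elevations_m, max_abs_elevation_m=0.25):
    scaled = grid_elevations_m / max_abs_elevation_m * 10.0
    return np.clip(np.rint(scaled), -10, 10).astype(int)

# Example: a 0.10 m deep pothole maps to -4; a 0.05 m bump maps to +2.
print(elevation_to_grid_values(np.array([-0.10, 0.0, 0.05])))  # [-4  0  2]
```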
Each of the plurality of grids may be a combination of pixels. However, the number of pixels in the grids may gradually decrease from the nearest to the farthest row of the FOV, as the perspectivity correction is performed. In some embodiments, the minimum grid size may be considered as 3×3 pixels for the farthest row, i.e., the 0th row. The number of pixels for the rest of the rows may gradually increase, such that each grid is square in shape.
Referring now to FIG. 5, a method for detecting edges and a vanishing point in an image is depicted via a flowchart 500, in accordance with an embodiment. At step 502, a first dataset that includes an image of a section of a road within a FOV of a camera attached to a vehicle and a second dataset associated with the road may be received. At step 504, the edges and the vanishing point in the image may be detected based on the first dataset. An image processing technique may be used to detect the edges and the vanishing point. The step 504 may further include a step 504a, where at least two of the edges may be extended till an intersection point of the at least two of the edges is identified.
Referring now to FIG. 6, an exemplary image 600 of a section of a road within a Field of View (FOV) of a camera and detected edges of the road is illustrated, in accordance with an exemplary embodiment. For an unmarked road, edge detection may be a way to determine the road area and the structure of the road. Therefore, the navigation device 202 may identify edges 602a and 602b in the image 600. The image 600 may be captured using the camera attached to a vehicle. The image 600 may be processed using an image processing technique to identify the edges 602a and 602b and a vanishing point 604. As illustrated in FIG. 6, the edges 602a and 602b may be further extended till the vanishing point 604 is detected. In some embodiments, the LASV technique may be used to process the image 600. In some other embodiments, other standard image processing techniques may be used. As is clearly visible in FIG. 6, the road width in the image 600 is not the same throughout the length of the road section. In other words, the edges 602a and 602b of the road are not parallel, as they cross at a single point, i.e., the vanishing point 604. However, in reality, the road width may be almost the same throughout, and the edges may be parallel to each other. Thus, in order to get a real or actual view, road perspectivity correction may be performed, which is further explained in conjunction with FIGs. 7A to 7C.
Referring now to FIGs. 7A-7C, road perspectivity correction for an image captured by a camera installed on a vehicle is illustrated, in accordance with an exemplary embodiment. FIG. 7A represents the image of a road section as captured by the camera. A portion of the area of the road section, represented by 'ABCD' (or the FOV), may be considered for road perspectivity correction, such that A, B, C, and D are image coordinates. In the real world, the width 'AD' is equal to the width 'BC'. However, as is apparent from FIG. 7A, in the image 600 the widths 'AD' (a section 702) and 'BC' (a section 704) are different. The section 702 represents the farthest width ('AD') of the road for the FOV, and the section 704 represents the nearest width ('BC') of the road for the FOV. The magnitude of this difference may depend upon the angle of the camera with respect to the road surface (or the camera down angle). Perspectivity per row, for the portion 'ABCD' of the image, may be calculated as per equation (1), given below. Since the camera angle for a given vehicle would be fixed, this would be a one-time calculation for every vehicle. Additionally, perspectivity per row may depend upon camera height, camera down angle, and camera resolution:
Perspectivity per Row = f(Length of Road, Width of Road, Image Coordinates A, B, C, D) … (1)
FIG. 7B represents vertical perspectivity correction. The road perspectivity correction may be performed by interpolating pixels in each row, keeping a fixed distance 'BC' throughout. Therefore, 'A' is shifted to 'A1', and 'D' is shifted to 'D1', as depicted in FIG. 7B. Here, the pixels may be inserted in every row from bottom to top in order to form a rectangle. Hence, the road area becomes 'A1BCD1'.
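A compact way to realize this trapezoid-to-rectangle correction is a homography warp. The sketch below uses OpenCV's perspective-transform utilities in place of the row-wise pixel interpolation described above; the corner ordering and output sizes are illustrative assumptions.

```python
# Warp the trapezoid ABCD (A, D far corners; B, C near corners) into the
# rectangle A1BCD1 so that road width is constant across rows.
import cv2
import numpy as np

def correct_perspectivity(image, A, B, C, D):
    """A: top-left, B: bottom-left, C: bottom-right, D: top-right (x, y)."""
    width = int(np.hypot(C[0] - B[0], C[1] - B[1]))  # near-edge width |BC|
    height = image.shape[0]
    src = np.float32([A, B, C, D])
    dst = np.float32([[0, 0], [0, height], [width, height], [width, 0]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (width, height))
```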
In addition to correcting the vertical perspectivity of the image, horizontal perspectivity correction may also be performed. This is depicted in FIG. 7C. For horizontal perspectivity correction, 'EFGH' is considered as the FOV. In FIG. 7C, 'A' is the source of light in terms of the position of the camera, 'AB' is the camera height, and 'BC0' is a blind zone for the camera. Further, 'a' represents the down angle of the camera, which may be changed according to the speed of the vehicle, as the down angle 'a' controls the FOV. It should be noted that the height and position of the camera are constant, and a user may be required to input the four points of the FOV, which are E, F, G, and H in the current case. In an exemplary embodiment, the road perspectivity correction may be performed using equations (2), (3), (4), (5), and (6) given below:
∠AC_WB = tan⁻¹(AB / BC_W) … (2)
∠C_0AC_W = (90° − ∠AC_WB) − a = (90° − tan⁻¹(AB / BC_W)) − a … (3)
∠AngleDistribution =
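A worked illustration of equations (2) and (3) is given below, assuming camera height AB and ground distance BC_W to the far edge of the FOV; the numeric values are examples only.

```python
# Worked sketch of equations (2) and (3) from the geometry in FIG. 7C.
import math

def angle_ACwB(camera_height_m, ground_distance_m):
    """Equation (2): angle at C_W between the ray AC_W and the road surface."""
    return math.degrees(math.atan(camera_height_m / ground_distance_m))

def angle_C0ACw(camera_height_m, ground_distance_m, down_angle_deg):
    """Equation (3): angle between the blind-zone ray AC_0 and the ray AC_W."""
    return (90.0 - angle_ACwB(camera_height_m, ground_distance_m)) - down_angle_deg

# Example: camera 1.2 m above the road, FOV far edge 20 m ahead, 10 deg down angle.
print(round(angle_ACwB(1.2, 20.0), 2))         # ~3.43 degrees
print(round(angle_C0ACw(1.2, 20.0, 10.0), 2))  # ~76.57 degrees
```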