System And Method For Detecting Objects And Calculating Distance Between A Vehicle And The Objects

Abstract: A method and system for computing a distance between a vehicle and an object is disclosed. The system may capture a left image and a right image by using a stereo image capturing unit. The system may further detect an object present in the left image and the right image. The system may further determine Regions of Interest (RoIs) in the left image and the right image. The system may further compute a left centroid and a right centroid of the RoIs corresponding to the left image and the right image respectively. The system may further generate a left sub-pixel image and a right sub-pixel image by interpolating the left centroid and the right centroid respectively. The system may further process the left sub-pixel image and the right sub-pixel image. The system may further compute a distance between the vehicle and the object.


Patent Information

Filing Date: 07 April 2014
Publication Number: 41/2015
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Grant Date: 2021-10-29

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai 400021, Maharashtra, India

Inventors

1. C R, Manoj
Tata Consultancy Services Limited, Salarpuria G. R. Tech Park, JAL Block, Mahadevapura, K R Puram, Bangalore 560 066, Karnataka, India
2. Udhbhav
Tata Consultancy Services Limited, Salarpuria G. R. Tech Park, JAL Block, Mahadevapura, K R Puram, Bangalore 560 066, Karnataka, India
3. PATIL, Prabhudev
Tata Consultancy Services Limited, Salarpuria G. R. Tech Park, JAL Block, Mahadevapura, K R Puram, Bangalore 560 066, Karnataka, India

Specification

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
A SYSTEM AND METHOD FOR COMPUTING DISTANCE BETWEEN A VEHICLE AND AN OBJECT

APPLICANT:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India

The following specification describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present application claims priority to Indian Provisional Patent Application No. 1297/MUM/2014, filed on April 7, 2014, the entirety of which is hereby incorporated by reference.

TECHNICAL FIELD
[002] The present subject matter described herein relates, in general, to a system and a method for image processing, and more particularly to a system and a method for computing the distance between a vehicle and an object appearing in the way of the vehicle using image processing.

BACKGROUND
[003] In the digital era, various digital systems have been integrated or deployed in vehicles in order to assist the driver in different driving conditions and thereby ensure driving safety. Examples of the digital systems may include, but are not limited to, a stereo camera and a sensor. One of the factors that may be responsible for road accidents is a reduced range of vision. In order to prevent such road accidents, the digital systems may constantly monitor the vehicle surroundings as well as the driving behavior of the driver to detect potentially dangerous situations at an early stage. In critical driving conditions, such digital systems warn and actively support the driver and, if necessary, intervene automatically in an effort to avoid a collision or to mitigate the consequences of the collision or an accident.
[004] One of the safety features employed in the vehicle for ensuring the safety of the driver may include a conventional stereo camera based system that captures the object located within the vicinity of the vehicle and thereby calculates the distance between the object and the vehicle, avoiding the collision or the accident by notifying the driver. However, the conventional stereo camera based system is only capable of detecting objects located at a short distance from the vehicle, and hence may have a limitation in finding a proper disparity corresponding to an object located at a larger distance from the current location of the vehicle.
SUMMARY
[005] Before the present systems and methods are described, it is to be understood that this application is not limited to the particular systems and methodologies described, as there can be multiple possible embodiments which are not expressly illustrated in the present disclosure. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope of the present application. This summary is provided to introduce concepts related to computing a distance between a vehicle and an object appearing in the way of the vehicle, and the concepts are further described below in the detailed description. This summary is not intended to identify essential features of the disclosure, nor is it intended for use in determining or limiting the scope of the disclosure.
[006] In one implementation, a system for computing the distance between a vehicle and an object appearing in the way of the vehicle is disclosed. In one aspect, the system may comprise a processor and a memory coupled to the processor. The processor may execute a plurality of modules present in the memory. The plurality of modules may comprise an image capturing module, a Region of Interest (RoI) determination module, an interpolation module, a matching module and a distance computation module. The image capturing module may capture a left image and a right image by using a stereo image capturing unit. The RoI determination module may detect an object present in the left image and the right image based on a pre-defined feature descriptor table. The pre-defined feature descriptor table may comprise a plurality of objects, a plurality of window sizes of the plurality of objects and pre-defined ranges of a distance between the vehicle and the plurality of objects. The RoI determination module may further determine Regions of Interest (RoIs) in the left image and the right image. The interpolation module may compute a left centroid and a right centroid of the RoIs corresponding to the left image and the right image respectively. The interpolation module may further generate a left sub-pixel image and a right sub-pixel image by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table. In one embodiment, the left centroid and the right centroid are interpolated based on the type of the object detected and the window size. For example, if the object is detected as a ‘vehicle’ in the left image and the right image which falls within ‘Range 1’ (wherein ‘Range 1’ indicates that the window size corresponding to the detected object, i.e. the vehicle, is 128x64 pixels), then the left sub-pixel image and the right sub-pixel image are generated by interpolating the left centroid and the right centroid, wherein the left centroid and the right centroid are interpolated by using a 0.5 sub-pixel resolution determined from the pre-defined feature descriptor table. The matching module may further process the generated left sub-pixel image and the right sub-pixel image using a block matching technique in order to determine a target centroid, of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image. The distance computation module may compute a distance between the vehicle and the object based on a triangulation technique using the difference between the target centroid and the reference centroid, a focal length of the stereo image capturing unit, and a baseline of the stereo image capturing unit.
[007] In another implementation, a method for computing the distance between a vehicle and an object appearing in the way of the vehicle is disclosed. In order to compute the distance, initially, a left image and a right image may be captured by using a stereo image capturing unit. After capturing the left image and the right image, an object present in the left image and the right image may be detected based on a pre-defined feature descriptor table. In one aspect, the pre-defined feature descriptor table comprises a plurality of objects, a plurality of window sizes of the plurality of objects and the corresponding pre-defined ranges of the distance between the vehicle and the plurality of objects based on the window sizes. Subsequent to the detection of the object, Regions of Interest (RoIs) may be determined in the left image and the right image. After determining the RoIs, a left centroid and a right centroid of the RoIs corresponding to the left image and the right image respectively may be computed. After computation of the left centroid and the right centroid, a left sub-pixel image and a right sub-pixel image may be generated by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table. In one embodiment, the left centroid and the right centroid are interpolated based on the type of the object detected and the window size. For example, if the object is detected as a ‘vehicle’ in the left image and the right image which falls within ‘Range 1’ (wherein ‘Range 1’ indicates that the window size corresponding to the detected object, i.e. the vehicle, is 128x64 pixels), then the left sub-pixel image and the right sub-pixel image are generated by interpolating the left centroid and the right centroid, wherein the left centroid and the right centroid are interpolated by using a 0.5 sub-pixel resolution determined from the pre-defined feature descriptor table. Subsequently, the left sub-pixel image and the right sub-pixel image may be processed by using a block matching technique in order to determine a target centroid, of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image. Once the target centroid is determined, a distance between the vehicle and the object may be computed based on a triangulation technique using the difference between the target centroid and the reference centroid, a focal length of the stereo image capturing unit, and a baseline of the stereo image capturing unit. In one aspect, the aforementioned method for computing the distance between the vehicle and the object is performed by using a processor executing programmed instructions stored in a memory.
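By way of a non-limiting illustration (not the claimed implementation), the block matching step described above may be sketched as follows. Images are modelled as plain 2D lists of grayscale values, and the search minimizes the sum of absolute differences (SAD) along the same row, which corresponds to the epipolar line of a rectified stereo pair; the patch size and search range are assumptions for illustration only.

```python
def patch(img, cy, cx, half):
    """Extract a (2*half+1)-square patch centred at row cy, column cx."""
    return [row[cx - half:cx + half + 1] for row in img[cy - half:cy + half + 1]]

def sad(p, q):
    """Sum of absolute differences between two equally sized patches."""
    return sum(abs(a - b) for pr, qr in zip(p, q) for a, b in zip(pr, qr))

def match_centroid(left, right, ref_cy, ref_cx, half=1, max_disparity=8):
    """Return the column of the best-matching (target) centroid in `right`
    for a reference centroid at (ref_cy, ref_cx) in `left`."""
    ref = patch(left, ref_cy, ref_cx, half)
    best_cx, best_cost = ref_cx, float("inf")
    for d in range(0, max_disparity + 1):
        cx = ref_cx - d  # for a rectified pair, the match lies to the left
        if cx - half < 0:
            break
        cost = sad(ref, patch(right, ref_cy, cx, half))
        if cost < best_cost:
            best_cx, best_cost = cx, cost
    return best_cx
```

The difference between the reference column and the returned target column is the disparity consumed by the triangulation step.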
[008] In yet another implementation, a non-transitory computer readable medium embodying a program executable in a computing device for computing a distance between a vehicle and an object appearing in the way of the vehicle is disclosed. The program may comprise a program code for capturing a left image and a right image by using a stereo image capturing unit. The program may comprise a program code for detecting an object present in the left image and the right image based on a pre-defined feature descriptor table comprising a plurality of objects, a plurality of window sizes of the plurality of objects and pre-defined ranges of the distance between the vehicle and the plurality of objects corresponding to the plurality of window sizes. The program may comprise a program code for determining Regions of Interest (RoIs) in the left image and the right image. The program may comprise a program code for computing a left centroid and a right centroid of the RoIs corresponding to the left image and the right image respectively. The program may comprise a program code for generating a left sub-pixel image and a right sub-pixel image by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table. In one embodiment, the left centroid and the right centroid are interpolated based on the type of the object detected and the window size. For example, if the object is detected as a ‘vehicle’ in the left image and the right image which falls within ‘Range 1’ (wherein ‘Range 1’ indicates that the window size corresponding to the detected object, i.e. the vehicle, is 128x64 pixels), then the left sub-pixel image and the right sub-pixel image are generated by interpolating the left centroid and the right centroid, wherein the left centroid and the right centroid are interpolated by using a 0.5 sub-pixel resolution determined from the pre-defined feature descriptor table. The program may comprise a program code for processing the left sub-pixel image and the right sub-pixel image using a block matching technique in order to determine a target centroid, of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image. The program may comprise a program code for computing a distance between the vehicle and the object based on a triangulation technique using the difference between the target centroid and the reference centroid, a focal length of the stereo image capturing unit, and a baseline of the stereo image capturing unit.

BRIEF DESCRIPTION OF THE DRAWINGS
[009] The foregoing detailed description of embodiments is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the disclosure, there is shown in the present document example constructions of the disclosure; however, the disclosure is not limited to the specific methods and apparatus disclosed in the document and the drawings.
[0010] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
[0011] Figure 1 illustrates a network implementation of a system for computing the distance between a vehicle and an object appearing in the way of the vehicle, in accordance with an embodiment of the present disclosure.
[0012] Figure 2 illustrates the system, in accordance with an embodiment of the present disclosure.
[0013] Figure 3 illustrates a method for computing distance between the vehicle and the object, in accordance with an embodiment of the present disclosure.
[0014] The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
DETAILED DESCRIPTION
[0015] Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and open ended, in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the exemplary systems and methods are now described. The disclosed embodiments are merely exemplary of the disclosure, which may be embodied in various forms.
[0016] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure is not intended to be limited to the embodiments illustrated, but is to be accorded the widest scope consistent with the principles and features described herein.
[0017] System(s) and method(s) for computing a distance between a vehicle and an object appearing in the way of the vehicle are disclosed. The vehicle may comprise a stereo image capturing unit, mounted within the vehicle, for capturing a plurality of images of the plurality of objects that may appear in the roadway area in front of the vehicle's path while the vehicle is in transit. It may be understood that the stereo image capturing unit comprises two cameras, i.e. a left camera and a right camera. Both the left camera and the right camera are capable of capturing the plurality of images. In one embodiment, an image, of the plurality of images, captured by the left camera is referred to as a left image. On the other hand, an image captured by the right camera is referred to as a right image. It may be understood that an object of the plurality of objects may be located at a varying distance from the vehicle. In order to compute the distance, the system and method disclosed further detect the object present in the left image and the right image. In one aspect, the object may be detected as one of a pedestrian, a vehicle, and a tree.
[0018] Upon detecting the object, a Region of Interest (RoI) may be determined in the left image and the right image. It may be understood that the RoI is determined by using a Histogram of Oriented Gradients (HOG) technique or an Enhanced Histogram of Oriented Gradients (EHOG) technique. Subsequent to the determination of the RoI, a disparity between the left image and the right image may be identified within the RoI. In one aspect, the disparity may be identified by interpolating the RoI in the left image and the right image using a block matching technique. After identification of the disparity, a triangulation method employing a fifth-order polynomial curve fitting approach may be used in order to calculate the distance between the object and the vehicle. The distance may then be displayed to the driver on a display unit in order to alert the driver that the object is located within the vicinity of the vehicle at the distance as calculated. The alert may then facilitate avoiding a collision or mitigating the consequences of the collision or the accident. Thus, in this manner, the system and the method may calculate the distance between the object and the vehicle, thereby facilitating driver assistance and safety.
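As a non-limiting illustration of the triangulation relation underlying the paragraph above (the fifth-order polynomial curve fitting refinement is omitted here), for a rectified stereo pair the distance follows the classic pinhole relation distance = focal length x baseline / disparity. The parameter values in the comment are illustrative assumptions, not values from this specification.

```python
def stereo_distance(focal_length_px, baseline_m, disparity_px):
    """Distance (metres) to the object under the classic pinhole stereo
    model: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline_m / disparity_px

# e.g. an assumed 700-pixel focal length, 0.3 m baseline and a measured
# 10.5-pixel disparity give stereo_distance(700, 0.3, 10.5) -> 20.0 metres
```

Note how a sub-pixel error in disparity translates into a large depth error at long range, which is the motivation for the sub-pixel interpolation described later in this specification.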
[0019] While aspects of the described system and method for computing a distance between a vehicle and an object appearing in the way of the vehicle may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary system.
[0020] Referring now to Figure 1, a network implementation 100 of a system 102 for computing the distance between a vehicle and an object appearing in the way of the vehicle is shown, in accordance with an embodiment of the present disclosure. In one aspect, the system 102 captures a left image and a right image by using a stereo image capturing unit 108. After capturing the left image and the right image, the system 102 detects an object present in the left image and the right image based on a pre-defined feature descriptor table comprising a plurality of objects, a plurality of window sizes of the plurality of objects and pre-defined ranges of a distance between the vehicle and the plurality of objects. Subsequent to the detection of the object, the system 102 determines Regions of Interest (RoIs) in the left image and the right image. After determining the RoIs, the system 102 computes a left centroid and a right centroid of the RoIs corresponding to the left image and the right image respectively. After computation of the left centroid and the right centroid, the system 102 generates a left sub-pixel image and a right sub-pixel image by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table. Subsequently, the system 102 processes the left sub-pixel image and the right sub-pixel image by using a block matching technique in order to determine a target centroid, of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image. Once the target centroid is determined, the system 102 computes a distance between the vehicle and the object based on a triangulation technique using the difference between the target centroid and the reference centroid, a focal length of the stereo image capturing unit 108, and a baseline of the stereo image capturing unit 108.
[0021] Although the present subject matter is explained considering that the system 102 is implemented in a vehicle 106, it may be understood that the system may also be implemented on a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a mainframe computer, a server, a network server, a tablet, a mobile phone, and the like. Further, a server may track the activities of the system 102, and the system 102 is communicatively coupled to the server through a network 104.
[0022] In one implementation, the network 104 may be a wireless network, a wired network or a combination thereof. The network 104 can be implemented as one of the different types of networks, such as an intranet, a local area network (LAN), a wide area network (WAN), the internet, and the like. The network 104 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network 104 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0023] Referring now to Figure 2, the system 102 is illustrated in accordance with an embodiment of the present disclosure. In one embodiment, the system 102 may include at least one processor 202, an input/output (I/O) interface 204, and a memory 206. The at least one processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 206.
[0024] The I/O interface 204 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 204 may allow the system 102 to interact with the user directly or through client devices. Further, the I/O interface 204 may enable the system 102 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 204 can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 204 may include one or more ports for connecting a number of devices to one another or to another server.
[0025] The memory 206 may include any computer-readable medium and computer program product known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. The memory 206 may include modules 208 and data 210.
[0026] The modules 208 include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. In one implementation, the modules 208 may include an image capturing module 212, a Region of Interest (ROI) determination module 214, an interpolation module 216, a matching module 217, a distance computation module 218, a display module 220, and other modules 222. The other modules 222 may include programs or coded instructions that supplement applications and functions of the system 102. The modules 208 described herein may be implemented as software modules that may be executed in the cloud-based computing environment of the system 102.
[0027] The data 210, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the modules 208. The data 210 may also include a system database 224 and other data 226. The other data 226 may include data generated as a result of the execution of one or more modules in the other modules 222.
[0028] It has been observed that a plurality of objects may appear in the way of the vehicle 106 in transit. It may be further understood that the plurality of objects may be located at varying distances from the vehicle's current position. Some of the objects may be located on either side of a road on which the vehicle 106 is moving, while some other objects may be located on the road in front of the vehicle 106. It has been further observed that a driver of the vehicle 106 may be distracted while driving and thus lose his/her concentration, which may become a cause of an accident. The distraction may occur due to the presence of sign boards and billboards located near the road. In order to avoid an accident due to such distraction, the system 102 facilitates cautioning the driver with an alert in real-time. The alert indicates that an object is appearing in the way of the vehicle 106. The system 102 generates the alert at a pre-defined distance from the object. For generating the alert, it is necessary to compute the distance between the vehicle 106 and the object appearing in the way. The detailed implementation for computing the distance is described below.
[0029] In one implementation, at first, a user may use the client devices to access the system 102 via the I/O interface 204 for generating the alert. The user may register using the I/O interface 204 in order to use the system 102. In one aspect, the user may access the I/O interface 204 of the system 102 for computing the distance between the vehicle 106 and the object appearing in the way of the vehicle 106. In order to compute the distance, the system 102 may employ the image capturing module 212, the RoI determination module 214, the interpolation module 216, the matching module 217, the distance computation module 218, and the display module 220. The detailed working of the plurality of modules is described below.
[0030] Initially, the image capturing module 212 captures a left image and a right image by using a stereo image capturing unit 108 deployed on the vehicle 106. It may be understood that the stereo image capturing unit 108 comprises a left camera and a right camera. The left camera is capable of capturing an image hereinafter referred to as a left image, and the right camera is capable of capturing an image hereinafter referred to as a right image. In one aspect, the left image and the right image may comprise a plurality of objects present on the path on which the vehicle 106 is moving. Examples of the plurality of objects may include, but are not limited to, a pedestrian, a vehicle, or a tree. The plurality of objects present in the left image and the right image may be placed at different ranges or at varying distances from the vehicle 106. It may be understood that the plurality of objects, captured by the stereo image capturing unit 108, are detected based on the methodology described in Indian Patent Application Number 794/MUM/2014, which is incorporated herein by reference.
[0031] Upon capturing the plurality of images, the RoI determination module 214 detects an object, of the plurality of objects, present in the left image and the right image. The object may be detected based on a pre-defined feature descriptor table. The pre-defined feature descriptor table comprises a plurality of objects, a plurality of window sizes of the plurality of objects, and pre-defined ranges of a distance between the vehicle and the plurality of objects. It may be understood that the pre-defined feature descriptor table stores the pre-defined ranges based on the plurality of objects and the plurality of window sizes of the plurality of objects. For example, if the object is detected as a ‘vehicle’ having a window size of 128x64, then it falls in the range of up to 20 meters (Range 1). Similarly, a window size of 64x64 approximately falls in the range of 20 to 50 meters (Range 2). Likewise, a window size of 32x32 approximately falls in the range greater than 50 meters (Range 3). Similarly, if the object is detected as a ‘pedestrian’ having a window size of 64x128, then it falls in the range of up to 15 meters (Range 1). A window size of 32x64 approximately falls in the range of 15 to 30 meters (Range 2). Likewise, a window size of 32x32 approximately falls in the range greater than 30 meters (Range 3). In one aspect, the RoI determination module 214 detects the object either as the pedestrian or the vehicle based on the window size of the object.
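As a non-limiting illustration, the pre-defined feature descriptor table described above may be modelled as a lookup keyed by the object type and window size. The window sizes and ranges are those stated in this paragraph; representing the table as a Python dictionary is an assumption made for clarity only.

```python
# Keys: (object type, (window width, window height)); values: pre-defined range.
FEATURE_DESCRIPTOR_TABLE = {
    ("vehicle", (128, 64)):    "Range 1: up to 20 m",
    ("vehicle", (64, 64)):     "Range 2: 20 to 50 m",
    ("vehicle", (32, 32)):     "Range 3: greater than 50 m",
    ("pedestrian", (64, 128)): "Range 1: up to 15 m",
    ("pedestrian", (32, 64)):  "Range 2: 15 to 30 m",
    ("pedestrian", (32, 32)):  "Range 3: greater than 30 m",
}

def lookup_range(object_type, window_size):
    """Map a detected object and its window size to a pre-defined range,
    or None if the combination is not in the table."""
    return FEATURE_DESCRIPTOR_TABLE.get((object_type, window_size))

# lookup_range("vehicle", (64, 64)) -> "Range 2: 20 to 50 m"
```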
[0032] After detecting the object, the RoI determination module 214 further assigns an identification number to the object. The identification number is assigned based on the object detected. For example, if the object is detected as the ‘vehicle’, the identification number ‘1’ is assigned to the object. On the other hand, if the object is detected as the ‘pedestrian’, the identification number ‘2’ is assigned to the object. Once the identification number is assigned, the RoI determination module 214 further stores the identification number corresponding to the object as detected in the system database 224 for future reference. Subsequently, the RoI determination module 214 determines Regions of Interest (RoIs) in the left image and the right image. In one aspect, the RoI may be determined by using a Histogram of Oriented Gradients (HOG) technique or an Enhanced Histogram of Oriented Gradients (EHOG) technique.
[0033] In one embodiment, in order to determine the RoI, an image (the left image and the right image) is processed at different stages. The RoI may be identified by segmenting the image into a plurality of image slices. The plurality of image slices may be segmented based on a size of the object and the distance of targeted detection. It may be understood that the image may be segmented to reduce noise in the image, isolate individual elements of the image, join disconnected parts of the image, sharpen edges of the image, and smooth the image using smoothing filters. After segmenting the image into the plurality of image slices, the RoI determination module 214 may further compute gradients corresponding to each image slice of the plurality of image slices. Based on the scales of objects at different distances, multiple window sizes are selected for the computation of the HOG, and the feature descriptors are further trained with the window sizes. In one aspect, the feature descriptors may be applied corresponding to each image slice in order to determine the RoI in the image.
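As a non-limiting illustration of the gradient computation underlying the HOG technique referenced above, the sketch below computes finite-difference gradients over a grayscale patch and accumulates gradient magnitudes into orientation bins. A full HOG descriptor additionally uses cell/block structure and normalisation, which are omitted here; the patch representation and bin count are illustrative assumptions.

```python
import math

def orientation_histogram(patch, n_bins=9):
    """Magnitude-weighted histogram of gradient orientations (0-180 degrees)
    over the interior pixels of a 2D grayscale patch."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]  # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]  # vertical gradient
            magnitude = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[min(int(angle / (180.0 / n_bins)), n_bins - 1)] += magnitude
    return hist
```

A patch containing only a vertical edge, for example, places all of its gradient energy in the first (near-0 degree) bin, which is the kind of signature the trained feature descriptors discriminate on.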
[0034] Subsequent to determination of the RoI, the interpolation module 216 computes a left centroid and a right centroid of the RoIs corresponding to the left image and the right image respectively. It may be understood that the left centroid and the right centroid indicate a pixel corresponding to the RoI of the left image and the right image respectively. After computing the left centroid and the right centroid, the interpolation module 216 generates a left sub-pixel image and a right sub-pixel image. The left sub-pixel image and the right sub-pixel image may be generated by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table. In one aspect, the left sub-pixel image and the right sub-pixel image are generated by using the pre-defined ranges, of the distance between the vehicle and the plurality of objects, and the object detected.
[0035] In order to understand the functioning of the interpolation module 216 for generating the left sub-pixel image and the right sub-pixel image, consider an example where a left image L1 comprising an object O1 and a right image R1 comprising an object O2 are captured by a VGA resolution camera V1. In accordance with the aforementioned description, if the object O1 and the object O2 are detected as ‘vehicle’ by the RoI determination module 214 in L1 and R1, and fall within ‘Range 1’ (wherein ‘Range 1’ indicates that the window size corresponding to the object detected, i.e. vehicle, is 128x64 pixels), then the left sub-pixel image and the right sub-pixel image are generated by interpolating the detected left centroid and right centroid using a 0.5 sub-pixel resolution determined from the pre-defined feature descriptor table. It may be understood that the left centroid and the right centroid are interpolated using a 0.5 sub-pixel resolution because the left image L1 and the right image R1 are captured by the VGA resolution camera V1; the sub-pixel resolution may vary depending upon the resolution of the stereo image capturing unit used for capturing the left image and the right image. Similarly, if the object O1 and the object O2 are detected as ‘vehicle’ in L1 and R1 and fall within ‘Range 2’ (wherein ‘Range 2’ indicates that the window size corresponding to the object detected, i.e. vehicle, is 64x64 pixels), then the left sub-pixel image and the right sub-pixel image are generated by interpolating the left centroid and the right centroid using a 0.25 sub-pixel resolution determined from the pre-defined feature descriptor table.
Likewise, if the object O1 and the object O2 are detected as ‘vehicle’ in L1 and R1 and fall within ‘Range 3’ (wherein ‘Range 3’ indicates that the window size corresponding to the object detected, i.e. vehicle, is 32x32 pixels), then the left sub-pixel image and the right sub-pixel image are generated by interpolating the left centroid and the right centroid using a 0.1 sub-pixel resolution determined from the pre-defined feature descriptor table. Thus, in this manner, the left sub-pixel image and the right sub-pixel image are dynamically generated by interpolating the left centroid and the right centroid respectively based on the type of the object detected (i.e. vehicle or pedestrian) and the window size corresponding to the object. This dynamic generation of the sub-pixel images (i.e. the left sub-pixel image and the right sub-pixel image) helps to compute the distance accurately even at longer distances. In addition, the dynamic generation of the sub-pixel images also reduces computational complexity, because the system 102 interpolates the left centroid and the right centroid based upon the determined distance of the object from the vehicle. For example, if the object is within a range of 20 meters from the vehicle, then the system 102 is capable of accurately computing the distance by interpolating the left centroid and the right centroid at only 0.5 sub-pixel resolution.
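The range-dependent interpolation above can be sketched as follows. The table contents mirror the VGA example in the text; the function `subpixel_patch`, its bilinear interpolation scheme, and its patch half-width are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

# Hypothetical feature-descriptor table mapping (object type, window size)
# to a sub-pixel step, mirroring the three 'vehicle' ranges in the example.
FEATURE_DESCRIPTOR_TABLE = {
    ("vehicle", (128, 64)): 0.5,   # Range 1
    ("vehicle", (64, 64)):  0.25,  # Range 2
    ("vehicle", (32, 32)):  0.1,   # Range 3
}

def subpixel_patch(image, centroid, step, half=4):
    """Bilinearly interpolate a patch around `centroid` on a sub-pixel grid
    with spacing `step`; this yields the 'sub-pixel image' around the RoI."""
    cy, cx = centroid
    ys = cy + np.arange(-half, half + step, step)
    xs = cx + np.arange(-half, half + step, step)
    y0 = np.clip(np.floor(ys).astype(int), 0, image.shape[0] - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, image.shape[1] - 2)
    fy = (ys - y0)[:, None]           # fractional offsets along y
    fx = (xs - x0)[None, :]           # fractional offsets along x
    a = image[np.ix_(y0, x0)]
    b = image[np.ix_(y0, x0 + 1)]
    c = image[np.ix_(y0 + 1, x0)]
    d = image[np.ix_(y0 + 1, x0 + 1)]
    return (a * (1 - fy) * (1 - fx) + b * (1 - fy) * fx
            + c * fy * (1 - fx) + d * fy * fx)
```

A coarser step (0.5) produces a smaller interpolated grid than a fine step (0.1) over the same neighborhood, which is the computational saving the description attributes to near-range objects.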
[0036] Once the left sub-pixel image and the right sub-pixel image are generated, the matching module 217 further processes the left sub-pixel image and the right sub-pixel image. In one aspect, the left sub-pixel image and the right sub-pixel image are processed by using a block matching technique in order to determine a target centroid R(x,y), of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image. In one aspect, the target centroid R(x,y) is computed by:
[0037] R(x,y) = \frac{\sum_{x',y'} \big( T'(x',y') \cdot I'(x+x',\,y+y') \big)}{\sqrt{\sum_{x',y'} T'(x',y')^{2} \cdot \sum_{x',y'} I'(x+x',\,y+y')^{2}}} ……(1)
[0038] wherein T'(x',y') is determined by:
[0039] T'(x',y') = T(x',y') - \frac{1}{W \cdot H} \sum_{x'',y''} T(x'',y'') ……(2)
[0040] and wherein I'(x+x',y+y') is determined by:
[0041] I'(x+x',\,y+y') = I(x+x',\,y+y') - \frac{1}{W \cdot H} \sum_{x'',y''} I(x+x'',\,y+y'') ……(3)
where ‘I’ indicates the source image in which a match with the template image is expected, ‘T’ indicates the template patch compared against an area of the source image, ‘W’ indicates the width of the template image, and ‘H’ indicates the height of the template image.
[0042] It may be understood that the block matching technique is applied on the target centroid R(x,y) and the reference centroid for determining the disparity between the left sub-pixel image and the right sub-pixel image. In order to determine the disparity, the block matching technique divides the left sub-pixel image and the right sub-pixel image into a plurality of blocks. Upon dividing the left sub-pixel image, a block BRC which includes the reference centroid is determined. Once BRC in the left sub-pixel image is determined, BRC is matched with each block (BL1, BL2, BL3…..BLN) of the plurality of blocks associated with the right sub-pixel image. After matching BRC with BL1, BL2, BL3…..BLN, the block matching technique determines the closest match between BRC and one of BL1, BL2, BL3…..BLN. The closest match is determined based on the minimal value of the Normalized Correlation Coefficient (NCC) obtained from the block matching technique. Upon determining the block (amongst BL1, BL2, BL3…..BLN) closest to BRC, the target centroid R(x,y), and thereby the disparity between the left sub-pixel image and the right sub-pixel image, is determined.
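The block matching step can be sketched as below. Note one hedge: the description selects the closest match by the minimal NCC value, whereas the conventional zero-mean NCC formulation peaks at the best match, so this illustrative sketch takes the maximum correlation. The function `match_block`, its block size, and its disparity search range are assumptions for illustration.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized correlation coefficient between equal-size blocks."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def match_block(left, right, centroid, block=8, max_disp=16):
    """Slide the reference block from the left image along the same row of the
    right image and return the disparity with the best NCC score."""
    y, x = centroid
    ref = left[y:y + block, x:x + block].astype(float)
    best_score, best_d = -np.inf, 0
    for d in range(max_disp + 1):
        if x - d < 0:
            break
        cand = right[y:y + block, x - d:x - d + block].astype(float)
        score = ncc(cand, ref)
        if score > best_score:
            best_score, best_d = score, d
    return best_d, best_score
```

With rectified stereo images, a single-row search like this suffices; the returned disparity feeds directly into the triangulation step that follows.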
[0043] After determining the disparity, the distance computation module 218 computes the distance between the object (present in the left image and the right image) and the vehicle 106 by using a triangulation method. In one aspect, the distance may be calculated by using the following formulation:
[0044] Z = \frac{B \cdot f}{x_l - x_r} ……(4)
[0045] where B is the base line of the stereo image capturing unit 108, f is the focal length of the stereo image capturing unit 108, and (x_l - x_r) is the disparity, i.e. the difference between the target centroid and the reference centroid. In one embodiment, this distance indicates a theoretical distance, and the actual distance may be calculated by using a function y = f(x), where ‘y’ is the actual distance and ‘x’ is the disparity. It may be understood that the function indicates a 5th order polynomial curve obtained by using a pre-defined relation between theoretical distances and actual distances, wherein the theoretical distances and actual distances are obtained offline. In one aspect, the triangulation method may implement a 5th order polynomial curve fitting approach in order to compute the distance between the object and the vehicle 106.
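The triangulation relation and the offline 5th-order curve fit can be sketched as follows; the calibration pairs below are purely illustrative placeholders, not data from the disclosure.

```python
import numpy as np

def stereo_distance(baseline_m, focal_px, disparity_px):
    """Theoretical distance from the triangulation relation Z = B*f/(xl - xr)."""
    return baseline_m * focal_px / disparity_px

# Offline calibration: fit a 5th-order polynomial y = f(x) mapping measured
# disparities (x) to ground-truth distances (y). Values are illustrative only.
disparities = np.array([40.0, 20.0, 10.0, 8.0, 5.0, 4.0, 2.5, 2.0])
actual_m    = np.array([5.1, 10.2, 20.5, 25.8, 41.0, 51.5, 80.2, 99.0])
coeffs = np.polyfit(disparities, actual_m, deg=5)

def calibrated_distance(disparity_px):
    """Actual distance estimate from the fitted 5th-order polynomial."""
    return float(np.polyval(coeffs, disparity_px))
```

The polynomial correction absorbs systematic error between the ideal pinhole model and the real camera rig, which matters most at long range where a sub-pixel disparity error translates into meters.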
[0046] After calculating the distance, the display module 220 may then display the distance to the driver on a display unit, thereby alerting the driver that the object is located within the vicinity of the vehicle at the calculated distance, in order to avoid a collision or to mitigate the consequences of a collision or an accident. Thus, in this manner, the system 102 may be capable of computing the distance between the vehicle 106 and the plurality of objects placed at varying distances from the vehicle 106.
[0047] Referring now to Figure 3, a method 300 for computing a distance between a vehicle and an object appearing in way of the vehicle is shown, in accordance with an embodiment of the present disclosure. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 300 may be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0048] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300 or alternate methods. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the disclosure described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 300 may be considered to be implemented in the above-described system 102.
[0049] At block 302, a left image and a right image may be captured by using a stereo image capturing unit 108. In one implementation, the left image and the right image may be captured by the image capturing module 212.
[0050] At block 304, an object present in a left image and a right image may be detected based on a pre-defined feature descriptor table comprising a plurality of objects, a plurality of window sizes of the plurality of objects and pre-defined ranges of a distance between the vehicle and the plurality of objects. In one implementation, the object may be detected by the RoI determination module 214.
[0051] At block 306, Regions of Interest (RoIs) may be determined in the left image and the right image. In one implementation, the RoIs may be determined by the RoI determination module 214.
[0052] At block 308, a left centroid and a right centroid of the RoIs may be computed corresponding to the left image and the right image respectively. In one implementation, the left centroid and the right centroid of the RoIs may be computed by the interpolation module 216.
[0053] At block 310, a left sub-pixel image and a right sub-pixel image may be generated by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table. In one implementation, the left sub-pixel image and the right sub-pixel image may be generated by the interpolation module 216.
[0054] At block 312, the left sub-pixel image and the right sub-pixel image may be processed using a block matching technique in order to determine a target centroid, of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image. In one implementation, the left sub-pixel image and the right sub-pixel image may be processed by the matching module 217.
[0055] At block 314, a distance between the vehicle and the object may be computed based on a triangulation technique using a difference between the target centroid and the reference centroid, a focal length of the stereo image capturing unit 108, and a base line of the stereo image capturing unit 108. In one implementation, the distance between the vehicle and the object may be computed by the distance computation module 218.
[0056] Although implementations for methods and systems for computing a distance between a vehicle and an object appearing in way of the vehicle have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for computing the distance between the vehicle and the object appearing in way of the vehicle.
[0057] Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include those provided by the following features.
[0058] Some embodiments enable a system and a method to detect objects located at different ranges or at varying distances from a vehicle and to calculate distance using a stereo camera in real-time, thereby facilitating automotive driver assistance and safety.
[0059] Some embodiments enable a system and a method to detect the objects with multiple scales and orientations by enhancing the conventional Histogram of Oriented Gradients (HOG) algorithm.
[0060] Some embodiments enable a system and a method to accurately detect objects up to a range of about 80 meters from the vehicle using a VGA resolution sensor with a 40 degree Field of View (FOV).
[0061] To enhance the efficiency of matching at longer distances, the interpolation is performed at a maximum of 0.1 pixel accuracy if the detected object is a vehicle, and at a maximum of 0.01 pixel if the detected object is a pedestrian.

CLAIMS:

WE CLAIM:

1. A method for computing distance between a vehicle and an object appearing in way of the vehicle (106), the method comprising:
capturing, by a processor, a left image and a right image by using a stereo image capturing unit (108);
detecting, by the processor, an object present in a left image and a right image based on a pre-defined feature descriptor table comprising a plurality of objects, a plurality of window sizes of the plurality of objects and pre-defined ranges of a distance between the vehicle (106) and the plurality of objects;
determining, by the processor, Regions of Interest (RoIs) in the left image and the right image;
computing, by the processor, a left centroid and a right centroid of the RoIs corresponding to the left image and the right image respectively;
generating, by the processor, a left sub-pixel image and a right sub-pixel image by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table;
processing, by the processor, the left sub-pixel image and the right sub-pixel image using a block matching technique in order to determine a target centroid, of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image; and
computing, by the processor, a distance between the vehicle (106) and the object based on a triangulation technique using a difference between the target centroid and the reference centroid, a focal length of the stereo image capturing unit (108), and a base line of the stereo image capturing unit (108).

2. The method of claim 1, wherein the plurality of objects comprises a pedestrian, a vehicle, or a tree.

3. The method of claim 1, wherein the plurality of window sizes comprises 128x64 pixels, 64 x64 pixels, or 32 x32 pixels.

4. The method of claim 1, wherein the pre-defined ranges of the distance between the vehicle (106) and the plurality of objects comprises 5 to 20 meters, 21 to 50 meters, or greater than 50 meters.

5. The method of claim 1, wherein the RoI is determined by using a Histogram of Oriented Gradient (HOG) technique or an Enhanced Histogram of Oriented Gradient (HOG) technique.

6. The method of claim 1, wherein the block matching technique is performed on the left sub-pixel image and the right sub-pixel image in order to determine closest match between the target centroid, of the right sub-pixel image, corresponding to the reference centroid in the left sub-pixel image, and wherein the closest match is determined based on minimal value of Normalized Correlation Coefficient (NCC) obtained from the block matching technique.

7. A system (100) for computing a distance between a vehicle (106) and an object appearing in way of the vehicle (106), the system (100) comprising:
a processor; and
a memory coupled to the processor, wherein the processor is capable of executing a plurality of modules stored in the memory, and wherein the plurality of modules comprising:
an image capturing module (212) for capturing a left image and a right image by using a stereo image capturing unit (108);
a Region of Interest (RoI) determination module (214) for
detecting an object present in a left image and a right image based on a pre-defined feature descriptor table comprising a plurality of objects, a plurality of window sizes of the plurality of objects and pre-defined ranges of a distance between the vehicle (106) and the plurality of objects; and
determining Regions of Interest (RoIs) in the left image and the right image;
an interpolation module (216) for
computing a left centroid and a right centroid of the ROIs corresponding to the left image and the right image respectively;
generating a left sub-pixel image and a right sub-pixel image by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table; and
a matching module (217) for processing the left sub-pixel image and the right sub-pixel image using a block matching technique in order to determine a target centroid, of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image; and
a distance computation module (218) for computing a distance between the vehicle (106) and the object based on a triangulation technique using a difference between the target centroid and the reference centroid, a focal length of the stereo image capturing unit (108), and a base line of the stereo image capturing unit (108).

8. The system (100) of claim 7 further comprising a display module (220) for displaying the distance, between the vehicle and the object based on the difference, on a display unit.

9. The system (100) of claim 7, wherein the matching module (217) performs the block matching technique on the left sub-pixel image and the right sub-pixel image in order to determine the closest match between the target centroid, of the right sub-pixel image, corresponding to the reference centroid in the left sub-pixel image, and wherein the closest match is determined based on the Normalized Correlation Coefficient (NCC).

10. A non-transitory computer readable medium embodying a program executable in a computing device for computing a distance between a vehicle (106) and an object appearing in way of the vehicle, the program comprising:
a program code for capturing a left image and a right image by using a stereo image capturing unit (108);
a program code for detecting an object present in a left image and a right image based on a pre-defined feature descriptor table comprising a plurality of objects, a plurality of window sizes of the plurality of objects and pre-defined ranges of a distance between the vehicle (106) and the plurality of objects;
a program code for determining Regions of Interest (RoIs) in the left image and the right image;
a program code for computing a left centroid and a right centroid of the ROIs corresponding to the left image and the right image respectively;
a program code for generating a left sub-pixel image and a right sub-pixel image by interpolating the left centroid and the right centroid respectively based on the pre-defined feature descriptor table;
a program code for processing the left sub-pixel image and the right sub-pixel image using a block matching technique in order to determine a target centroid, of the right sub-pixel image, corresponding to a reference centroid in the left sub-pixel image; and
a program code for computing a distance between the vehicle (106) and the object based on a triangulation technique using a difference between the target centroid and the reference centroid, a focal length of the stereo image capturing unit (108), and a base line of the stereo image capturing unit (108).

Documents

Application Documents

# Name Date
1 Form 3 [14-12-2016(online)].pdf 2016-12-14
2 Thumbs.db 2018-08-11
3 Form 2.pdf 2018-08-11
4 Form 2(complete specs).pdf 2018-08-11
5 Figure for Abstract.jpg 2018-08-11
6 1297-MUM-2014CORRESPONDENCE(7-10-2014).pdf 2018-08-11
7 1297-MUM-2014Certified Copy letter.pdf 2018-08-11
8 1297-MUM-2014-FORM 26(30-5-2014).pdf 2018-08-11
9 1297-MUM-2014-FORM 1(7-10-2014).pdf 2018-08-11
10 1297-MUM-2014-CORRESPONDENCE(30-5-2014).pdf 2018-08-11
11 1297-MUM-2014-FER.pdf 2020-01-09
12 1297-MUM-2014-OTHERS [09-07-2020(online)].pdf 2020-07-09
13 1297-MUM-2014-FER_SER_REPLY [09-07-2020(online)].pdf 2020-07-09
14 1297-MUM-2014-COMPLETE SPECIFICATION [09-07-2020(online)].pdf 2020-07-09
15 1297-MUM-2014-CLAIMS [09-07-2020(online)].pdf 2020-07-09
16 1297-MUM-2014-FORM-26 [03-08-2021(online)].pdf 2021-08-03
17 1297-MUM-2014-FORM-26 [03-08-2021(online)]-1.pdf 2021-08-03
18 1297-MUM-2014-Correspondence to notify the Controller [03-08-2021(online)].pdf 2021-08-03
19 1297-MUM-2014-Written submissions and relevant documents [17-08-2021(online)].pdf 2021-08-17
20 1297-MUM-2014-US(14)-HearingNotice-(HearingDate-06-08-2021).pdf 2021-10-03
21 1297-MUM-2014-PatentCertificate29-10-2021.pdf 2021-10-29
22 1297-MUM-2014-IntimationOfGrant29-10-2021.pdf 2021-10-29
23 1297-MUM-2014-RELEVANT DOCUMENTS [30-09-2023(online)].pdf 2023-09-30

Search Strategy

1 UPLOAD_SEARCH_27-11-2019.pdf

ERegister / Renewals

3rd: 10 Nov 2021 (From 07/04/2016 To 07/04/2017)
4th: 10 Nov 2021 (From 07/04/2017 To 07/04/2018)
5th: 10 Nov 2021 (From 07/04/2018 To 07/04/2019)
6th: 10 Nov 2021 (From 07/04/2019 To 07/04/2020)
7th: 10 Nov 2021 (From 07/04/2020 To 07/04/2021)
8th: 10 Nov 2021 (From 07/04/2021 To 07/04/2022)
9th: 10 Nov 2021 (From 07/04/2022 To 07/04/2023)
10th: 06 Apr 2023 (From 07/04/2023 To 07/04/2024)
11th: 05 Apr 2024 (From 07/04/2024 To 07/04/2025)
12th: 17 Mar 2025 (From 07/04/2025 To 07/04/2026)