Abstract: Title: Method of Stitching Images and Creating Adaptive Surround View for the Vehicle. The present invention relates to a method of stitching multiple images and creating an adaptive surround view of the vehicle in response to the driving parameters of the vehicle. The system for capturing the multiple images and stitching them to provide a surround view of the vehicle comprises a plurality of cameras, an electronic control unit (ECU), an engine management module, a body control module, a display screen and a communication protocol, wherein all the units of the system communicate with each other, preferably through the CAN communication protocol. The method provides seamless stitching of multiple views through a unique central processing unit and creates an adaptive surround view in such a way that the relevant portion of the stitched image acquires more space on the screen depending on the driving parameters, so as to impart precise information as an aid for the driver to drive the vehicle safely.
Claims: We Claim
1. A method of stitching multiple images and creating adaptive surround view of the vehicle on the screen comprising the steps of:
- confirming the cranked position of the engine by the system, wherein the cranked position is confirmed by the engine management module which is in continuous communication with the electronic control unit (ECU) through the CAN communication protocol;
- acquiring a raw image in colour (with RGB channels) by each of the cameras, wherein the preferred number of cameras is four and each camera is positioned on the periphery of the vehicle, most preferably at four sides viz. front, rear, right side, and left side of the vehicle;
- converting the RGB colour image to a greyscale image by the fusion module using any one of the methods viz. the lightness method, the average method, or the luminosity method, so as to reduce the amount of data being processed;
- compensating differences in exposure, hue, and saturation through binary or adaptive thresholding by the fusion module;
- correcting curvature or distortion effects due to the lens in the image (known as barrel distortion correction or pincushion distortion correction) by the fusion module;
- converting the rectangular profile of the image to a triangular profile by the fusion module, wherein said fusion module is embedded in the central processing unit of the electronic control unit and is in continuous communication with each of the cameras through the multiplexer and camera driver;
- rotating the triangular-profiled image depending upon whether it is from the left, right or rear camera so as to make it ready for stitching, wherein the image rotation is performed by the adaptive view module embedded in the central processing unit, which is in continuous communication with the graphics processing unit (GPU) and the fusion module;
- converting the rectangular views to trapezoidal views based on the angle of rotation of the steering by the adaptive view module, wherein the steering angle is captured by the body control module which is in continuous communication with the electronic control unit (ECU);
- stitching the four views seamlessly to generate an integrated image of surround view;
- identifying and plotting the pivot point on the screen, wherein said pivot point is being plotted by the fusion module when the engine of the vehicle is ON and the vehicle is at zero speed;
- superimposing a predefined icon or image of a car on the pivot point; and
- projecting adaptive surround view image on the display of the system as an aid to driver while driving, wherein the said display unit is mounted on the instrument panel of the vehicle in the sight of the driver and is always in communication with the electronic control unit.
2. The method of stitching multiple images and creating adaptive surround view of the vehicle as claimed in claim 1, wherein the seamlessly stitched single image and the pivot point move in proportion to the speed and according to the direction of the vehicle.
3. The method of stitching multiple images and creating adaptive surround view of the vehicle as claimed in claim 2, wherein objects captured in the resultant view are identified and classified.
4. The method of stitching multiple images and creating adaptive surround view of the vehicle as claimed in claim 3, wherein the objects captured in the resultant view are tracked and probability of threat is computed.
5. The method of stitching multiple images and creating adaptive surround view of the vehicle as claimed in claim 4, wherein based on the level of severity of threat and/or probability of collision, an alert is generated; said alert being in the form of audio, visual and combination of both.
6. The method of stitching multiple images and creating adaptive surround view of the vehicle as claimed in claim 2, wherein at any point, one out of the four views gets priority, which means more space on the display and more space for displaying visual alerts.
Dated this 15th day of Mar. 2021
Description: FORM 2
The Patents Act, 1970
(39 of 1970)
&
The Patent Rules, 2005
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
TITLE OF THE INVENTION
“METHOD OF STITCHING IMAGES AND CREATING ADAPTIVE SURROUND VIEW FOR VEHICLE”
Endurance Technologies Limited
E-92, M.I.D.C. Industrial Area, Waluj,
Aurangabad – 431136, Maharashtra, India
The following specification particularly describes and ascertains the nature of this invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
[001] The present invention is related to advanced driver assistance system. More particularly, the present invention is related to a method of stitching multiple images and creating adaptive surround view of the vehicle in response to the driving parameters of the vehicle.
BACKGROUND OF THE INVENTION
[002] In today’s automobile market, human safety is given due importance. Therefore, each vehicle has a set of safety features that aid in driving the vehicle safely. Surround view systems in vehicles are one of the passive safety measures provided under the Advanced Driver Assistance System (ADAS) that enable a safe drive in congested city traffic.
[003] To create the surround view of the vehicle and project it as an input for safe driving, a variety of systems and methods are available in the public domain, each differing from the other with a varying degree of success. Document no. US10696231B2 discloses an in-vehicle image display system and an image processing method, and more particularly an in-vehicle image display system which is equipped in a vehicle to support the driving of a driver, and an image processing method. Though this document discloses an image processing method, it does not teach stitching of multiple images to create an adaptive surround view of the vehicle.
[004] Another prior art document, no. US10479275B2, describes a vehicle peripheral observation device, and more specifically relates to a technique for displaying an image captured using a camera on a monitor and enlarging the display of the captured image displayed on the screen. Document no. US10740972B2 presents systems, devices and methods for presentation and control of a virtual vehicle view with surround view imaging.
[005] None of the prior art references mentioned above discloses and/or teaches a method of stitching multiple images and creating an adaptive surround view for the driver. Hence, there is a long-pending unmet need for a method of stitching images, which is addressed by the present invention.
OBJECTIVES OF THE INVENTION
[006] The main object of the present invention is to provide a method of stitching multiple images and creating adaptive surround view of the vehicle.
[007] Another object of the present invention is to provide a method of stitching multiple images and creating adaptive surround view of the vehicle wherein said surround view is created in response to the driving parameters of the vehicle viz. direction of travel, speed and steering angle.
[008] A further object of the present invention is to provide a system for seamlessly stitching multiple views, wherein the system has a unique central processing unit.
[009] Yet another object of the invention is to provide a method and system for stitching multiple images to create an adaptive surround view wherein, depending on the vehicle driving parameter/s, the relevant portion of the stitched images acquires more space on the screen.
[0010] Yet another object of the present invention is to provide a system for creating an adaptive surround view in such a way that the view provides more precise information as an aid for the driver to drive the vehicle safely.
BRIEF DESCRIPTION OF DRAWINGS
[0011] This invention is illustrated in the accompanying drawings, throughout which like reference letters indicate corresponding parts in the various figures. The embodiments herein and advantages thereof will be better understood from the following description when read with reference to the following drawings, wherein
[0012] Figure 1 shows the schematic of the system that acquires multiple images and stitches the images to provide an adaptive surround view in accordance with the present invention.
[0013] Figure 2 shows the exploded view of the display of the system in accordance with the present invention.
[0014] Figure 3 shows the display screen of the system at different vehicular positions viz. (a) Vehicle at zero speed & engine ON, (b) vehicle in forward direction with speed more than zero, (c) vehicle in reverse direction with speed more than zero, (d) vehicle turning right with speed greater than zero, and (e) vehicle turning left with speed greater than zero.
[0015] Figure 4 is the presentation of shift of pivot point as the function of speed of the vehicle.
[0016] Figure 5 shows the functionality flowchart of the fusion module in accordance with the invention.
[0017] Figure 6 shows the functionality flowchart of the adaptive view module as per the invention.
[0018] Figure 7 shows the working flowchart of the system and method of stitching images in accordance with the present invention.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
[0019] Referring to Fig. 1, a system for capturing the multiple images and stitching them to provide a surround view of the vehicle comprises a plurality of cameras, an electronic control unit (ECU), an engine management module, a body control module, a display screen and a communication protocol, wherein all the units of the system communicate with each other, preferably through the CAN communication protocol.
[0020] The images for the surround view are obtained by positioning at least four cameras on the periphery of the vehicle, most preferably at four sides viz. front, rear, right side, and left side of the vehicle. The individual views captured by the cameras are stitched to create an integrated 360° view, which is projected on the screen of the system so that the driver is aware of his surroundings while driving (in both forward and reverse directions, and additionally while steering left or right) and can avoid accidents and damage. The integrated view is shown on an appropriate display screen, which can be of LCD or TFT type.
[0021] Each of the cameras captures the outside image of the vehicle, the body control module provides the input about the steering angle and/or turn direction (left or right), and the engine management module provides input about the direction of travel (front or rear) and the speed of the vehicle. All these inputs are fed to the electronic control unit of the system, which processes them and provides output in the form of a stitched surround view image on the screen of the vehicle.
[0022] The display used in this system is rectangular in profile. The top-left pixel is taken as the origin and referred to as (0, 0), and the bottom-right pixel is the extreme position or maxima, referred to in this case as (M, N). While creating a combined view for the surround view system, a reference (pivot) point is identified. This reference point is mathematically halfway between the origin and the maxima, as shown in Fig. 2. A predefined icon or image of a car is superimposed on this pivot or reference point. This image/icon of a car on the display represents the host vehicle in which the said surround view system is fitted.
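The reference-point arithmetic described above can be sketched as follows; a minimal illustration assuming integer pixel coordinates, with the function name and the 800×480 display size chosen here purely for illustration:

```python
def pivot_point(m: int, n: int) -> tuple:
    """Return the default pivot (reference) point of an M x N display:
    mathematically halfway between the origin (0, 0) and the
    bottom-right extreme (M, N)."""
    return (m // 2, n // 2)

# For an 800x480 display, the car icon is centred at (400, 240).
print(pivot_point(800, 480))
```

The car icon representing the host vehicle would then be drawn centred on this coordinate.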
[0023] The stitching of the views captured by the individual cameras is computationally intensive work, for which a dedicated electronic control unit (ECU) is developed and embedded with intelligent functionality modules for image processing. The electronic control unit (ECU) of the system comprises a multiplexer, a camera driver, an intelligent central processing unit (CPU), a power management unit and a graphics processing unit (GPU). The central processing unit is configured with a multi-core processor and embedded with firmware, the fusion module and the adaptive view module. Since seamless stitching of four images is too computationally intensive for a single-core microprocessor to handle with real-time throughput, the concept of multi-core processing is leveraged. This helps to eliminate latency and lag issues of the system, and the data is interpreted in real-time mode.
[0024] Each core is dedicated to acquiring the data from one camera and preprocessing it for noise removal, grey-scale conversion, and histogram equalization. Each core performs its computation in parallel, and all four cores are kept in sync with each other to compensate for any random variation in the dynamic throughput of each camera. Until the pipeline flags of all four cores are set, indicating completion of the entire processing, the data is held in the dynamic random access memory (DRAM) of the central processing unit. Once the flags from all four cores are set, the data is pushed to the graphics processing unit (GPU) for stitching the images. After this, the status flags are reset, which is the indication for the firmware to capture the new frame, and the process is repeated in a continuous loop.
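The flag-based synchronisation of the four cores described above can be sketched with ordinary threads standing in for the dedicated cores; this is an illustrative model only (the names `core_task`, `acquire_cycle` and the list standing in for DRAM staging are assumptions, not from the specification):

```python
import threading

NUM_CAMERAS = 4
flags = [threading.Event() for _ in range(NUM_CAMERAS)]   # per-core pipeline flags
frame_buffer = [None] * NUM_CAMERAS                        # stands in for DRAM staging

def core_task(idx: int, raw_frame: list) -> None:
    """One core: preprocess its camera's frame (placeholder for noise
    removal / grey-scale / equalisation), stage it, then set its flag."""
    frame_buffer[idx] = list(raw_frame)
    flags[idx].set()

def acquire_cycle(frames: list) -> list:
    """Run all four cores in parallel; hand data to the GPU only once
    every core's flag is set, then reset the flags so the firmware
    captures the next frame."""
    threads = [threading.Thread(target=core_task, args=(i, f))
               for i, f in enumerate(frames)]
    for t in threads:
        t.start()
    for ev in flags:
        ev.wait()                     # block until all four flags are set
    stitched_input = list(frame_buffer)   # pushed to the GPU for stitching
    for ev in flags:
        ev.clear()                    # reset: ready for the next frame
    for t in threads:
        t.join()
    return stitched_input
```

In the real ECU each "core" would run continuously rather than being spawned per frame, but the set/wait/clear flag cycle is the same.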
[0025] Stitching of the plurality of images captured by the plurality of cameras is done by the graphics processing unit (GPU) in coordination with the fusion module and the adaptive view module embedded in the central processing unit (CPU), and the stitched single image is produced on the screen. The adaptive view module may run partially on the GPU (for load sharing), and once a stitched image around the vehicle is created, an additional layer of processing for object detection and classification runs, fully utilizing the resources of the GPU. Open-source modules viz. YOLO (You Only Look Once) or SSD (Single Shot Detector) are optionally used for improving object recognition capability, as this leads to a passive application of the system where alerts can be generated when an object is a potential threat to the host vehicle.
[0026] Parallel processing of the raw video data by the dedicated cores of the CPU, together with the sync algorithm built into it, compensates for the dynamic throughput variation of the individual cores and enables a seamless 360° unified view as a single image, which is further processed by the interconnected GPU using the adaptive view module. After the object and threat level are computed, the selected data is sent to the connected video output device, display or HMI device, and is additionally made available as a CAN (or any other protocol) data packet to be used by other ECUs connected in the vehicle network.
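The specification does not detail how the threat level is computed; one plausible approach, sketched below purely as an assumption, is to classify the alert level from the time-to-collision (TTC) of a tracked object, derived from its range and closing speed (the thresholds are illustrative, not from the specification):

```python
def threat_level(distance_m: float, closing_speed_mps: float) -> str:
    """Classify threat from time-to-collision (TTC = range / closing speed).
    An object moving away (closing speed <= 0) poses no threat."""
    if closing_speed_mps <= 0:
        return "none"
    ttc = distance_m / closing_speed_mps
    if ttc < 1.0:
        return "high"      # e.g. combined audio + visual alert
    if ttc < 3.0:
        return "medium"    # e.g. visual alert
    return "low"

print(threat_level(10.0, 5.0))   # object 10 m away, closing at 5 m/s
```

The resulting level could drive the audio/visual alerts of claims 4 and 5 and be packed into the CAN data packet mentioned above.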
[0027] Thus, the method of stitching multiple images and creating an adaptive surround view of the vehicle on the screen comprises the following steps in a sequential manner:
- Confirming cranked position of the engine by the system,
- Acquiring raw image in colour (with RGB channel) by each of the cameras,
- Converting RGB colour image to greyscale image by the fusion module using any one of the methods viz. lightness method, average method and luminosity method so as to reduce the amount of data being processed,
- Compensating difference in exposure, hue, saturation through binary or adaptive threshold by the fusion module,
- Correcting curvature or distortion effects due to the lens in the image (known as barrel distortion correction or pincushion distortion correction) by the fusion module,
- Converting the rectangular profile of the image to a triangular profile by simply clipping the redundant data (i.e. image) in the overlapping field of view of adjacent cameras by the fusion module,
- Rotating the triangular profiled image depending upon whether it is from left, right or rear camera so as to make it ready for stitching,
- Converting the rectangular views to trapezoidal view based on the angle of rotation of steering,
- Stitching the four views seamlessly to generate an integrated image of surround view,
- Identifying and plotting the pivot point on the screen,
- Superimposing a predefined icon or image of a car on the pivot point, and
- Projecting adaptive surround view image on the display of the system as an aid to driver while driving.
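The grey-scale conversion step above names three standard formulas; a minimal per-pixel sketch of all three (the function name is illustrative, and the luminosity weights shown are one common perceptual weighting, since the specification leaves the exact coefficients open):

```python
def to_greyscale(r: int, g: int, b: int, method: str = "luminosity") -> int:
    """Convert one RGB pixel to a grey value using one of the three
    methods named in the specification."""
    if method == "lightness":
        # Midpoint of the brightest and darkest channels
        return (max(r, g, b) + min(r, g, b)) // 2
    if method == "average":
        # Simple mean of the three channels
        return (r + g + b) // 3
    if method == "luminosity":
        # Weighted mean reflecting the eye's greater sensitivity to green
        return int(0.21 * r + 0.72 * g + 0.07 * b)
    raise ValueError(f"unknown method: {method}")
```

In practice the fusion module would apply the chosen formula over the whole frame, reducing three channels of data per pixel to one.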
[0028] The direction of the vehicle has more significance and must be given consideration while creating the 360° view. When the vehicle is moving in the forward direction, the front view should occupy more space than the other three views. The same is applicable when the vehicle is reversing, as shown in Fig. 3. This dynamic allocation of views on the display enables the driver to drive in a safer way. For this to happen dynamically, the ECU of the system is in continuous communication with the engine management module as shown in Fig. 1. The direction of vehicle movement is derived from the gear engaged and the speed of the vehicle.
[0029] When the vehicle is moving in the forward direction, the central processing unit of the system makes the front view acquire more space on the screen, enabling better visibility of the front view to the driver. The position of the centre of reference (pivot point) on the screen is a function of the direction and speed of the vehicle, and it is shifted on the screen in the direction opposite to the vehicle movement, in proportion to the speed in that direction. The shift of the pivot point on the screen is mapped to the speed of the vehicle as shown in Fig. 4. There is a limit to where the revised pivot point can shift; thus, the maximum shift value is mapped to the maximum vehicle speed, which is a tunable and calibratable parameter in the central processing unit.
[0030] When the vehicle is moving in the reverse direction, the rear view is made to acquire more space on the screen as shown in Fig. 3(c). This feature provides an improved park-assist aid for the driver. The shift of the pivot point on the screen additionally incorporates the steering wheel angle. When the vehicle is moving in the forward direction and turning right or left, the respective view acquires more space on the screen as shown in Fig. 3(d) and 3(e), respectively.
[0031] The main function of the graphics processing unit (GPU) is to process the data so as to achieve real-time performance. The fusion module is active (Fig. 5) when the engine is ON but the transmission is in neutral, i.e. the vehicle speed is zero. When the driver engages the transmission (either forward or reverse), the adaptive view module comes into action as shown in Fig. 6.
[0032] Therefore, the proposed invention has novelty in terms of the customized central processing unit that enlarges the view of any of the quadrants automatically depending on the speed, direction and steering angle of the vehicle. Further, the system does not require a very powerful CPU or large memory. It provides real-time performance and a cost-viable solution. The processor of the system being multi-core, it does not suffer from latency and lag and hence the response time is not affected.
[0033] The central processing unit embedded with a customized processing unit for stitching a plurality of images is an intelligent feature of the invention that involves a technical advance as compared to the existing knowledge and makes the invention not obvious to a person skilled in the art; wherein the said CPU imparts a Technical Effect providing a solution to a technical problem (seamless stitching of images) at higher speed, with more economical use of memory, more effective data compression techniques, and improved reception/transmission of the signal to project the surround view on the screen; and the said CPU “making a specific portion of the surround view acquire dynamically more space on the screen in response to the vehicle operating conditions viz. travel direction (forward, rearward, left, right), speed and the steering angle” is a contribution to the state of the art in the field of ADAS technology, leading to Technical Advancement.
[0034] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
| # | Name | Date |
|---|---|---|
| 1 | 202121010774-STATEMENT OF UNDERTAKING (FORM 3) [15-03-2021(online)].pdf | 2021-03-15 |
| 2 | 202121010774-PROOF OF RIGHT [15-03-2021(online)].pdf | 2021-03-15 |
| 3 | 202121010774-FORM 1 [15-03-2021(online)].pdf | 2021-03-15 |
| 4 | 202121010774-FIGURE OF ABSTRACT [15-03-2021(online)].jpg | 2021-03-15 |
| 5 | 202121010774-DRAWINGS [15-03-2021(online)].pdf | 2021-03-15 |
| 6 | 202121010774-DECLARATION OF INVENTORSHIP (FORM 5) [15-03-2021(online)].pdf | 2021-03-15 |
| 7 | 202121010774-COMPLETE SPECIFICATION [15-03-2021(online)].pdf | 2021-03-15 |
| 8 | Abstract1.jpg | 2021-10-19 |
| 9 | 202121010774-FORM 18 [19-03-2022(online)].pdf | 2022-03-19 |
| 10 | 202121010774-FER.pdf | 2022-11-17 |
| 11 | 202121010774-FER_SER_REPLY [17-05-2023(online)].pdf | 2023-05-17 |
| 12 | 202121010774-US(14)-HearingNotice-(HearingDate-17-04-2025).pdf | 2025-02-28 |
| 13 | 202121010774-US(14)-ExtendedHearingNotice-(HearingDate-22-04-2025)-1400.pdf | 2025-04-11 |
| 14 | 202121010774-Correspondence to notify the Controller [16-04-2025(online)].pdf | 2025-04-16 |