
An Automated System for Testing ADAS Framework

Abstract: According to an embodiment of the invention, a method 100 and system 200 for testing an Advanced Driver Assistance System (ADAS) framework are disclosed. The method 100 may include a step 102 of capturing real environment data. The real environment data may include a plurality of objects such as pedestrians, vehicles, roads, sky, etc. The method 100 may further include a step 106 of annotating the plurality of objects in the captured real environment data. Once the plurality of objects is annotated, the annotated data may be stored in a data storage platform. The method 100 of testing the ADAS framework may further include a step 110 of analysing the stored annotated data by comparing the stored annotated data with test data. The test data used for the comparison may be the captured real environment data with modified environmental parameters.


Patent Information

Application #: 201841011616
Filing Date: 28 March 2018
Publication Number: 50/2019
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status: —
Email: patents@ltts.com
Parent Application: —
Patent Number: —
Legal Status: —
Grant Date: 11 July 2024
Renewal Date: —

Applicants

L&T TECHNOLOGY SERVICES LIMITED
DLF IT SEZ Park, 2nd Floor-Block 3, 1/124, Mount Poonamallee Road, Ramapuram, Chennai, Tamil Nadu, India - 600 089.

Inventors

1. GOPINATH C
M.49 Housing Unit, Manickampalyam, Collectorate PO, Erode, Tamil Nadu, India - 638011.
2. VENKATESAN S
108 VOC Street, Belur PO, Valapadi TK, Salem DT, Tamil Nadu, India - 636104.
3. KALAIVANI RAMALINGAM
4-1-130s Arasu Nagar, P.C.Patti, Theni DT, Tamil Nadu, India - 625531.
4. RAMESH BABU D
E-12, Arihant Amara No.49, Goparasanallur, Kumananchavadi, Chennai, Tamil Nadu, India - 600077.

Specification

FIELD OF INVENTION
The invention generally relates to the field of Advanced Driver Assistance Systems (ADAS) and, more particularly, to a system for testing an ADAS framework.
SUMMARY OF THE INVENTION
According to an embodiment of the invention, an automated system for testing an ADAS framework is disclosed. The input into the automated system may be data captured by multiple sensors mounted on a vehicle or a video captured by a video capturing device mounted on the vehicle. The automated system may annotate one or more objects such as vehicles, pedestrians, roads, trees, traffic signs, etc., in the video. The automated system may include three modes of annotating the objects: a manual mode, a semi-automatic mode, and a fully automatic mode. The automated system may generate ground truth data considering the annotations and other conditions in the video, and may further include a feature to compare the ground truth data received by the automated system with the data of a real autonomous system to determine the accuracy and the precision of the ground truth data.
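The flow described above can be illustrated with a short, hypothetical Python sketch. The mode names, the Annotation record, and the compare_with_system() helper are assumptions introduced only for illustration and are not part of the disclosed implementation; annotate() is left as a stub because the summary does not specify how each mode is realised.

# Minimal sketch of the summarised flow: choose an annotation mode, produce
# annotations (ground truth), then compare them with the real system's output.
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List


class AnnotationMode(Enum):
    MANUAL = "manual"          # user labels every object in every frame
    SEMI_AUTOMATIC = "semi"    # user labels the first frame, the rest is propagated
    FULLY_AUTOMATIC = "auto"   # no user interaction


@dataclass
class Annotation:
    frame: int
    label: str                 # e.g. "vehicle", "pedestrian", "traffic-sign"
    box: List[float]           # [x, y, w, h]


def annotate(frames: List[object], mode: AnnotationMode) -> List[Annotation]:
    """Stub: a real system would dispatch to a manual UI, a tracker, or a
    detection model depending on the selected mode."""
    return []


def compare_with_system(ground_truth: List[Annotation],
                        system_output: List[Annotation]) -> Dict[str, float]:
    """Very coarse figure: fraction of ground-truth labels matched by the
    real autonomous system's output."""
    matched = sum(
        any(g.frame == s.frame and g.label == s.label for s in system_output)
        for g in ground_truth
    )
    return {"accuracy": matched / len(ground_truth) if ground_truth else 0.0}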
BRIEF DESCRIPTION OF DRAWINGS
Other objects, features, and advantages of the invention will be apparent from the following description when read with reference to the accompanying drawings. In the drawings, wherein like reference numerals denote corresponding parts throughout the several views:

Figure 1 illustrates a block diagram of an automated system for testing ADAS framework according to an exemplary embodiment of the invention.
Figure 2 illustrates a flowchart of working of an automated system for testing ADAS framework according to an exemplary embodiment of the invention.
DETAILED DESCRIPTION OF DRAWINGS
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of exemplary embodiments. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
Figure 1 illustrates a block diagram of an automated system for testing an ADAS framework according to an exemplary embodiment of the invention. The automated system may enable a user to analyse and generate the ADAS data as well as determine the accuracy of the generated ADAS data. The input into the automated system may be the data captured by multiple sensors mounted on a vehicle, a video captured by a video capturing device mounted on the vehicle, or both. The multiple sensors mounted on the vehicle for capturing data may be sensors such as, but not limited to, a LIDAR sensor, a RADAR sensor, or any other suitable sensor known in the art. The captured video may include multiple objects, such as vehicles, pedestrians, roads, and traffic signs, in the video. The captured video may further cover varying weather conditions such as, but not limited to, cloudy, foggy, rainy, or sunny. The video may further be captured at different times of the day, such as morning, evening, or night. The view in the video captured by the video capturing device may further be blurred, visible, or faded.
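Purely as an illustration of the capture conditions mentioned above, the following hypothetical Python record shows one way such metadata (weather, time of day, visibility) might be attached to a captured video; the field names are assumptions, not part of the disclosure.

# Illustrative only: a record of the conditions that may vary per capture.
from dataclasses import dataclass

@dataclass
class CaptureConditions:
    weather: str      # e.g. "cloudy", "foggy", "rainy", "sunny"
    time_of_day: str  # e.g. "morning", "evening", "night"
    visibility: str   # e.g. "blurred", "visible", "faded"

conditions = CaptureConditions(weather="foggy", time_of_day="night", visibility="blurred")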
The automated system may annotate the objects in each frame of the video. The annotation of the objects in the video may be performed by multiple methods. Some of the annotation methods available in the automated system may be a fusion method or a semantic segmentation method. The semantic segmentation method recognizes and understands each frame of the video at the pixel level. The objects in the video may be annotated by any one of the multiple modes available in the system. The modes for annotating the objects in the video may be a manual annotation mode, a semi-automatic annotation mode, or a fully automatic annotation mode. The manual annotation mode may allow the user to annotate each object in each frame of the video by manually selecting the object. The semi-automatic annotation mode may require the user to annotate the objects manually in the first frame, after which the remaining annotation is performed automatically by a machine learning algorithm. The fully automatic annotation mode may annotate all the objects by itself without any human intervention.
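A minimal sketch of the semi-automatic mode, under the assumption that annotations from the manually labelled first frame are simply propagated forward frame by frame: the propagate_box() placeholder stands in for the machine learning algorithm, which the description does not detail.

# Semi-automatic flow: the user supplies boxes for the first frame, and the
# remaining frames are filled in automatically.
from typing import Dict, List

Box = List[float]  # [x, y, w, h]


def propagate_box(prev_box: Box, frame) -> Box:
    """Placeholder for the learned tracker/detector (assumption)."""
    return prev_box  # a real model would re-localise the object in this frame


def semi_automatic_annotation(frames: List[object],
                              first_frame_boxes: Dict[str, Box]) -> List[Dict[str, Box]]:
    annotations = [first_frame_boxes]          # frame 0: manual labels
    for frame in frames[1:]:                   # remaining frames: automatic
        previous = annotations[-1]
        annotations.append({obj: propagate_box(box, frame)
                            for obj, box in previous.items()})
    return annotations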
The automated system may include options for controlling the playback of the video captured by the video capturing device. The user may play, stop, rewind, or fast-forward the video, as well as go directly to a specific frame of the video as desired. The automated system may further include a manual inspection option. The manual inspection feature may allow the user to manually inspect the annotations in the video and to correct the annotations done automatically by the system. The manual inspection feature may enable the user to add, remove, or modify annotations. The user may further have an option to configure the system to annotate only the desired types of objects from the list of objects in the video. This selective annotation may enable the user to ignore objects that are unimportant for the calculation of the ADAS data. By way of an example, if the user wants to annotate only vehicles in the video, the user may select the vehicle option and ignore the other objects in the video.
On completing the annotation of the objects in the video, the automated system may enable the user to generate ground truth data. The automated system may be constructed on a cloud system that may enable the data to be processed simultaneously on multiple devices, thereby generating the ground truth data faster. The automated system may further include a feature for comparing the ground truth data received by the automated system with the data of the real autonomous system. The comparison of the ground truth data with the real autonomous system data may enable the precision and accuracy of the data received by the automated system to be determined. A graphical representation of the comparison between both these data may enable a user to understand the difference between them. The automated system may further include an option to provide statistics of all the objects in the video.
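The comparison step might look roughly like the following sketch, which matches ground-truth boxes against the real system's detections using an IoU threshold. The threshold, the matching rule, and the use of precision and recall as the reported figures are assumptions: the description only states that precision and accuracy are determined.

# Rough sketch of the ground-truth vs real-system comparison.
from typing import Dict, List, Tuple

BoxT = Tuple[float, float, float, float]  # (x, y, w, h)


def iou(a: BoxT, b: BoxT) -> float:
    """Intersection over union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0


def compare(ground_truth: List[BoxT], detections: List[BoxT],
            threshold: float = 0.5) -> Dict[str, float]:
    """Count ground-truth boxes matched by at least one detection."""
    matched = sum(any(iou(g, d) >= threshold for d in detections)
                  for g in ground_truth)
    precision = matched / len(detections) if detections else 0.0
    recall = matched / len(ground_truth) if ground_truth else 0.0
    return {"precision": precision, "recall": recall}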
Figure 2 illustrates a flowchart of the working of an automated system for testing an ADAS framework according to an exemplary embodiment of the invention. On collecting the data from the sensors or the video capturing device, the data is input into the automated system for annotating the objects. The data may be annotated by any one of the annotation modes, such as manual annotation, semi-automatic annotation, or fully automatic annotation. In the case of manual annotation, each of the objects is annotated manually. In the case of semi-automatic annotation, the region of interest is first marked and the details of the objects are entered manually; on completing this step, the automated system may annotate the objects in the other frames automatically. In the case of fully automatic annotation, the automated system may annotate the objects automatically, and the details of the objects in the semi-automatic and fully automatic annotation modes may be automatically taken into consideration. The fully automatic and semi-automatic annotation modes may further include 2D and 3D annotation modes. The 2D mode may capture the 2D view of the object and the 3D mode may capture the 3D view of the object.
The system may be constructed on a cloud where it may undergo parallel processing. The parallel processing may enable the video to be processed in a shorter time as compared to normal processing. Once the annotation of the objects is completed, the user may use the manual inspection option to inspect the annotations in the video. The user may use the options available in manual inspection to perform the necessary corrections in the video to improve the accuracy of the data. On satisfactorily annotating the objects, the user may generate the ground truth data with the automated system. The ground truth data may be compared with the real autonomous system data to determine the accuracy and precision of the ground truth data.
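For illustration only, the 2D and 3D annotation modes could use record shapes along these lines; the exact fields (position, size, yaw) are assumptions, as the description only distinguishes a 2D view from a 3D view of the object.

# Hypothetical annotation shapes: a 2D rectangle and a 3D cuboid.
from dataclasses import dataclass


@dataclass
class Box2D:
    x: float
    y: float
    width: float
    height: float


@dataclass
class Box3D:
    x: float
    y: float
    z: float
    length: float
    width: float
    height: float
    yaw: float  # heading of the object in the sensor frame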
It is understood that the above description is intended to be illustrative, and not restrictive. It is intended to cover all alternatives, modifications and equivalents as may be included within the spirit and scope of the invention as defined in the appended claims. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the terms "comprising" and "wherein," respectively.

Documents

Application Documents

# Name Date
1 Form5_As Filed_28-03-2018.pdf 2018-03-28
2 Form3_As Filed_28-03-2018.pdf 2018-03-28
3 Form2 Title Page_Provisional_28-03-2018.pdf 2018-03-28
4 Drawings_As Filed_28-03-2018.pdf 2018-03-28
5 Description Provisional_As Filed_28-03-2018.pdf 2018-03-28
6 Correspondence by Applicant_As Filed_28-03-2018.pdf 2018-03-28
7 Form1_As Filed_23-05-2018.pdf 2018-05-23
8 Form1_After Filling_23-05-2018.pdf 2018-05-23
9 Correspondence by Applicant_Form 1_23-05-2018.pdf 2018-05-23
10 Form-2 Title Page_Complete_28-03-2019.pdf 2019-03-28
11 Form-1_After Provisional_28-03-2019.pdf 2019-03-28
12 Drawing_After Provisional_28-03-2019.pdf 2019-03-28
13 Description Complete_As Filed_28-03-2019.pdf 2019-03-28
14 Correspondence by Applicant_After Provisional_28-03-2019.pdf 2019-03-28
15 Claims_After Provisional_28-03-2019.pdf 2019-03-28
16 Abstract_After Provisional_28-03-2019.pdf 2019-03-28
17 Form18_Normal Request_21-06-2019.pdf 2019-06-21
18 Correspondence by Applicant_Form18_21-06-2019.pdf 2019-06-21
19 201841011616-OTHERS [14-09-2021(online)].pdf 2021-09-14
20 201841011616-FER_SER_REPLY [14-09-2021(online)].pdf 2021-09-14
21 201841011616-CLAIMS [14-09-2021(online)].pdf 2021-09-14
22 201841011616-FER.pdf 2021-10-17
23 201841011616-Correspondence_Amend the email addresses_14-12-2021.pdf 2021-12-14
24 201841011616-PatentCertificate11-07-2024.pdf 2024-07-11
25 201841011616-IntimationOfGrant11-07-2024.pdf 2024-07-11

Search Strategy

1 searchE_15-03-2021.pdf

ERegister / Renewals

3rd: 29 Aug 2024 (from 28/03/2020 to 28/03/2021)
4th: 29 Aug 2024 (from 28/03/2021 to 28/03/2022)
5th: 29 Aug 2024 (from 28/03/2022 to 28/03/2023)
6th: 29 Aug 2024 (from 28/03/2023 to 28/03/2024)
7th: 29 Aug 2024 (from 28/03/2024 to 28/03/2025)
8th: 29 Aug 2024 (from 28/03/2025 to 28/03/2026)
9th: 29 Aug 2024 (from 28/03/2026 to 28/03/2027)
10th: 29 Aug 2024 (from 28/03/2027 to 28/03/2028)
11th: 29 Aug 2024 (from 28/03/2028 to 28/03/2029)
12th: 29 Aug 2024 (from 28/03/2029 to 28/03/2030)