Abstract: The present invention describes an Artificial Intelligence (AI) based Advanced Driver Assistance System (ADAS) testing tool (200). The testing tool (200) comprises a processing unit (204) configured to filter out, using a pre-trained noise cancellation ML model, the received audio signals to isolate noise signals from the received audio signals. Subsequently, the processing unit (204) is configured to, using a pre-trained audio classification ML model, identify one or more audio signals from the filtered-out audio signals relating to ADAS audio signals and classify each of the one or more identified signals into a frequency class based on their distinct frequency characteristics. Finally, the processing unit (204) is configured to evaluate each of the classified signals based on ADAS actions associated with the vehicle and generate test results of ADAS testing on each of the classified audio signals, based on the evaluation. FIG. 2
1. An Artificial Intelligence (AI) based Advanced Driver Assistance System (ADAS) testing tool
(200) comprising:
an input interface (202) configured to receive a plurality of audio signals from a vehicle;
a processing unit (204) operatively coupled to the input interface (202) and configured to:
filter out, using a pre-trained noise cancellation ML model (206-a), the received audio
signals to isolate noise signals from the received audio signals;
identify, using a pre-trained audio classification ML model (208-a), one or more audio
signals from the filtered-out audio signals relating to ADAS audio signals;
classify, using the pre-trained audio classification ML model (208-a), each of the one
or more identified signals into a frequency class based on their distinct frequency
characteristics;
evaluate each of the classified signals based on ADAS actions associated with the
vehicle; and
generate test results of ADAS testing on each of the classified audio signals, based on
the evaluation.
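For readability, the following is a minimal, non-limiting sketch of the pipeline recited in claim 1, assuming the pre-trained noise cancellation and audio classification models are available as plain callables; the names run_adas_audio_test, denoise_fn, classify_fn and expected_classes are illustrative and do not appear in the specification.

```python
# Minimal sketch of the claim-1 pipeline, assuming hypothetical pre-trained
# models exposed as plain callables; names and thresholds are illustrative.
import numpy as np

def run_adas_audio_test(audio_signals, denoise_fn, classify_fn, expected_classes):
    """audio_signals: list of 1-D numpy arrays captured from the vehicle.
    denoise_fn(signal) -> cleaned signal          (pre-trained noise cancellation model)
    classify_fn(signal) -> (is_adas, freq_class)  (pre-trained audio classification model)
    expected_classes: set of frequency classes implied by the current ADAS actions."""
    results = []
    for signal in audio_signals:
        cleaned = denoise_fn(signal)                 # filter out noise from the raw signal
        is_adas, freq_class = classify_fn(cleaned)   # identify and classify ADAS audio
        if not is_adas:
            continue                                 # ignore non-ADAS sounds
        passed = freq_class in expected_classes      # evaluate against ADAS actions
        results.append({"class": freq_class, "pass": passed})
    return results                                   # per-signal test results
```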
2. The testing tool (200) as claimed in claim 1, wherein, to generate the test results, the processing
unit (204) is further configured to:
identify the ADAS actions associated with the vehicle;
determine sample frequency classes based on the identified ADAS actions;
validate the frequency class of each of the one or more identified signals against the determined
sample frequency classes; and
generate the test results based on the validation.
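The validation of claim 2 can be illustrated with a hypothetical mapping from ADAS actions to the frequency classes their alert tones are expected to occupy; the mapping, action names and class names below are assumptions for illustration only.

```python
# Illustrative sketch of the claim-2 validation step; the action-to-class
# mapping is assumed, not taken from the specification.
ACTION_TO_CLASSES = {
    "forward_collision_warning": {"high_freq_beep"},
    "lane_departure_warning": {"mid_freq_chime"},
}

def validate(identified, active_actions):
    """identified: list of (signal_id, freq_class) pairs from the classifier.
    active_actions: ADAS actions reported by the vehicle during the test."""
    # sample frequency classes determined from the identified ADAS actions
    expected = set().union(*(ACTION_TO_CLASSES.get(a, set()) for a in active_actions))
    return [{"signal": sid,
             "class": cls,
             "result": "PASS" if cls in expected else "FAIL"}
            for sid, cls in identified]
```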
3. The testing tool (200) as claimed in claim 1, wherein to train the noise cancellation model (206-a),
the processing unit (204) is configured to:
obtain a plurality of clean audio samples and a plurality of contaminated audio signals,
including various noises, corresponding to the same clean audio samples from at least a server;
filter out the contaminated audio signals to isolate noise signals and clean audio samples; and
train the noise cancellation ML model (206-a) based on the filtering.
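A hedged sketch of the training data preparation of claim 3 follows: clean audio samples are paired with contaminated counterparts, and the pairs are used to fit a stand-in noise cancellation model (a single per-bin spectral gain). The fixed-SNR mixing and the spectral-gain model are assumptions; the specification does not fix a particular architecture.

```python
# Sketch of paired clean/contaminated training data for noise cancellation.
import numpy as np

def make_training_pairs(clean_samples, noise_samples, snr_db=5.0):
    """Mix each clean sample with noise at a fixed SNR to obtain the
    'contaminated' counterparts described in the claim."""
    pairs = []
    for clean, noise in zip(clean_samples, noise_samples):
        noise = noise[: len(clean)]
        scale = np.sqrt(np.mean(clean**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
        pairs.append((clean + scale * noise, clean))     # (contaminated, clean target)
    return pairs

def fit_spectral_gain(pairs, n_fft=512):
    """Estimate a per-bin gain mapping noisy magnitude spectra toward clean ones."""
    noisy_mag, clean_mag = [], []
    for noisy, clean in pairs:
        noisy_mag.append(np.abs(np.fft.rfft(noisy, n_fft)))
        clean_mag.append(np.abs(np.fft.rfft(clean, n_fft)))
    noisy_mag, clean_mag = np.mean(noisy_mag, axis=0), np.mean(clean_mag, axis=0)
    return clean_mag / (noisy_mag + 1e-8)                # learned noise-suppression gain
```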
4. The testing tool (200) as claimed in claim 1, wherein, in order to train the audio classification ML
model (208-a), the processing unit (204) is configured to:
obtain audio samples from at least a server;
generate multiple audio samples for each of the obtained audio samples by performing a time-shift operation;
extract time and frequency parameters for the generated audio samples;
train the classification ML model (208-a) by using the time and frequency parameters of the
generated audio samples to distinguish between an ADAS audio signal and a non-ADAS audio signal; and
train the classification ML model (208-a) by using the time and frequency parameters of the
generated ADAS audio samples to identify the audio classes of the audio samples.
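The time-shift augmentation and the time and frequency parameters of claim 4 might, purely as an illustration, be realised as follows; the specific features (RMS energy, zero-crossing rate, spectral centroid) and the shift values are assumptions, since the claim only requires time and frequency parameters.

```python
# Sketch of claim-4 data preparation: time-shift augmentation plus simple
# time- and frequency-domain parameters. Feature choices are assumed.
import numpy as np

def time_shift_augment(sample, shifts=(-800, 0, 800)):
    """Generate multiple samples per obtained sample by circular time shifting."""
    return [np.roll(sample, s) for s in shifts]

def extract_features(sample, sr=16000):
    rms = np.sqrt(np.mean(sample**2))                        # time-domain energy
    zcr = np.mean(np.abs(np.diff(np.sign(sample)))) / 2      # zero-crossing rate
    mag = np.abs(np.fft.rfft(sample))
    freqs = np.fft.rfftfreq(len(sample), 1 / sr)
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-8)    # spectral centroid (Hz)
    return np.array([rms, zcr, centroid])
```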
5. The testing tool (200) as claimed in claim 4, wherein the processing unit (204) is configured to:
obtain, from the at least a server, characteristics information of each of the generated audio
samples, wherein the characteristics information indicates whether the audio sample is the ADAS
audio signal and the respective audio class;
divide the generated audio samples into a plurality of sets of audio samples;
distinguish, by the classification ML model (208-a), the sets of audio samples between the
ADAS audio signal and the non-ADAS audio signal to generate a first set of predictions;
identify, by the classification ML model (208-a), the audio class of the sets of audio samples
to generate a second set of predictions;
generate a loss based on the generated first set of predictions, the generated second set of
predictions, and the obtained characteristics information; and
update parameters associated with the classification ML model (208-a) based on the generated
loss during the training of the classification ML model.
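As a non-limiting sketch of claim 5, a two-head classifier can produce the first set of predictions (ADAS versus non-ADAS) and the second set of predictions (audio class), with a combined loss driving the parameter update; the small PyTorch network and the equal loss weighting below are assumptions rather than details from the specification.

```python
# Sketch of a two-head classifier trained on a combined loss (claim 5).
import torch
import torch.nn as nn

class TwoHeadClassifier(nn.Module):
    def __init__(self, n_features=3, n_audio_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.is_adas_head = nn.Linear(32, 1)               # first set of predictions
        self.class_head = nn.Linear(32, n_audio_classes)   # second set of predictions

    def forward(self, x):
        h = self.backbone(x)
        return self.is_adas_head(h).squeeze(-1), self.class_head(h)

def train_step(model, optimizer, feats, is_adas, audio_class):
    adas_logit, class_logits = model(feats)
    loss = nn.functional.binary_cross_entropy_with_logits(adas_logit, is_adas)
    loss = loss + nn.functional.cross_entropy(class_logits, audio_class)
    optimizer.zero_grad()
    loss.backward()                                        # generate the loss, then
    optimizer.step()                                       # update model parameters
    return loss.item()
```

A typical call would pass feature tensors and the obtained characteristics information as targets, e.g. train_step(model, torch.optim.Adam(model.parameters()), feats, is_adas.float(), audio_class.long()).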
6. A method (500) for an Artificial Intelligence (AI) based Advanced Driver Assistance System
(ADAS) testing tool, comprising:
receiving (502) a plurality of audio signals from a vehicle;
filtering (504), using a pre-trained noise cancellation ML model, the received audio signals to
isolate noise signals from the received audio signals;
identifying (506), using a pre-trained audio classification ML model, one or more audio signals
from the filtered audio signals relating to ADAS audio signals;
classifying (508), using the pre-trained audio classification ML model, each of the one or
more identified signals into a frequency class based on their distinct frequency characteristics;
evaluating (510) each of the classified signals based on ADAS actions associated with the
vehicle; and
generating (512) test results of ADAS testing on each of the classified audio signals, based on
the evaluation.
7. The method (500) as claimed in claim 6, wherein the generation of the test results further comprises:
identifying (512) the ADAS actions associated with the vehicle;
determining (512) sample frequency classes based on the identified ADAS actions;
validating (512) the frequency class of each of the one or more identified signals against the
determined sample frequency classes; and
generating (512) the test results based on the validation.
8. The method (500) as claimed in claim 6, wherein training of the noise cancellation model
comprises:
obtaining (504) a plurality of clean audio samples and a plurality of contaminated audio
signals, including various noises, corresponding to the same clean audio samples from at least a
server;
filtering (504) the contaminated audio signals to isolate noise signals and clean audio samples;
and
training (504) the noise cancellation model based on the filtering.
9. The method (500) as claimed in claim 6, wherein training of the audio classification ML model
comprises:
obtaining (508) audio samples from at least a server;
generating (508) multiple audio samples for each of the obtained audio samples by performing
a time-shift operation;
extracting (508) time and frequency parameters for the generated audio samples;
training (508) the classification ML model by using the time and frequency parameters of the
generated audio samples to distinguish between an ADAS audio signal and a non-ADAS audio signal; and
training (508) the classification ML model by using the time and frequency parameters of the
generated ADAS audio samples to identify the audio classes of the audio samples.
10. The method (500) as claimed in claim 9, further comprising:
obtaining (508), from the at least a server, characteristics information of each of the generated
audio samples, wherein the characteristics information indicates whether the audio sample is the
ADAS audio signal and the respective audio class;
dividing (508) the generated audio samples into a plurality of sets of audio samples;
distinguishing (508), by the classification ML model, the sets of audio samples between the
ADAS audio signal and the non-ADAS audio signal to generate a first set of predictions;
identifying (508), by the classification ML model, the audio class of the sets of audio samples
to generate a second set of predictions;
generating (508) a loss based on the generated first set of predictions, the generated second set
of predictions, and the obtained characteristics information; and
updating (508) parameters associated with the classification ML model based on the generated
loss during the training of the classification ML model.