
Method And System Of Verifying Deep Learning Models

Abstract: A method (300) and system (100) of verifying deep learning models is disclosed. A computing device (102) creates a replica environment based on one or more deployment parameters. Data control flow of the trained DL model is verified in the replica environment. A reference model performance score is determined based on processing of a reference dataset by the trained DL model. A plurality of test datasets is determined based on the reference dataset using a generative deep learning model. A test model performance score is determined based on processing of each of the plurality of test datasets by the trained DL model. Data performance of the trained DL model is verified based on a comparison of the test model performance score corresponding to each of the plurality of test datasets and the reference model performance score corresponding to the reference dataset. FIG. 1


Patent Information

Application #: 202441014356
Filing Date: 27 February 2024
Publication Number: 35/2025
Publication Type: INA
Invention Field: MECHANICAL ENGINEERING

Applicants

L&T TECHNOLOGY SERVICES LIMITED
DLF IT SEZ Park, 2nd Floor – Block 3, 1/124, Mount Poonamallee Road, Ramapuram, Chennai - 600 089, Tamil Nadu, India

Inventors

1. SURESH GUNASEKARAN
No.1, Kajivadai Street, Arani, Arni, Tiruvannamalai, Tamil Nadu, India – 632301
2. SANTHIYA RAJAN
New No:199 / Old No: 374 – A, Periyar Nagar, Sowripalayam Road, Coimbatore, Tamil Nadu, India – 641045.

Specification

DESCRIPTION
Technical Field
[001] This disclosure relates generally to deep learning models, and more particularly to a
method and system of verifying deep learning models.
BACKGROUND
[002] Deep learning models provide necessary support in handling laborious tasks, harnessing
their formidable computational power to attain performance levels comparable to human
capabilities in critical functions. These models comprise numerous neurons resembling the
human brain and operate by processing input data and learning from it using various
algorithms. Consequently, deep learning models help in performing various tasks speedily
and with enhanced efficiency. Such models find applications across a wide spectrum of
application areas and are being seamlessly integrated into the day-to-day aspects of human
workloads.
[003] Despite the remarkable technological achievements of deep learning models, they exhibit
limitations in certain aspects of learning and deployment. In numerous scenarios, it becomes
evident that these models are sensitive to noise or distortions. Consequently, when the input to
a deep learning model is subtly changed, even to a minimal extent, its accuracy may deteriorate
substantially. This is because deep learning models are trained through backpropagation and
possess non-intuitive characteristics and inherent blind spots. Further, the structure of deep
learning models is intricately linked to non-obvious nuances of the data distribution, and this
connection can lead to unexpected failures in the performance of the deep learning model.
These limitations render deep learning models less transparent and ultimately pose
challenges to efficient task performance.
[004] Therefore, there is a requirement for an efficient and effective methodology for verifying
deep learning models so as to analyse their robustness to noise or distortions.
SUMMARY OF THE INVENTION
[005] In an embodiment, a method for verifying a trained deep learning (DL) model is
disclosed. The method may include creating, by a computing device, a replica environment
based on one or more deployment parameters. In an embodiment, the replica environment may
be about the same as an initial deployment environment. The method may further include
verifying, by the computing device, data control flow of the trained DL model in the replica
environment. The method may further include uploading, by the computing device and upon
the verification of the data flow, a reference dataset that may include labelled test data and
corresponding label information of the reference dataset. The method may include determining,
by the computing device, a reference model performance score based on processing of the
reference dataset by the trained DL model in the replica environment. The method may further
include determining, by the computing device, a plurality of test datasets based on the reference
dataset using a generative deep learning model. In an embodiment, the plurality of test datasets
may include an adversarial dataset, a correlated dataset, a fooling dataset, an outlier dataset and
an augmented dataset. The method may further include determining, by the computing device,
a test model performance score based on processing of each of the plurality of test datasets by
the trained DL model. The method may further include verifying, by the computing device,
data performance of the trained DL model based on a comparison of the test model performance
score corresponding to each of the plurality of test datasets and the reference model
performance score corresponding to the reference dataset.
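By way of illustration only, the following Python sketch captures the verification flow of this embodiment: a reference model performance score is computed on the reference dataset, a test model performance score is computed on each generated test dataset, and the two are compared. The names ReplicaConfig, evaluate, and verify_data_performance, the use of accuracy as the score, and the pass/fail tolerance are assumptions introduced here for illustration; the disclosure does not fix an API or a threshold.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Sequence, Tuple

@dataclass
class ReplicaConfig:
    # Deployment parameters used to recreate the initial deployment
    # environment; the example values are assumptions.
    operating_system: str   # e.g. "ubuntu-22.04"
    language: str           # e.g. "python-3.10"
    framework: str          # e.g. "pytorch-2.1"

def evaluate(model: Callable, dataset: Sequence[Tuple[object, object]]) -> float:
    """Accuracy of `model` over (input, label) pairs; one possible score."""
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def verify_data_performance(model: Callable,
                            reference_dataset: Sequence[Tuple[object, object]],
                            test_datasets: Dict[str, Sequence[Tuple[object, object]]],
                            tolerance: float = 0.1) -> Dict[str, bool]:
    # Reference model performance score on the labelled reference dataset.
    reference_score = evaluate(model, reference_dataset)
    results = {}
    for name, dataset in test_datasets.items():
        # Test model performance score for this generated test dataset.
        test_score = evaluate(model, dataset)
        # Pass/fail rule: the test score may not drop more than
        # `tolerance` below the reference score (threshold is assumed).
        results[name] = (reference_score - test_score) <= tolerance
    return results
```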
[006] In another embodiment, a system of verifying a trained deep learning (DL) model is
disclosed. The system may include a processor, a memory communicably coupled to the
processor, wherein the memory may store processor-executable instructions, which when
executed by the processor may cause the processor to create a replica environment based on
one or more deployment parameters. In an embodiment, the replica environment is about the same
as an initial deployment environment. Further, the processor may verify data control flow of
the trained DL model in the replica environment. Upon the verification of the data flow, the
processor may upload a reference dataset including labelled test data and corresponding label
information of the reference dataset. The processor may further determine a reference model
performance score based on processing of the reference dataset by the trained DL model in the
replica environment. Further, the processor may determine a plurality of test datasets based on
the reference dataset using a generative deep learning model. In an embodiment, the plurality
of test datasets may include an adversarial dataset, a correlated dataset, a fooling dataset, an
outlier dataset and an augmented dataset. Further, the processor may determine a test model
performance score based on processing of each of the plurality of test datasets by the trained
DL model. The processor may further verify data performance of the trained DL model based
on a comparison of the test model performance score corresponding to each of the plurality of
test datasets and the reference model performance score corresponding to the reference dataset.
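The five test dataset variants named in this embodiment may, for illustration, be pictured as perturbations of the reference inputs. The sketch below is a stand-in only: the disclosure derives the test datasets with a generative deep learning model, whereas these NumPy transforms, and every parameter value in them, are assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

def adversarial(x: np.ndarray, grad_sign: np.ndarray, eps: float = 0.01) -> np.ndarray:
    # FGSM-style perturbation along the sign of the loss gradient;
    # grad_sign must be supplied by the DL framework in use.
    return x + eps * grad_sign

def augmented(x: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    # Additive Gaussian noise as a basic augmentation.
    return x + rng.normal(0.0, sigma, size=x.shape)

def outlier(x: np.ndarray, scale: float = 5.0) -> np.ndarray:
    # Scale the sample far outside the reference distribution.
    return x * scale

def correlated(x: np.ndarray, shift: int = 1) -> np.ndarray:
    # Introduce spatial correlation by rolling the input.
    return np.roll(x, shift, axis=-1)

def fooling(shape: tuple) -> np.ndarray:
    # Pure noise that carries no class signal at all.
    return rng.uniform(0.0, 1.0, size=shape)
```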

WE CLAIM:
1. A method (300) of verifying a trained deep learning (DL) model, the method (300)
comprising:
creating (302), by a computing device (102), a replica environment based on one
or more deployment parameters,
wherein the replica environment is about the same as an initial deployment
environment;
verifying (304), by the computing device (102), data control flow of the trained DL
model in the replica environment;
uploading (310), by the computing device (102) and upon the verification of the
data flow, a reference dataset comprising labeled test data and corresponding label
information of the reference dataset;
determining (316), by the computing device (102), a reference model performance
score based on processing of the reference dataset by the trained DL model in the replica
environment;
determining (318), by the computing device (102), a plurality of test datasets based
on the reference dataset using a generative deep learning model,
wherein the plurality of test datasets comprises an adversarial dataset, a
correlated dataset, a fooling dataset, an outlier dataset and an augmented dataset;
determining (326), by the computing device (102), a test model performance score
based on processing of each of the plurality of test datasets by the trained DL model; and
verifying (328), by the computing device (102), data performance of the trained DL
model based on a comparison of the test model performance score corresponding to each
of the plurality of test datasets and the reference model performance score corresponding
to the reference dataset.
2. The method (300) as claimed in claim 1, wherein the one or more deployment
parameters comprises a type of operating system, a type of programming language, and a
type of deep learning framework used in the initial deployment environment for
deployment of the trained deep learning model.
3. The method (300) as claimed in claim 1, wherein the data control flow of the trained DL
model is verified by:
uploading (306), by the computing device (102), the trained DL model in the replica
environment; and
verifying (308), by the computing device (102), an end-to-end control flow in the
trained DL model uploaded in the replica environment.
4. The method (300) as claimed in claim 1, wherein the determination of the reference
model performance score comprises:
determining (312), by the computing device (102), at least one of reference output
parameters based on the deployment of the trained DL model in the replica environment,
and
wherein the reference output parameters comprise accuracy, error,
confusion matrix, mean average precision, intersection over union, BLEU score,
word error rate, character error rate, and intersection over union matrix.
5. The method (300) as claimed in claim 4, wherein the reference output parameters are
determined by:
validating (314), by the computing device (102), the reference dataset based on a
shape compatibility between a shape of an input and/or an output corresponding to the
reference dataset.
6. The method (300) as claimed in claim 1, wherein the determination of the test model
performance score for each of the plurality of test datasets, comprises:
determining (320), by the computing device (102), a performance score based on
accuracy and error of each of the plurality of test datasets based on the processing of each
of the plurality of test datasets by the trained DL model;
calculating (322), by the computing device (102), an average of the performance
score of each of the plurality of test datasets; and
determining (324), by the computing device (102), a normalized performance score
based on normalizing the average of the performance score of each of the plurality of test
datasets in a range of about 0 to 1.
7. A system (100) for verifying a trained deep learning (DL) model, comprising:
a processor (104); and
a memory (106) communicably coupled to the processor (104), wherein the
memory stores processor-executable instructions, which, on execution, cause the processor
(104) to:
create a replica environment based on one or more deployment parameters,
wherein the replica environment is about the same as an initial
deployment environment;
verify data control flow of the trained DL model in the replica environment;
upon the verification of the data flow, upload a reference dataset comprising
labeled test data and corresponding label information of the reference dataset;
determine a reference model performance score based on processing of the
reference dataset by the trained DL model in the replica environment;
determine a plurality of test datasets based on the reference dataset using a
generative deep learning model,
wherein the plurality of test datasets comprises an adversarial
dataset, a correlated dataset, a fooling dataset, an outlier dataset and an
augmented dataset;
determine a test model performance score based on processing of each of
the plurality of test datasets by the trained DL model; and
verify data performance of the trained DL model based on a comparison of
the test model performance score corresponding to each of the plurality of test
datasets and the reference model performance score corresponding to the reference
dataset.
8. The system (100) as claimed in claim 7, wherein the one or more deployment parameters
comprises a type of operating system, a type of programming language, and a type of deep
learning framework used in the initial deployment environment for deployment of the
trained deep learning model.
9. The system (100) as claimed in claim 7, wherein to verify the data control flow of the
trained DL model, the processor (104) is configured to:
upload the trained DL model in the replica environment; and
verify an end-to-end control flow in the trained DL model uploaded in the replica
environment.
10. The system (100) as claimed in claim 7, wherein to determine the reference model
performance score, the processor (104) is configured to:
determine at least one of reference output parameters based on the deployment of
the trained DL model in the replica environment, and
wherein the reference output parameters comprise accuracy, error,
confusion matrix, mean average precision, intersection over union, BLEU score,
word error rate, character error rate, and intersection over union matrix.
11. The system (100) as claimed in claim 10, wherein to determine the reference output
parameters, the processor (104) is configured to:
validate the reference dataset based on a shape compatibility between a shape of an
input and/or an output corresponding to the reference dataset.
12. The system (100) as claimed in claim 11, wherein to determine the test model
performance score for each of the plurality of test datasets, the processor (104) is
configured to:
determine a performance score based on accuracy and error of each of the plurality
of test datasets based on the processing of each of the plurality of test datasets by the trained
DL model;
calculate an average of the performance score of each of the plurality of test
datasets; and
determine a normalized performance score based on normalizing the average of the
performance score of each of the plurality of test datasets in a range of about 0 to 1.
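Claims 6 and 12 describe computing a per-dataset performance score from accuracy and error, averaging those scores, and normalizing the average into a range of about 0 to 1. The following is a minimal sketch under stated assumptions: the combination accuracy * (1 - error) and the defensive clamp are illustrative choices, since the claims do not specify how accuracy and error are combined.

```python
def normalized_performance_score(per_dataset: dict) -> float:
    """per_dataset maps dataset name -> (accuracy, error), both in [0, 1]."""
    # Per-dataset performance score; combining accuracy and error as
    # accuracy * (1 - error) is an assumption, not fixed by the claims.
    scores = [acc * (1.0 - err) for acc, err in per_dataset.values()]
    # Average of the performance scores across the plurality of test datasets.
    average = sum(scores) / len(scores)
    # Normalize into a range of about 0 to 1; with inputs already in
    # [0, 1] this reduces to a defensive clamp.
    return min(max(average, 0.0), 1.0)

# Example usage with hypothetical scores for the five dataset types:
scores = {
    "adversarial": (0.78, 0.22),
    "correlated":  (0.90, 0.10),
    "fooling":     (0.40, 0.60),
    "outlier":     (0.65, 0.35),
    "augmented":   (0.88, 0.12),
}
print(normalized_performance_score(scores))
```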

Documents

Application Documents

# Name Date
1 202441014356-STATEMENT OF UNDERTAKING (FORM 3) [27-02-2024(online)].pdf 2024-02-27
2 202441014356-REQUEST FOR EXAMINATION (FORM-18) [27-02-2024(online)].pdf 2024-02-27
3 202441014356-PROOF OF RIGHT [27-02-2024(online)].pdf 2024-02-27
4 202441014356-POWER OF AUTHORITY [27-02-2024(online)].pdf 2024-02-27
5 202441014356-FORM 18 [27-02-2024(online)].pdf 2024-02-27
6 202441014356-FORM 1 [27-02-2024(online)].pdf 2024-02-27
7 202441014356-DRAWINGS [27-02-2024(online)].pdf 2024-02-27
8 202441014356-DECLARATION OF INVENTORSHIP (FORM 5) [27-02-2024(online)].pdf 2024-02-27
9 202441014356-COMPLETE SPECIFICATION [27-02-2024(online)].pdf 2024-02-27
10 202441014356-Form 1 (Submitted on date of filing) [28-01-2025(online)].pdf 2025-01-28
11 202441014356-Covering Letter [28-01-2025(online)].pdf 2025-01-28
12 202441014356-FORM 3 [17-04-2025(online)].pdf 2025-04-17