Abstract: A method for detecting an anomaly in a device is presented. The method includes receiving 202 a determined model 204 and one or more model coefficients corresponding to the determined model, and iteratively generating a trained model and one or more trained coefficients based on the determined model 510 and the one or more model coefficients 512 to generate 208 a validated model, wherein a measurement accuracy of the validated model 210 is greater than or equal to a measurement accuracy of the determined model 204. The method further includes detecting an anomaly in the device based at least on the validated model.
Claims:
1. A method for detecting an anomaly in a device, comprising:
receiving a determined model and one or more model coefficients corresponding to the determined model;
iteratively generating a trained model and one or more trained coefficients based on the determined model and the one or more model coefficients to generate a validated model, wherein a measurement accuracy of the validated model is greater than or equal to a measurement accuracy of the determined model; and
detecting an anomaly in the device based at least on the validated model.
2. The method of claim 1, wherein iteratively generating the trained model and the one or more trained coefficients comprises:
updating the one or more model coefficients to generate one or more updated model coefficients; and
updating the determined model based on the one or more updated model coefficients.
3. The method of claim 1, further comprising:
generating operational data corresponding to the device based on one or more signals representative of values of operational parameters of the device, inputs to the device, outputs of the device, or combinations thereof;
generating inspection data based on one or more signals representative of inspection data points of the device, observations of a user, or a combination thereof; and
selecting a first training operational data set and a first validation operational data set from the operational data, wherein the first training operational data set comprises a plurality of operational data vectors.
4. The method of claim 3, wherein selecting the first training operational data set and the first validation operational data set comprises processing the operational data based on a random sampling.
5. The method of claim 3, wherein iteratively generating the trained model and one or more trained coefficients based on the determined model and the one or more model coefficients to generate the validated model comprises:
generating a first trained model and one or more first trained coefficients by iteratively generating an updated model;
determining a first error metric corresponding to the first trained model based on the first validation operational data set, the inspection data, and the first trained model; and
selecting the first trained model as the validated model and the one or more first trained coefficients as one or more validated coefficients based on the first error metric.
6. The method of claim 5, wherein iteratively generating the updated model comprises updating the one or more model coefficients based on the determined model, the one or more model coefficients, the first training operational data set, and the inspection data.
7. The method of claim 5, further comprising:
iteratively selecting a subsequent training operational data set and a subsequent validation operational data set from the operational data;
iteratively generating a subsequent trained model and one or more subsequent trained coefficients based on the determined model, the one or more model coefficients, the subsequent training operational data set, and the inspection data; and
selecting the subsequent trained model as the validated model and the one or more subsequent trained coefficients as the one or more validated coefficients based on the subsequent trained model and a corresponding subsequent error metric.
8. The method of claim 5, wherein determining the first error metric comprises determining a root mean square of errors across the inspection data points in the inspection data.
9. The method of claim 5, wherein selecting the first trained model comprises:
comparing the first error metric to a designated error metric; and
determining whether the first trained model satisfies an error metric requirement based on the comparison of the first error metric to the designated error metric.
10. The method of claim 5, wherein generating the first trained model comprises:
initializing an initial model and one or more initial coefficients corresponding to the initial model;
equating the determined model to the initial model and the one or more model coefficients to the one or more initial coefficients;
iteratively selecting one of the plurality of operational data vectors corresponding to the first training operational data set;
iteratively generating one or more updated coefficients and the updated model by updating the one or more initial coefficients based on the one of the plurality of operational data vectors and the initial model;
iteratively performing one of equating the updated model to the initial model and equating the one or more updated coefficients to the one or more initial coefficients or retaining the initial model and the one or more initial coefficients based on a measurement accuracy of the initial model and a measurement accuracy of the updated model; and
selecting the initial model as the first trained model and the one or more initial coefficients as the one or more first trained coefficients based on the first training operational data set.
11. The method of claim 10, further comprising:
generating a list comprising a plurality of listed trained models, a plurality of corresponding listed trained coefficients, and a plurality of corresponding listed error metrics; and
selecting one of the plurality of listed trained models as the validated model and one or more of the plurality of corresponding listed trained coefficients as the one or more validated coefficients based on a determined number of iterations.
12. The method of claim 11, wherein generating the list comprises storing the first trained model, the one or more first trained coefficients, the subsequent trained model, the one or more subsequent trained coefficients, and the corresponding subsequent error metric in the list.
13. The method of claim 1, further comprising generating the one or more model coefficients by:
generating a plurality of training operational data sets, a plurality of corresponding inspection data points, and a plurality of determined vector coefficients;
iteratively determining inspection outputs by updating the determined model based on the plurality of training operational data sets and the plurality of determined vector coefficients;
determining errors corresponding to the plurality of determined vector coefficients based on the determined inspection outputs and the plurality of corresponding inspection data points; and
selecting one or more of the plurality of determined vector coefficients as the one or more model coefficients based on the errors.
14. A system for detecting an anomaly in a device, comprising:
a data repository; and
a processing subsystem operatively coupled to the data repository and configured to:
iteratively generate a trained model and one or more trained coefficients based on a determined model and one or more determined coefficients corresponding to the determined model to generate a validated model, wherein a measurement accuracy of the validated model is greater than or equal to a measurement accuracy of the determined model; and
detect an anomaly in the device based at least on the validated model.
15. The system of claim 14, wherein the processing subsystem is further configured to:
generate operational data corresponding to the device based on one or more signals representative of values of operational parameters of the device, inputs to the device, outputs of the device, or combinations thereof;
generate inspection data based on one or more signals representative of inspection data points of the device, observations of a user, or a combination thereof; and
select a first training operational data set and a first validation operational data set from the operational data,
wherein the first training operational data set comprises a plurality of operational data vectors.
16. The system of claim 15, wherein the processing subsystem is further configured to:
generate a first trained model and one or more first trained coefficients by iteratively generating an updated model;
determine a first error metric corresponding to the first trained model based on the first validation operational data set, the inspection data, and the first trained model; and
select the first trained model as the validated model and the one or more first trained coefficients as one or more validated coefficients based on the first error metric.
17. The system of claim 16, wherein the processing subsystem is further configured to:
iteratively select a subsequent training operational data set and a subsequent validation operational data set from the operational data;
iteratively generate a subsequent trained model and one or more subsequent trained coefficients based on the determined model, the one or more determined coefficients, the subsequent training operational data set, and the inspection data;
iteratively determine a corresponding subsequent error metric corresponding to the subsequent trained model based on the subsequent validation operational data set, the inspection data, and the subsequent trained model; and
select the subsequent trained model as the validated model and the one or more subsequent trained coefficients as the one or more validated coefficients based on the subsequent trained model and the corresponding subsequent error metric.
18. The system of claim 17, wherein the processing subsystem is configured to:
initialize an initial model and one or more initial coefficients corresponding to the initial model;
equate the determined model to the initial model and the one or more determined coefficients to the one or more initial coefficients;
iteratively select one of the plurality of operational data vectors from the first training operational data set;
iteratively generate one or more updated coefficients and the updated model by updating the one or more initial coefficients based on the one of the plurality of operational data vectors and the initial model;
iteratively perform one of equating the updated model to the initial model and equating the one or more updated coefficients to the one or more initial coefficients or retaining the initial model and the one or more initial coefficients based on a measurement accuracy of the initial model and a measurement accuracy of the updated model; and
select the initial model as the first trained model and the one or more initial coefficients as the one or more first trained coefficients based on the first training operational data set.
19. The system of claim 18, wherein the one or more updated coefficients are iteratively generated by updating the one or more initial coefficients by applying a determined technique on the one of the plurality of operational data vectors and the initial model.
20. The system of claim 18, wherein the processing subsystem is further configured to:
generate a list comprising a plurality of listed trained models, a plurality of corresponding listed trained coefficients, and a plurality of corresponding listed error metrics; and
select one of the plurality of listed trained models as the validated model and one or more of the plurality of corresponding listed trained coefficients as the one or more validated coefficients based on a determined number of iterations.
21. The system of claim 19, wherein the determined technique comprises a Kalman Filtering technique, an Extended Kalman Filtering technique, an iterative Extended Kalman Filtering technique, an Unscented Kalman Filtering technique, or combinations thereof.
22. A processing system operatively coupled to a data repository, and configured to:
iteratively generate a trained model and one or more trained coefficients based on a determined model and one or more model coefficients corresponding to the determined model to generate a validated model, wherein a measurement accuracy of the validated model is greater than or equal to a measurement accuracy of the determined model; and
determine an anomaly in a device based at least on the validated model.
Description:
BACKGROUND
[0001] The subject matter disclosed herein generally relates to
determination of one or more anomalies in a device and, more specifically, to
updating models to generate validated models that are used for the
determination of the anomalies in the device.
[0002] Prognostic models have been widely used in various applications
such as turbo-machinery and medicine. In the field of turbo-machinery,
prognostic models have been employed for predicting anomalies such as cracks,
wear, and the like. Furthermore, these prognostic models have also been used for
estimating remaining useful life of the systems.
[0003] Typically, prognostic models are transfer functions that relate
outputs of systems to a combination of inputs to the systems and model
coefficients. The inputs to the systems may for example include operational
parameters and other parameters. Currently available techniques for developing
the prognostic models are based on field-data obtained from the systems installed
in the field or from simulation/test data that at least qualitatively represents field
observations of the systems. However, since obtaining field-data for
development of the prognostic models is a difficult and onerous task, the
prognostic models are typically developed using simulation data. Even when the
prognostic models are developed using the field-data, the prognostic models may
not represent or conform to new field-data, and hence may lead to inaccurate
measurements or predictions. Additionally, the presently available techniques
fail to update the prognostic models based on potential continuous deviations in
the new field-data and behavior of the surrounding environment, thereby
resulting in inaccurate predictions/measurements. Moreover, the currently
available techniques also fail to validate the updated model.
BRIEF DESCRIPTION
[0004] In accordance with aspects of the present specification, a method
for detecting an anomaly in a device is presented. The method includes receiving
a determined model and one or more model coefficients corresponding to the
determined model, and iteratively generating a trained model and one or more trained
coefficients based on the determined model and the one or more model
coefficients to generate a validated model, wherein a measurement accuracy of
the validated model is greater than or equal to a measurement accuracy of the
determined model. The method further includes detecting an anomaly in the
device based at least on the validated model.
[0005] In accordance with another aspect of the present specification, a
system for detecting an anomaly in a device is presented. The system includes a
processing subsystem operatively coupled to a data repository and configured to
iteratively generate a trained model and one or more trained coefficients based on
a determined model and one or more determined coefficients corresponding to
the determined model to generate a validated model, wherein a measurement
accuracy of the validated model is greater than or equal to a measurement
accuracy of the determined model. The processing subsystem is further
configured to detect an anomaly in the device based at least on the validated
model.
[0006] In accordance with yet another aspect of the present specification, a
processing system operatively coupled to a data repository is presented. The
processing subsystem is configured to iteratively generate a trained model and
one or more trained coefficients based on a determined model and one or more
model coefficients corresponding to the determined model to generate a validated
model, wherein a measurement accuracy of the validated model is greater than or
equal to a measurement accuracy of the determined model. The processing
subsystem is further configured to determine an anomaly in the device based at
least on the validated model.
DRAWINGS
[0007] These and other features and aspects of embodiments of the
present invention will become better understood when the following detailed
description is read with reference to the accompanying drawings in which like
characters represent like parts throughout the drawings, wherein:
[0008] Fig. 1 is one embodiment of a system for determining an anomaly
in a device, in accordance with aspects of the present specification;
[0009] Fig. 2 is a flow chart illustrating a method for determining an
anomaly in a device, in accordance with aspects of the present specification;
[0010] Fig. 3A and Fig. 3B are a flow chart illustrating a method for
generating a validated model and validated coefficients, in accordance with
aspects of the present specification;
[0011] Fig. 4A and Fig. 4B are a flow chart illustrating a method for
generating a trained model and trained coefficients, in accordance with aspects of
the present specification; and
[0012] Fig. 5 is a flow chart illustrating a method for generating model
coefficients corresponding to a determined model, in accordance with aspects of
the present specification.
DETAILED DESCRIPTION
[0013] Unless defined otherwise, technical and scientific terms used
herein have the same meaning as is commonly understood by one of ordinary
skill in the art to which this disclosure belongs. The terms “a” and “an” do not
denote a limitation of quantity, but rather denote the presence of at least one of
the referenced items. The term “or” is meant to be inclusive and mean one, some,
or all of the listed items. The use of “including,” “comprising” or “having” and
variations thereof herein are meant to encompass the items listed thereafter and
equivalents thereof as well as additional items. The terms “control system,”
“controller” or “a processing subsystem” may include either a single component
or a plurality of components, which are either active and/or passive and are
connected or otherwise coupled together to provide the described function or
functions. While the present systems and methods are explained in the context
of updating prognostic models, they may also be used for updating models
other than prognostic models.
[0014] As will be described in detail hereinafter, various systems and
methods for enhanced detection of anomalies in a device are presented. In
particular, the systems and methods update a determined model to generate a
validated model such that a performance of the validated model is better or
similar to a performance of the determined model, thereby ensuring that the
performance of the validated model is not worse than that of the determined
model. By way of example, it may be validated/determined that the performance
of the validated model is better than the performance of the determined model
when a measurement accuracy and/or prediction accuracy of the validated model
is better than a measurement accuracy and/or prediction accuracy of the
determined model. As used herein, the term “measurement accuracy” may be
used to refer to closeness of agreement between values of a quantity or a
parameter measured and/or determined by a model to a corresponding accurate
value of the quantity.
[0015] As used herein, the term “model” may be used to refer to a
representation of the functioning of a device or a system via a set of mathematical
or statistical equations. Also, as used herein, the term “determined model” refers
to a model that is generated for the first time or a validated model that is
generated by updating a determined model/validated model using one or more
techniques and/or additional data. As used herein, the term “better” refers to at
least about 1% increase in measurement/determination/prediction accuracy.
[0016] Fig. 1 is an example of one embodiment of a system 100 for
determining an anomaly in a device 102, such as a turbomachine. It may be
noted that determining the anomaly in the device 102 includes predicting the
anomaly in the device 102. Further, determining the anomaly in the device 102
may entail determining anomalies in one or more components of the device 102.
The anomaly may include a defect or a fault in the device 102. For example, the
anomaly may include a crack in a blade of a turbine. In accordance with
exemplary aspects of the present specification, the system 100 is configured to
determine the anomaly in the one or more components of the device 102 in
real-time or near real-time using real-time operational data.
[0017] The device 102 may include a turbomachine, in certain
embodiments. Further, the device 102 receives inputs 104 and generates outputs
106. It may be noted that the inputs 104 and/or outputs 106 may be
representative of one or more operational parameters. As used herein, the term
“operational parameters” is representative of information regarding an
operational environment of the device 102 and/or components of the device 102.
[0018] The system 100 further includes one or more sensing devices 108.
The sensing devices 108 are operatively coupled to the device 102. One or more
of the sensing devices 108 measure the operational parameters of the device 102
and generate signals representative of values of the operational parameters of the
device 102. In certain embodiments, the values may be numerical values.
Additionally, one or more of the sensing devices 108 may generate signals
representative of inspection data points of the device 102. Furthermore, a user
110 may observe the device 102 to generate observations of the device 102. As
used herein, the term “inspection data points” refers to one or more values
representative of defects or faults in the device 102 during inspection of the
device 102.
[0019] The system 100 further includes a processing subsystem 112 and a data
repository 114. In one embodiment, processing subsystem 112 includes at least a
processor 118, a model generator unit 120 and an anomaly detection unit 122.
The processor 118 may be configured to perform/execute the various functions of
one or more of the model generator unit 120, the anomaly detection unit 122, and
the processing subsystem 112.
[0020] The processor 118 may include at least one arithmetic logic unit,
microprocessor, general purpose controller or other processor arrays configured
to perform computations, and/or retrieve data stored in memory and/or the data
repository 114. In one embodiment, the processor 118 may be a multiple core
processor. The processor 118 processes data signals and may include various
computing architectures including a complex instruction set computer (CISC)
architecture, a reduced instruction set computer (RISC) architecture, or an
architecture implementing a combination of instruction sets. In one embodiment,
the processing capability of the processor 118 may support the retrieval of data
and transmission of data. In another embodiment, the processing capability of
the processor 118 may also perform more complex tasks, including various types
of feature extraction, modulating, encoding, multiplexing, and the like. Use of
other types of processors, operating systems, and physical configurations is also
envisioned.
[0021] The processing subsystem 112 is operatively coupled to the device
102, the sensing devices 108, and the data repository 114. Also, the model
generator unit 120 is configured to generate the validated model, while the
anomaly detection unit 122 is configured to detect an anomaly in the device
based at least on the validated model.
[0022] Further, the processing subsystem 112 and the model generator
unit 120 in particular receives the signals representative of values of the
operational parameters, the inputs 104, and the outputs 106 from the device 102.
The model generator unit 120 generates operational data based on the signals
representative of values of the operational parameters, the inputs 104, and the
outputs 106. The operational data, for example, is a collection of a plurality of
operational data vectors at multiple time stamps. Moreover, an operational data
vector is a collection of values (for example, numerical values) of the operational
parameters. For example, the operational data of a turbomachine may include
numerical values of operational parameters such as pressure and temperature
associated with one or more components in the turbomachine, a number of
rotations per second of a rotor, and the like. For ease of understanding, one
example of the operational data represented by a matrix is shown in equation (1).
         [ODV1]   [P1  T1  RPM1]
    OD = [ODV2] = [P2  T2  RPM2]     (1)
         [ODV3]   [P3  T3  RPM3]
         [ODV4]   [P4  T4  RPM4]
[0023] In equation (1), OD is operational data, ODV1 is a first operational
data vector at a first time stamp t1, ODV2 is a second operational data vector at a
second time stamp t2, ODV3 is a third operational data vector at a third time
stamp t3, and ODV4 is a fourth operational data vector at a fourth time stamp t4.
Furthermore, P1, P2, P3, and P4 are numerical values of an operational
parameter such as pressure at the time stamps t1, t2, t3, and t4, respectively. In a
similar fashion T1, T2, T3, and T4 are numerical values of an operational parameter
such as temperature at time stamps t1, t2, t3, and t4, respectively. Moreover,
RPM1, RPM2, RPM3, and RPM4 are numerical values of an operational
parameter such as revolutions per minute at time stamps t1, t2, t3, and t4,
respectively.
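For illustration, the operational data OD of equation (1) may be sketched as a NumPy array whose rows are operational data vectors; the numerical values below are purely illustrative assumptions and are not taken from any actual device:

```python
import numpy as np

# Rows are operational data vectors ODV1..ODV4 at time stamps t1..t4;
# columns are operational parameters: pressure P, temperature T, and RPM.
OD = np.array([
    [101.3, 350.0, 3000.0],   # ODV1: P1, T1, RPM1
    [102.1, 352.5, 3010.0],   # ODV2: P2, T2, RPM2
    [100.8, 349.0, 2995.0],   # ODV3: P3, T3, RPM3
    [101.9, 351.2, 3005.0],   # ODV4: P4, T4, RPM4
])

odv2 = OD[1]           # second operational data vector (time stamp t2)
pressures = OD[:, 0]   # pressure values P1..P4 across all time stamps
```

Indexing a row yields one operational data vector, while indexing a column yields one operational parameter across all time stamps.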
[0024] Furthermore, the processing subsystem 112 receives the signals
representative of the inspection data points from the device 102 and the
observations from the user 110. The processing subsystem 112 generates
inspection data based on the signals representative of the inspection data points
and the observations, and stores the inspection data in the data repository 114. In
one example, the inspection data is a combination of the inspection data points
and the observations. For example, the inspection data of a turbomachine may
include data regarding/related to presence of a crack in a blade of a rotor, size of
the crack, and the like.
[0025] In accordance with aspects of the present specification, a
determined model is used for determining any anomaly in the device 102. The
determined model may have corresponding model coefficients. By way of
example, the determined model may correspond to overhaul requirements of the
device 102. Also, in one example, the determined model may be a model that is
generated for the first time before or after the device 102 is commissioned.
Alternatively, the determined model may be a validated model that was generated
in the past by updating a validated/determined model corresponding to overhaul
requirements of the device 102. The determined model for example may be a
statistical model formed by a combination of the model coefficients and one or
more operational parameters. One example of the determined model is shown in
equation (2).
Y = C0 + (C1 × X1) + (C2 × X2) + (C3 × X3) + (C4 × X4) + (C5 × X5)     (2)
[0026] In equation (2), Y is an output of a determined model and is
representative of an anomaly in the device. Also, C0, C1, C2, C3, C4, and C5 are
model coefficients, and X1, X2, X3, X4, and X5 are operational parameters. It
may be noted that equation (2) depicts one example of the determined model.
However, use of other models is also envisaged. It may also be noted that a
structure and complexity of a determined model of a device may substantially
differ from a structure and complexity of a determined model of another device.
Furthermore, while the example of equation (2) entails use of six model
coefficients and operational parameters, use of a higher or lower number of
operational parameters and/or model coefficients is envisioned.
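A minimal sketch of evaluating a determined model of the form of equation (2); the coefficient values are those later listed in Table 1, while the operational parameter values and the function name are illustrative assumptions:

```python
def evaluate_model(coeffs, params):
    """Y = C0 + C1*X1 + ... + C5*X5, as in equation (2)."""
    c0, rest = coeffs[0], coeffs[1:]
    return c0 + sum(c * x for c, x in zip(rest, params))

model_coeffs = [100, 400, 300, -350, -175, 125]   # C0..C5 (Table 1)
X = [0.2, 0.1, 0.3, 0.4, 0.5]                     # illustrative X1..X5
Y = evaluate_model(model_coeffs, X)
```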
[0027] The processing subsystem 112 generates a validated model based
on the model coefficients and the determined model. In particular, the processing
subsystem 112 generates the validated model such that a desired error metric
requirement is satisfied. The processing subsystem 112 further generates one or
more validated coefficients corresponding to the validated model. As used
herein, the term “error metric requirement” refers to a maximum allowable error
in achieving an output of the validated model or a minimum error difference
between an error corresponding to the validated model and an error
corresponding to the determined model. The maximum allowable error or the
minimum error difference, for example, may be specified by the user 110. Also,
the maximum allowable error, for example, may include a root mean square error
of a plurality of errors corresponding to trained model or subsequent trained
models. One example of the desired error metric requirement will be described
in greater detail with reference to Fig. 2.
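The root-mean-square error and the maximum-allowable-error comparison described above may be sketched as follows; the function names are illustrative, not taken from the specification:

```python
import math

def root_mean_square_error(predicted, inspected):
    # Root mean square of errors across the inspection data points.
    return math.sqrt(
        sum((p - i) ** 2 for p, i in zip(predicted, inspected))
        / len(inspected))

def meets_error_metric_requirement(error_metric, designated_error_metric):
    # The model satisfies the requirement when its error metric does not
    # exceed the designated (maximum allowable) error metric.
    return error_metric <= designated_error_metric
```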
[0028] In accordance with aspects of the present specification, the
processing subsystem 112 generates the validated model by iteratively generating
a trained model and one or more trained coefficients corresponding to the trained
model. It may be noted that performance of the trained model is better than or
similar to performance of the determined model. Furthermore, the processing
subsystem 112 iteratively generates the trained model by updating the model
coefficients to generate one or more updated coefficients, and subsequently
updating the determined model based on the updated coefficients. Generation of
the validated model and the iterative generation of the trained model will be
described in greater detail with reference to Figs. 2-4.
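The iterative generation of the trained model described above can be sketched as follows. This is a minimal illustration that assumes a gradient-style update of linear-model coefficients; the specification's actual update technique may instead be, for example, a Kalman filtering technique, and the function names, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

def rmse(coeffs, X, y):
    # Root mean square of errors of the model's outputs.
    return float(np.sqrt(np.mean((X @ coeffs - y) ** 2)))

def iteratively_train(model_coeffs, X_train, y_train, lr=0.05, iterations=500):
    """Iteratively generate updated coefficients, keeping an update only
    when it does not worsen the model's measurement accuracy."""
    coeffs = np.asarray(model_coeffs, dtype=float)
    for _ in range(iterations):
        # Hypothetical gradient-style update of the coefficients.
        grad = X_train.T @ (X_train @ coeffs - y_train) / len(y_train)
        updated = coeffs - lr * grad
        # Equate the updated coefficients to the current ones only when
        # accuracy is at least as good; otherwise retain the current ones.
        if rmse(updated, X_train, y_train) <= rmse(coeffs, X_train, y_train):
            coeffs = updated
    return coeffs
```

Because an update is kept only when it does not increase the error, the trained model's accuracy cannot fall below that of the starting (determined) model on the training data.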
[0029] In some embodiments, the trained model is an updated model that
is generated based on one or more updated coefficients. These updated
coefficients may be generated by updating the model coefficients when
performance of the updated model is better than or similar to performance of the
determined model. However, in certain other embodiments, the updated model is
the determined model when the performance of the determined model is better
than the performance of the updated model. The performance of the updated
model may be better than the performance of the determined model when an error
in determining/predicting an output of the updated model is less than an error in
determining/predicting an output of the determined model.
[0030] In one embodiment, the system 100 includes a memory unit 116
that is operatively coupled to the processing subsystem 112. In some
embodiments, the processing subsystem 112 may store transient information,
models, model coefficients, or combinations thereof in the memory unit 116. For
example, in the presently contemplated configuration, the processing subsystem
112 stores one or more of the trained model, the trained coefficients, the updated
model, and the updated coefficients in the memory unit 116.
[0031] Fig. 2 is a flow chart illustrating a method 200 for determining an
anomaly in a device, in accordance with aspects of the present specification. The
method 200, for example may be executed by the processing subsystem 112 of
Fig. 1. Also, the method of Fig. 2 may be described with reference to the
components of Fig. 1.
[0032] At block 202, a determined model 204 and one or more model
coefficients 206 may be received. The determined model 204 and the model
coefficients 206 may be received by the processing subsystem 112. Furthermore,
at block 208, a validated model 210 and validated coefficients 212 are generated.
The validated model 210, for example, is determined based at least on the
determined model 204, the model coefficients 206, operational data, inspection
data, and an error metric requirement. In one example, the validated model 210
is determined by iteratively generating a trained model and one or more trained
coefficients corresponding to the trained model such that the performance of the
trained model is better than the performance of the determined model 204. In
addition, the trained model is generated by updating the model coefficients 206
and the determined model 204. The trained model is iteratively generated until
the validated model 210 that meets the error metric requirement is generated.
The validated model 210 includes operational parameters of the device 102 and
validated coefficients 212. An example of generation of the validated model 210
by iterative generation of the trained model will be described in greater detail
with reference to Fig. 3A and Fig. 3B. Also, one example of generation of the
trained model will be described in greater detail with reference to Fig. 4A and
Fig. 4B.
[0033] In accordance with aspects of the present specification, one
example of the validated model 210 that is generated by updating the model
coefficients 206 of the determined model 204 of equation (2) is represented by
equation (3).
Y = C0' + (C1'*X1) + (C2'*X2) + (C3'*X3) + (C4'*X4) + (C5'*X5) (3)
[0034] In equation (3), C0', C1', C2', C3', C4', C5' are validated
coefficients. It may be noted that a structure of the validated model 210 is similar
to a structure of the determined model 204. However, one or more of the
validated coefficients 212 may be different from the model coefficients 206
corresponding to the determined model 204. One example of numerical values of
the model coefficients C0, C1, C2, C3, C4, C5 and numerical values of the
validated coefficients C0', C1', C2', C3', C4', C5' is shown in Table 1.
Table 1
Model Coefficient   Value   Validated Coefficient   Value
C0                  100     C0'                     110
C1                  400     C1'                     360
C2                  300     C2'                     285
C3                  -350    C3'                     -350
C4                  -175    C4'                     -210
C5                  125     C5'                     100
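The linear validated model of equation (3) can be evaluated directly from the coefficients in Table 1. The following sketch assumes hypothetical values for the operational parameters X1 through X5; the helper name model_output is illustrative only.

```python
# Evaluating the validated model of equation (3) with the validated
# coefficients C0' through C5' from Table 1.
VALIDATED_COEFFS = [110, 360, 285, -350, -210, 100]  # C0'..C5'

def model_output(coeffs, x):
    # Y = C0 + (C1*X1) + (C2*X2) + ... + (C5*X5)
    return coeffs[0] + sum(c * xi for c, xi in zip(coeffs[1:], x))

# Hypothetical operational parameter values X1..X5.
y = model_output(VALIDATED_COEFFS, [1.0, 2.0, 0.5, 1.5, 0.2])
print(y)
```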
[0035] Furthermore, at block 214, existence of an anomaly in the device
102 may be determined based at least on the validated model 210. In a presently
contemplated configuration, the processing subsystem 112 is used to determine
the anomaly in the device 102 based at least on the validated model 210. In one
example, when an output (for example: Y) of the validated model 210 is greater
than a desired value, then it may be determined that an anomaly exists in the
device 102.
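The threshold check of block 214 can be sketched as follows; the desired value used here is hypothetical and would in practice be chosen for the particular device.

```python
def detect_anomaly(model_output, desired_value):
    # Block 214: an anomaly is determined to exist when the output Y
    # of the validated model exceeds the desired value.
    return model_output > desired_value

# Hypothetical validated-model outputs at two time stamps.
flags = [detect_anomaly(y, desired_value=500.0) for y in (420.0, 570.0)]
print(flags)  # [False, True]
```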
[0036] Accordingly, at step 216, a check may be carried out to verify if
one or more anomalies are detected in the device 102. If the existence of one or
more anomalies in the device 102 is identified at step 216, an action may be
executed/performed to circumvent the anomaly in the device 102, as indicated by
step 218. The action, for example, may include removal of one or more
components from the device 102, resetting the device 102, repairing one or more
components of the device 102, and the like. The action, for example, may be
taken by a user or the processing subsystem 112 (see Fig. 1).
[0037] However, at step 216, if it is determined that the device 102 has no
anomalies, the processing subsystem 112 is configured to communicate the nonexistence
of any anomalies in the device 102 to a user, as indicated by step 220.
In one example, the output Y of the validated model 210 shown in equation (3)
may be characterized by a numerical value that is indicative of non-existence of
any anomaly in the device. In certain embodiments, if a presence or an absence
of the anomaly is determined in the device 102, then a user may be notified about
the same.
[0038] Fig. 3A and Fig. 3B are a flow chart illustrating a method 300 for
generating a validated model, such as the validated model 210 of Fig. 2, in
accordance with aspects of the present specification. In particular, the method of
Fig. 3A and Fig. 3B describes block 208 of Fig. 2 in greater detail. The method of
Fig. 3A and Fig. 3B may be described with reference to the components of Figs. 1-2.
[0039] The method 300, for example, may be executed by the processing
subsystem 112. At block 302, operational data 304 and inspection data 306 may
be generated. Furthermore, at block 308, a first training operational data set 310
and a first validation operational data set 312 are selected from the operational
data 304. The first training operational data set 310 and the first validation
operational data set 312 may be selected by processing the operational data 304
by random sampling. In one example, the operational data 304 may be divided
such that about eighty percent of the operational data 304 is representative of the
first training operational data set 310 and about twenty percent of the operational
data 304 is representative of the first validation operational data set 312.
Additionally, in one example, when the operational data 304 is represented by
equation (1), then the first training operational data set (TOD) 310 and the first
validation operational data set (VOD) 312 may be respectively represented by
equations (4) and (5).
TOD = [ODV1]   = [TODV1]   = [P1 T1 RPM1]
      [ODV2]     [TODV2]     [P2 T2 RPM2]
      [ODV3]     [TODV3]     [P3 T3 RPM3]   (4)

VOD = [ODV4] = [VODV1] = [P4 T4 RPM4]   (5)
[0040] In equations (4) and (5), TOD is a training operational data set and TODV1,
TODV2, and TODV3 are respectively representative of first, second, and third
training operational data vectors. Also, VOD is a validation operational data set,
and VODV1 is representative of a first validation operational data vector.
[0041] The first training operational data set 310 is a collection of a
plurality of training operational data vectors (TODVs) that is used for updating
model coefficients 316. As used herein, the phrase “training operational data
vector” refers to values such as numerical values of multiple operational
parameters that are used for updating model coefficients. The first validation
operational data set 312 is a collection of multiple validation operational data
vectors (VODVs). As used herein, the phrase “validation operational data
vector” refers to values such as numerical values of multiple operational
parameters that are used for validating trained coefficients to determine whether
the trained coefficients may be chosen as the validated coefficients. It may be
noted that one example of the first training operational data set 310 and the first
validation operational data set 312 are presented in equations (4) and (5).
However, use of other numbers and types of operational parameters is also
envisaged. By way of example, in operation, the number of TODVs, VODVs,
and operational parameters in a training operational data set and a validation
operational data set may be greater than or less than the numbers presented in
equations (4) and (5).
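The random-sampling split of block 308 can be sketched as below; the function name follows no particular library, the 80/20 ratio follows the example above, and the ODV values are hypothetical.

```python
import random

def split_operational_data(odvs, train_frac=0.8, seed=None):
    # Block 308: randomly partition the operational data vectors into a
    # training set (about 80%) and a validation set (about 20%).
    rng = random.Random(seed)
    shuffled = list(odvs)
    rng.shuffle(shuffled)
    n_train = int(round(train_frac * len(shuffled)))
    return shuffled[:n_train], shuffled[n_train:]

# Hypothetical ODVs: (P, T, RPM) tuples as in equations (4) and (5).
odvs = [(1.0, 20.0, 3000), (1.1, 21.0, 3050), (1.2, 22.0, 3100),
        (1.3, 23.0, 3150), (1.4, 24.0, 3200)]
tod, vod = split_operational_data(odvs, seed=0)
print(len(tod), len(vod))  # 4 1
```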
[0042] Reference numeral 314 is representative of a determined model
and reference numeral 316 is representative of model coefficients. At block 318,
a first trained model and one or more first trained coefficients corresponding to
the first trained model may be generated by iteratively generating an updated
model. The updated model may be generated by updating the model coefficients
316 based on the determined model 314, the model coefficients 316, the first
training operational data set 310, and the inspection data 306. The generation of
the first trained model and the first trained coefficients will be described in
greater detail with reference to Fig. 4A and 4B. The first trained model includes
the operational parameters of the device and the first trained coefficients. In one
example, the first trained model may be generated by updating the model
coefficients 316 as indicated in equation (6).
Y = C0'' + (C1''*X1) + (C2*X2) + (C3''*X3) + (C4''*X4) + (C5''*X5) (6)
[0043] In equation (6), C0'', C1'', C2, C3'', C4'', C5'' are first trained
coefficients corresponding to the first trained model. It may be noted that in the
example of equation (6), the structure of the first trained model is similar to the
structure of the determined model 314. However, one or more of the first trained
coefficients corresponding to the first trained model may differ from the model
coefficients 316 corresponding to the determined model 314. It may also be
noted that use of different structures, first trained coefficients, and operational
parameters corresponding to a first trained model is also envisioned. Also,
numerical values of one or more of the first trained coefficients corresponding to
the first trained model may be similar to one or more numerical values of the
model coefficients 316.
[0044] Furthermore, at block 320, a first error metric corresponding to the
first trained model is determined. The first error metric corresponding to the first
trained model is determined based on the first validation operational data set 312,
the inspection data 306, and the first trained model. The first error metric, for
example, may be a root mean square of errors across inspection data points in the
inspection data 306. In one example, the first error metric may be determined
using equation (7).
Etm = sqrt[((etm1)^2 + (etm2)^2 + ... + (etmN)^2)/N] (7)
[0045] In equation (7), Etm is a first error metric corresponding to the first
trained model, etm1 is a first error determined using a first inspection data point
generated at a first time stamp t1 and a first output of the first trained model, etm2 is
a second error determined using a second inspection data point generated at a
second time stamp t2 and a second output of the first trained model, etmN is an Nth
error determined using an Nth inspection data point generated at an Nth time stamp
and an Nth output of the first trained model, and N is a number of errors or a number
of validation operational data vectors used for determining the errors etm1 to etmN.
The first error etm1, the second error etm2, and the Nth error etmN may be respectively
determined using equations (8)-(10).
etm1 = Ytm1 - Yo1 (8)
etm2 = Ytm2 - Yo2 (9)
etmN = YtmN - YoN (10)
[0046] In equations (8)-(10), Ytm1 is a first output of the first trained
model, determined using a first VODV corresponding to the first time stamp t1,
Ytm2 is a second output of the first trained model determined using a second
VODV corresponding to the second time stamp t2, and YtmN is an Nth output of the
first trained model determined using an Nth VODV corresponding to the Nth time
stamp tN. Furthermore, Yo1 is a first inspection data point corresponding to the
first time stamp t1, Yo2 is a second inspection data point corresponding to the
second time stamp t2, and YoN is an Nth inspection data point corresponding to the
Nth time stamp tN.
[0047] Moreover, the first output Ytm1 may be determined by substituting
the first VODV in the first trained model (for example, equation (6)), and solving
the first trained model. Similarly, the second output Ytm2 may be determined by
substituting the second VODV in the first trained model, and solving the first
trained model. Also, the Nth output YtmN may be determined by substituting the Nth
VODV in the first trained model, and solving the first trained model.
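The root-mean-square error metric of equations (7)-(10) can be sketched as follows; the model outputs and inspection data points below are hypothetical.

```python
from math import sqrt

def error_metric(model_outputs, inspection_points):
    # Equations (8)-(10): errors e_i = Ytm_i - Yo_i at each time stamp,
    # combined per equation (7) as a root mean square across the N
    # inspection data points.
    errors = [y_tm - y_o
              for y_tm, y_o in zip(model_outputs, inspection_points)]
    return sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical first trained model outputs and inspection data points
# at time stamps t1..t3.
e_tm = error_metric([10.0, 12.0, 9.0], [11.0, 12.0, 11.0])
```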
[0048] Additionally, in certain embodiments, at block 320, the first error
metric corresponding to the determined model 314 may be determined based on
the first validation operational data set 312, the inspection data 306, and the
determined model 314. The first error metric, for example, may be a root mean
square of errors across the inspection data points in the inspection data 306. For
example, the first error metric corresponding to the determined model may be
determined using equation (11).
Epm = sqrt[((epm1)^2 + (epm2)^2 + ... + (epmN)^2)/N] (11)
[0049] In equation (11), Epm is a first error metric corresponding to a
determined model, epm1 is a first determined model error determined using a first
inspection data point generated at the first time stamp t1 and a first output of the
determined model, epm2 is a second determined model error determined using a
second inspection data point generated at the second time stamp t2 and a second
output of the determined model, epmN is an Nth determined model error
determined using an Nth inspection data point generated at an Nth time stamp and
an Nth output of the determined model, and N is a number of errors or a number of
validation operational data vectors used for determining the errors epm1 to epmN.
The first error epm1, the second error epm2, and the Nth error epmN may be
determined using equations (12)-(14).
epm1 = Ypm1 - Yo1 (12)
epm2 = Ypm2 - Yo2 (13)
epmN = YpmN - YoN (14)
[0050] In equations (12)-(14), Ypm1 is the first output of the determined
model 314, determined using the first validation operational data vector
corresponding to the first time stamp t1, Ypm2 is the second output of the
determined model 314, determined using the second validation operational data
vector corresponding to the second time stamp t2, and YpmN is the Nth output of
the determined model 314 determined using the Nth validation operational data
vector corresponding to the Nth time stamp tN. Furthermore, Yo1 is the first
inspection data point corresponding to the first time stamp t1, Yo2 is the second
inspection data point corresponding to the second time stamp t2, and YoN is the
Nth inspection data point corresponding to the Nth time stamp tN.
[0051] The first output Ypm1 may be determined by substituting the first
validation operational data vector in the determined model 314 (for example,
equation (2)), and solving the determined model 314. Similarly, the second
output Ypm2 may be determined by substituting the second validation operational
data vector in the determined model 314 and solving the determined model 314.
Again, the Nth output YpmN may be determined by substituting the Nth validation
operational data vector in the determined model 314, and solving the determined
model 314.
[0052] Subsequently, at block 322, a check is carried out to determine
whether the first trained model generated at block 318 satisfies an error metric
requirement 321. The check, for example, may entail comparing the first error
metric corresponding to the first trained model, determined at block 320, to a
designated error metric. As used herein, the term “designated error metric” refers
to a threshold value that may be compared to an error metric of a trained model.
In this example, at step 322, if the error metric of the trained model does not
exceed the designated error metric, the trained model is chosen as a validated
model. Furthermore, when the first error metric does not exceed the designated
error metric, it may be determined that the first trained model meets the error
metric requirement 321. However, if the error metric of the trained model
exceeds the designated error metric, the trained model is not chosen as the
validated model. It may be noted that when the first error metric exceeds the
designated error metric, it may be determined that the first trained model fails to
meet the error metric requirement 321. Execution of the check at block 322
ensures that a measurement accuracy of the validated model 334 is greater than or
equal to a measurement accuracy of the determined model 314.
[0053] In an alternative embodiment, at step 322, the check may be
carried out by determining a difference (hereinafter referred to as “error metric
difference”) between the first error metric (Epm) corresponding to the determined
model 314 and the first error metric (Etm) corresponding to the first trained
model. Subsequently, the error metric difference may be compared to a
minimum error metric difference (interchangeably referred to as “determined
error metric difference”) to determine if the trained model meets the error metric
requirement. As used herein, the term “minimum error metric difference” refers
to a minimum desirable difference between an error metric corresponding to a
determined model and an error metric corresponding to a trained model for
selecting the trained model as a validated model. The minimum error metric
difference may be specified by a user before or after commissioning of the
device. Equation (15) represents one example of the check at step 322.
(Epm - Etm) > MEMD (15)
wherein Epm is an error metric corresponding to the determined model 314, Etm is
a first error metric corresponding to the first trained model, and MEMD is the
minimum error metric difference.
[0054] It may be noted that when the error metric difference (Epm - Etm)
exceeds the minimum error metric difference, it may be determined that the first
trained model satisfies the error metric requirement 321. Furthermore, when the
error metric difference (Epm - Etm) is less than the minimum error metric
difference, it may be determined that the first trained model fails to satisfy the
error metric requirement 321.
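The check of equation (15) can be sketched as follows, reusing the Epm, Etm, and MEMD notation above; the numerical values are hypothetical.

```python
def meets_error_metric_requirement(e_pm, e_tm, memd):
    # Equation (15): the trained model satisfies the error metric
    # requirement when the error metric difference (Epm - Etm) exceeds
    # the minimum error metric difference (MEMD).
    return (e_pm - e_tm) > memd

print(meets_error_metric_requirement(e_pm=0.9, e_tm=0.5, memd=0.3))  # True
print(meets_error_metric_requirement(e_pm=0.9, e_tm=0.7, memd=0.3))  # False
```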
[0055] Also, at block 322, if the first trained model satisfies the error
metric requirement 321, control is transferred to block 324. At block 324, the
first trained model is selected as a validated model 334 and the first trained
coefficients are selected as the validated coefficients 336. With returning
reference to block 322, if the first trained model fails to satisfy the error metric
requirement 321, control is transferred to block 326. At block 326, the first
trained model, the first trained coefficients and the respective first error metric
are stored in a list 328.
[0056] Subsequently, at block 330, a check is carried out to determine
whether a number of iterations of the method 300 exceeds a determined number of
iterations. It may be noted that the determined number of iterations may be
preset for the device or may be chosen by a user. Furthermore, at block 330, if
the number of iterations of the method 300 is not greater than the determined
number of iterations, control is transferred to block 332 where a subsequent
training operational data set and a subsequent validation operational data set are
selected from the operational data 304. The subsequent training operational data
set and the subsequent validation operational data set are subsets of the
operational data 304. The subsequent training operational data set and the
subsequent validation operational data set, for example, may be selected from the
operational data 304 in a manner that is substantially similar to the selection of
the first training operational data set 310 and the first validation operational data
set 312.
[0057] Moreover, control is transferred to block 318 where a subsequent
trained model and one or more subsequent trained coefficients are generated.
More specifically, the subsequent trained model and the one or more subsequent
trained coefficients are generated by iteratively generating an updated model.
Generating the updated model in turn entails
updating the model coefficients 316 based on the determined model 314, the
model coefficients 316, the subsequent training operational data selected at block
332, and the inspection data 306. The subsequent trained model includes the
operational parameters of the determined model 314 and the subsequent trained
coefficients that correspond to the determined model 314. In certain
embodiments, the subsequent trained model is similar in structure to the first
trained model and the determined model 314. However, one or more of the
subsequent trained coefficients may be different from the model coefficients 316
and the first trained coefficients. The subsequent trained model and the
subsequent trained coefficients may be determined in a manner that is
substantially similar to the determination of the first trained model and the first
trained coefficients. The generation of the subsequent trained model and the
subsequent trained coefficients will be described in greater detail with reference
to Fig. 4A and Fig. 4B.
[0058] Following the generation of the subsequent trained model, a
subsequent error metric corresponding to the subsequent trained model is
determined, as indicated by step 320. The subsequent error metric corresponding
to the subsequent trained model, for example, may be determined based on the
subsequent validation operational data, the inspection data 306, and the
subsequent trained model. The subsequent error metric, for example, may be a
root mean square of errors across the inspection data 306, and may be determined
using equations (7)-(10). In certain embodiments, depending on the error metric
requirement 321, at block 320, a subsequent error metric corresponding to the
determined model 314 may be determined based on a subsequent validation
operational data set, the inspection data 306, and the determined model 314. The
subsequent error metric, for example, may be determined by using equations
(11)-(14).
[0059] Furthermore, at block 322, a check is carried out to determine
whether the subsequent trained model satisfies the error metric requirement 321.
The check to determine whether the subsequent trained model satisfies the error
metric requirement 321, for example, may be carried out in a manner that is
substantially similar to the check of the error metric requirement 321 associated
with the first trained model. At block 322, if the subsequent trained model fails
to satisfy the error metric requirement 321, control is transferred to block 326
where the subsequent trained model, the corresponding subsequent trained
coefficients, and the corresponding subsequent error metric are stored in the list
328. Subsequently, at block 330, a check is carried out to determine whether the
number of iterations of the method 300 is greater than the determined number of
iterations. If the number of iterations is not greater than the determined number
of iterations, then the control is transferred to block 332.
[0060] At block 332, a subsequent training operational data set and a
subsequent validation operational data set are selected from the operational data
304. The subsequent training operational data set and the subsequent validation
operational data set may be selected using random sampling. Furthermore, a
subsequent trained model and one or more subsequent trained coefficients may be
generated. Accordingly, the method 300 iteratively generates subsequent trained
models, one or more corresponding subsequent trained coefficients and a
corresponding subsequent error metric until either the subsequent trained model
satisfies the error metric requirement 321 or a number of iterations for generating
the subsequent trained model exceeds the determined number of iterations.
[0061] With returning reference to block 322, if it is determined that the
subsequent/first trained model satisfies the error metric requirement 321, the
control is transferred to block 324, where the subsequent/first trained model is
selected as the validated model 334 and the trained coefficients are selected as
the validated coefficients 336. Since only a subsequent/first trained model that
satisfies the error metric requirement 321 is selected as the validated model 334,
the validated model 334 also meets the error metric requirement 321.
[0062] Referring again to block 322, if the subsequent/first trained model
fails to satisfy the error metric requirement 321, at block 326, the subsequent/first
trained model and the error metric corresponding to the subsequent/first trained
model are stored in the list 328. This may be followed by a check at block 330 to
determine whether the number of iterations exceeds the determined number of
iterations. For ease of understanding, the first/subsequent trained model stored in
the list 328 are hereinafter referred to as “listed trained models.” Similarly, the
first/subsequent trained coefficients that correspond to the listed trained models
are hereinafter referred to as “corresponding listed trained coefficients.” Also,
the first/subsequent error metric that corresponds to the listed trained models are
hereinafter referred to as “listed error metrics.”
[0063] At block 330, if the number of iterations is greater than the
determined number of iterations, then at block 338 one of the listed trained
models that most closely meets the error metric requirement 321 or corresponds
to a listed error metric that is less than the rest of the corresponding listed error
metrics is selected as the validated model 334. Furthermore, listed trained
coefficients corresponding to the selected listed trained model are selected as the
validated coefficients 336. Table 2 depicts one example of the list 328. In
particular, the list 328 depicted in Table 2 includes a list of trained models and
their corresponding trained coefficients.
Table 2
Trained Model Number   Listed Trained Models   Listed Trained Coefficients   Listed Error Metric (Root Mean Square Error)
1                      M1                      C1, C2, C3                    0.5
2                      M2                      C1, C2', C3''                 0.7
3                      M3                      C1, C2, C3'                   0.7
[0064] In the example of the list 328 presented in Table 2, the listed error
metric corresponding to the model M1 has the lowest value. Hence, the listed
trained model M1 may be selected as the validated model 334, and the listed
trained coefficients C1, C2, C3 may be selected as the validated coefficients 336.
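The selection at block 338 can be sketched with the list 328 of Table 2; the tuple layout used below is illustrative only.

```python
def select_validated_model(listed):
    # Block 338: choose the listed trained model whose listed error
    # metric is lowest; its listed trained coefficients become the
    # validated coefficients.
    return min(listed, key=lambda entry: entry[2])

# The list 328 of Table 2 as (model, coefficients, error metric) tuples.
listed = [("M1", ("C1", "C2", "C3"), 0.5),
          ("M2", ("C1", "C2'", "C3''"), 0.7),
          ("M3", ("C1", "C2", "C3'"), 0.7)]
model, coeffs, metric = select_validated_model(listed)
print(model)  # M1
```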
[0065] Fig. 4A and Fig. 4B are a flow chart illustrating a method 400 for
generating a trained model 430 and trained coefficients 432, in accordance with
aspects of the present specification. In one example, the trained model 430
includes the subsequent/first trained model referred to with respect to Fig. 3A and
Fig. 3B, and the trained coefficients include the subsequent/first trained
coefficients referred to with respect to Fig. 3A and Fig. 3B. The trained model
430 includes the trained model referred to with respect to Fig. 1, and the trained
coefficients 432 include the trained coefficients referred to with respect to Fig. 1.
In one embodiment, Fig. 4A and Fig. 4B describe block 318 of Fig. 3A in greater
detail.
[0066] At block 402, an initial model and one or more initial coefficients
are initialized. The initial model and the initial coefficients, for example, may be
initialized by setting the initial model and the initial coefficients to zero.
One example of the initialization of the initial model is presented in equation
(16).
I = 0 (16)
[0067] Furthermore, at block 408, a determined model 404 is equated to
the initial model, and model coefficients 406 are equated to the initial
coefficients. By way of example, if the determined model 404 is represented by
equation (2) and the initial model is represented by equation (16), then equating
the determined model 404 to the initial model of block 408 may be represented
by equation (17).
I = C0 + (C1*X1) + (C2*X2) + (C3*X3) + (C4*X4) + (C5*X5) (17)
[0068] Reference numeral 410 is representative of a training operational
data set. The training operational data set 410 may be the first training
operational data set 310 of Fig. 3A, or the subsequent training operational data set
selected at block 332 of Fig. 3B. The training operational data set 410, for
example, is a subset of the operational data. As previously noted with reference
to Fig. 1, the training operational data set 410 is a collection of multiple training
operational data vectors generated at multiple time stamps. In one example, the
training operational data set 410 may be represented by the matrix of equation
(4).
[0069] At block 412, a first training operational data vector may be
selected from the training operational data set 410. For example, when the
training operational data set 410 is represented by a matrix, then a column or a row
of the matrix that represents a training operational data vector may be chosen as
the first training operational data vector from the matrix. Subsequently at block
414, updated coefficients may be generated by updating the initial coefficients of
the initial model based upon the first training operational data vector, the initial
model and a determined technique. The determined technique, for example,
includes a Kalman Filtering technique, an Extended Kalman Filtering technique,
an iterative Extended Kalman Filtering Technique, an Unscented Kalman
Filtering technique, and the like.
[0070] Moreover, at block 416, an updated model may be generated
based on the updated coefficients and the determined model 404. The updated
model, for example, is generated by substituting the updated coefficients in place
of the model coefficients 406 in the determined model 404. For example, if the
determined model 404 is represented by equation (2), and the updated
coefficients generated at block 414 are C0, C1' , C2, C3' , C4, C5, then the
updated model may be represented by equation (18) as:
Yud = C0 + (C1'*X1) + (C2*X2) + (C3'*X3) + (C4*X4) + (C5*X5) (18)
wherein Yud is an updated model.
[0071] Reference numeral 418 is representative of inspection data. At
block 420, an inspection data point may be selected from the inspection data 418
corresponding to the training operational data vector. The selected inspection
data point, for example, corresponds to the training operational data vector when
both the training operational data vector and the inspection data point are
generated at the same time stamp. In certain embodiments, an inspection data
point generated at a single time stamp may correspond to multiple training
operational data vectors generated during a determined time period. For
example, an inspection data point such as ‘crack length’ determined at a time
stamp t may correspond to training operational data vectors generated for 3 hours
including the time stamp t.
[0072] Furthermore, at block 422, a check may be carried out to
determine whether performance of the updated model is better than performance
of the initial model. In one example, it may be determined that the performance
of the updated model is better than the performance of the initial model when an
error corresponding to the updated model is less than an error corresponding to
the initial model. The check, for example, may be carried out by comparing the
error corresponding to the initial model to the error corresponding to the updated
model. The error corresponding to the initial model and the error corresponding
to the updated model may be determined using equations (19) and (20).
Furthermore, the check may be carried out using equations (21) and (22).
et(initial) = Yt(initial) - Yot(initial) (19)
et(updated) = Yt(updated) - Yot(updated) (20)
[0073] It may be noted that if et(updated) < et(initial), then the performance of
the updated model is better than that of the initial model. However, if et(updated) >
et(initial), then the performance of the initial model is better than that of the
updated model. Further, et(initial) is an error corresponding to the initial model
determined using an output of the initial model and an inspection data point
corresponding to a time stamp t, and Yt(initial) is the output (for example, an
anomaly in a device) of the initial model determined using a training operational
data vector generated at the time stamp t. Also, Yot(initial) is an inspection data
point corresponding to the time stamp t, and et(updated) is an error corresponding
to the updated model determined using an output of the updated model and an
inspection data point corresponding to the time stamp t. Moreover, Yt(updated) is an
output (for example, an anomaly in a device) of the updated model determined
using the training operational data vector generated at the time stamp t. In
addition, Yot(updated) is an inspection data point corresponding to the time stamp t.
[0074] At block 422, if it is determined that the performance of the
updated model is better than the performance of the initial model, then control is
transferred to block 424, where the updated model is equated to the initial model
and the updated coefficients are equated to the initial coefficients. Subsequent to
execution of block 424, control is transferred to block 426.
[0075] Returning to block 422, if it is determined that the
performance of the updated model is not better than the performance of the initial
model, then control is transferred to block 426 without updating the initial model
or the initial coefficients. Further, at block 426, a further check may be carried
out to determine whether each training operational data vector has been used to
generate a corresponding updated model. At block 426, if it is determined that
each training operational data vector has been used to generate the corresponding
updated model, then control is transferred to block 428, where the initial model is
selected as a trained model 430 and the initial coefficients are selected as trained
coefficients 432 corresponding to the trained model 430. Returning to block 426,
if it is determined that each training operational data vector in the training
operational data set 410 has not been used to generate the corresponding updated
model, then control is transferred to block 412.
[0076] At block 412, a subsequent training operational data vector is
selected from the training operational data set 410. Subsequently, at block 414,
subsequent updated coefficients are generated by updating the initial coefficients
of the initial model based on the subsequent training operational data vector and
the determined technique. Furthermore, at block 416, a subsequent updated
model is generated by substituting the subsequent updated coefficients in the
determined model 404. Also, at block 420, a subsequent inspection data point corresponding
to the subsequent training operational data vector is selected from the inspection
data 418. Moreover, at block 422, a performance of the subsequent updated
model is compared to the performance of the initial model to determine whether
the subsequent updated model needs to be equated to the initial model at block
424. Subsequent to the execution of block 422, blocks 424-428 are executed.
[0077] Accordingly, the present method 400 iteratively generates the
updated model and the updated coefficients by updating the initial coefficients of
the initial model based on training operational data vectors selected from the
training operational data set 410. The method 400 updates the initial coefficients
of the initial model until each training operational data vector in the training
operational data set 410 has been used for generation of the corresponding
updated model. The present method 400 selects the initial model as the trained
model, and the initial coefficients as the trained coefficients, when each of the
training operational data vectors has been used for generating the corresponding
updated model and the corresponding updated coefficients.
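The iterative procedure of blocks 412-428 can be summarized in a short sketch. This is an illustrative fragment only, assuming scalar model outputs and treating both the determined model and the coefficient-update technique as caller-supplied callables; all names are hypothetical, not from the specification.

```python
def iteratively_train(determined_model, initial_coeffs,
                      training_vectors, inspection_points, update_technique):
    """Blocks 412-428: for each training operational data vector, generate
    updated coefficients, form the updated model, and retain the update
    only when its error is smaller than the current model's error."""
    coeffs = list(initial_coeffs)
    for x_t, y_obs in zip(training_vectors, inspection_points):
        candidate = update_technique(coeffs, x_t)                # blocks 412-416
        e_initial = abs(determined_model(coeffs, x_t) - y_obs)   # block 422 check
        e_updated = abs(determined_model(candidate, x_t) - y_obs)
        if e_updated < e_initial:
            coeffs = candidate                                   # block 424
    return coeffs  # trained coefficients (blocks 428-432)
```

For example, with a linear model `lambda c, x: c[0] * x` and an update technique that nudges the coefficient, the loop keeps only those updates that reduce the per-vector error, so the retained model never performs worse than its predecessor on the vector being examined.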
[0078] Turning now to Fig. 5, a flow chart illustrating a method 500 for
generating one or more model coefficients corresponding to a determined model,
in accordance with aspects of the present specification, is presented. At block
502, a plurality of training operational data sets, a plurality of corresponding
inspection data points, and a plurality of designated vector coefficients may be
generated. Each training operational data set, for example, may be generated by
selecting a subset of operational data corresponding to a device, such as the
device 102 of Fig. 1. Similarly, the corresponding inspection data points may be
generated by a user or by one or more sensing devices that are operatively coupled to
the device. Additionally, the designated vector coefficients may be assigned by a
user based on field and technical knowledge.
[0079] Subsequently, at block 504, an inspection output may be
determined iteratively by updating a determined model 510. The inspection
output, for example, may be determined based on the training operational data
sets and the designated vector coefficients. In one embodiment, the inspection
output is iteratively determined until all or a maximum amount of the data in the
training operational data sets has been used. In one embodiment, the inspection
output may be generated by applying a Kalman filter technique to the determined
model 510, the training operational data sets, and the designated vector coefficients.
[0080] Furthermore, at block 506, errors corresponding to the designated
vector coefficients may be determined. For example, the errors corresponding to
the designated vector coefficients may be determined based on the determined
inspection output and the corresponding inspection data points. In one
embodiment, the errors may be determined by subtracting the determined
inspection outputs from the corresponding inspection data points.
[0081] Moreover, at block 508, model coefficients may be selected from
the designated vector coefficients. Particularly, the model coefficients may be
selected based on the errors determined at block 506. For example, a designated
vector coefficient that corresponds to a minimum error may be selected as a
model coefficient. Reference numeral 512 is representative of the model
coefficients.
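The selection of blocks 506-508 can be sketched as follows. This is an illustrative fragment only, assuming one scalar inspection output per candidate coefficient vector and selection by minimum error magnitude; the names are hypothetical, not from the specification.

```python
def select_model_coefficients(candidate_coeffs, inspection_outputs,
                              inspection_points):
    """Block 506: compute errors as inspection data points minus
    inspection outputs; block 508: select the candidate coefficient
    vector with the minimum error magnitude as the model coefficients."""
    errors = [abs(point - output)
              for output, point in zip(inspection_outputs, inspection_points)]
    best_index = min(range(len(errors)), key=errors.__getitem__)
    return candidate_coeffs[best_index]
```

For instance, given three candidate coefficient vectors whose inspection outputs deviate from the inspection data points by 0.1, 0.2, and 0.5 respectively, the first candidate would be selected.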
[0082] The systems and methods presented hereinabove provide
enhanced detection of one or more anomalies in a device. More particularly, the
systems and methods entail updating a determined model to generate a validated
model such that a performance of the validated model is better than or similar to a
performance of the determined model. Consequently, the present systems and
methods ensure that the performance of the validated model is not worse than that
of the determined model. In particular, the performance of the validated model is
determined to be better than the performance of the determined model when the
measurement accuracy and/or prediction accuracy of the validated model is better
than the measurement accuracy and/or prediction accuracy of the determined
model.
[0083] While only certain features of the invention have been illustrated
and described herein, many modifications and changes will occur to those skilled
in the art. It is, therefore, to be understood that the appended claims are intended
to cover all such modifications and changes as fall within the true spirit of the
invention.