
Robust Filtering And Prediction Using Switching Models For Machine Condition Monitoring

Abstract: In a machine condition monitoring technique, a sensor reading is filtered using a switching Kalman filter. Kalman filters are created to describe separate modes of the signal, including a steady mode and a non-steady mode. For each new observation of the signal, a new mode is estimated based on the previous mode and state, and a new state is then estimated based on the new mode and the previous mode and state. In the steady mode, evolution covariances of both the observed signal and the rate of change of that signal are low. In the non-steady mode, the evolution covariance of the observed signal is set to a higher value, permitting the observed signal to vary widely, while the evolution covariance of the rate of change of the signal is maintained at a low level.


Patent Information

Application #
Filing Date
20 May 2011
Publication Number
43/2012
Publication Type
INA
Invention Field
ELECTRICAL
Status
Parent Application

Applicants

SIEMENS CORPORATION
170 WOOD AVENUE SOUTH, ISELIN, NJ 08830, U.S.A.

Inventors

1. YUAN, CHAO
36 MARION DRIVE, PLAINSBORO, NJ 08536, U.S.A.

Specification

ROBUST FILTERING AND PREDICTION USING SWITCHING MODELS FOR
MACHINE CONDITION MONITORING
Claim of Priority
[0001] This application claims priority to, and incorporates by reference herein in its entirety, pending United States Provisional Patent Application Serial Number 61/106,701, filed October 20, 2008, and entitled "Robust Filtering and Prediction Using Switching Models for Machine Condition Monitoring."
Field of the Disclosure
[0002] The present invention relates generally to machine condition monitoring for
the purpose of factory automation. More specifically, the invention relates to techniques for
filtering observed values in order to estimate a true signal value without noise.
Background
[0003] The task of machine condition monitoring is to detect faults as early as
possible to avoid further damage to a machine. That is usually done by analyzing data from a
set of sensors, installed on different parts of a machine, for measuring temperature, pressure,
vibrations, etc. When a machine works normally, the sensor data is located within a normal
operating region. When the sensor data deviates much from that region, a fault may have
occurred and an alarm should be made.
[0004] Sensor signals are often contaminated by noise. Removing noise and
recovering the underlying true signals are fundamental tasks for machine condition
monitoring. The present disclosure focuses on removing noise from a single sensor signal,
but the proposed methodology is applicable to multiple signals.


[0005] In an exemplary system, there are t observed values y1,y2,..., yt, from time
stamp 1 to time stamp t for a sensor. Since yt is often corrupted by noise, it is of interest to
estimate the true signal zt without noise. Example observed and true values for a sensor are
shown in the graph 200 of FIG. 2, where the horizontal axis indicates time (with day as unit)
and the vertical axis indicates sensor value. The curve 210 denotes the observed signals yt,
and the curve 220 denotes the noise-free signal zt.
[0006] Once the true signal zt is uncovered, fault detection may be performed using methods as simple as threshold-based rules. For example, if the example sensor is a pressure sensor and the pressure should never be larger than 0.5, the rule can be "IF zt > 0.5, THEN this is a failure." Using that rule, the sensor represented in FIG. 2 yields an alarm at about t = 230.
[0007] That system may be improved, however, by detecting the upward trend earlier. For example, if the slope of zt at t = 215 can be correctly estimated, then it is possible to predict at t = 215 that zt will hit 0.5 at t = 230. That technique produces an alarm 15 days earlier, which is a big benefit. The system therefore should estimate not only zt, but also its derivatives such as velocity and acceleration (and so on). Those quantities are denoted as żt and z̈t, respectively. A vector xt = (zt, żt, z̈t)T is used to denote the above quantities of the true signal. To create predictive alarms as described above, the vector xt must be estimated.
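For illustration, the crossing time can be extrapolated directly from the estimated value and velocity. The following minimal Python sketch uses hypothetical estimates (the numeric values are chosen only to match the 15-day example above, not taken from FIG. 2):

    # Hypothetical estimates at t = 215 (illustrative values only)
    z_t = 0.20        # estimated true signal
    z_dot_t = 0.02    # estimated velocity, units per day
    threshold = 0.5   # process limit from the threshold rule

    # Assuming the velocity stays constant, the limit is reached after:
    days_to_limit = (threshold - z_t) / z_dot_t
    print(days_to_limit)  # 15.0 -> predicted alarm at about t = 230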
[0008] A filtering problem is defined as: given a series of observations y1, y2, ..., yt, the state xt is estimated. A widely used filtering algorithm is the Kalman filter model, which is formulated as follows:
[0009] xt = A xt-1 + vt,
[0010] yt = C xt + wt.          (1)
[0011] The first equation in (1) is a state evolution model. It specifies a linear relation between the current state xt and the previous state xt-1. The evolution matrix A is defined as
[0012]
            [ 1   Δt   Δt²/2 ]
        A = [ 0   1    Δt    ]
            [ 0   0    1     ]
[0013] where Δt is the time difference between adjacent data points; in this case Δt = 1 day. Note that Δt can vary if signals are sampled with a varying interval. This linear relation follows simple physics about displacement, velocity and acceleration; for example, zt = zt-1 + żt-1·Δt + z̈t-1·Δt²/2. vt is the evolution error vector and has a Gaussian distribution with zero mean and diagonal covariance Q = diag([q1 q2 q3]). diag() is an operator transforming a vector into a diagonal matrix. q1, q2 and q3 correspond to the variances for zt, żt and z̈t, respectively.
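A minimal NumPy sketch of those definitions, assuming Δt = 1 and purely illustrative variance values:

    import numpy as np

    dt = 1.0  # time difference between adjacent samples (1 day here)

    # Evolution matrix A for the state xt = (z, z_dot, z_ddot)^T
    A = np.array([[1.0, dt, 0.5 * dt ** 2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])

    # Diagonal evolution covariance Q = diag([q1 q2 q3]) (illustrative values)
    q1, q2, q3 = 1e-5, 1e-5, 1e-5
    Q = np.diag([q1, q2, q3])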
[0014] The second equation in (1) is called the observation model. It relates the state
xt with the observation yt. The observation matrix C = [1 0 0], because only zt is observable.
The observation noise wt has a Gaussian distribution with zero mean and variance r. Both
evolution covariance Q and observation noise variance r are parameters for this Kalman
filtering model and can be either specified or learned from training data.
[0015] Kalman filtering proceeds in an iterative fashion. Once a new observation yt is available, the filter updates its estimate or conditional probability of xt given y1:t. This can be proved to be a single Gaussian distribution. Once the estimate for xt is obtained, a prediction is performed for a given future time. By
doing this, the true value of the sensor may be predicted at a future time stamp based on the
information available only at the present time.
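One iteration of that recursion can be sketched in Python as follows; this is a standard Kalman predict/update step for a scalar observation, not the author's exact implementation:

    import numpy as np

    def kalman_step(mean, cov, y, A, Q, c, r):
        """One Kalman iteration: predict xt from xt-1, then correct with yt."""
        # Predict step (state evolution model xt = A xt-1 + vt)
        mean_pred = A @ mean
        cov_pred = A @ cov @ A.T + Q
        # Update step (observation model yt = c . xt + wt, scalar observation)
        innovation = y - c @ mean_pred
        s = c @ cov_pred @ c + r          # innovation variance
        k = cov_pred @ c / s              # Kalman gain
        mean_new = mean_pred + k * innovation
        cov_new = cov_pred - np.outer(k, c) @ cov_pred
        return mean_new, cov_new

    # Example usage with c = [1 0 0] (only zt observed) and hypothetical r
    c = np.array([1.0, 0.0, 0.0])
    mean, cov = np.zeros(3), np.eye(3)
    A = np.eye(3); A[0, 1] = A[1, 2] = 1.0; A[0, 2] = 0.5
    Q = np.diag([1e-5, 1e-5, 1e-5])
    mean, cov = kalman_step(mean, cov, y=0.05, A=A, Q=Q, c=c, r=0.1)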
[0016] A filtering model such as the Kalman filtering model is widely used in many
industrial applications with great success. In many real systems, however, the evolution of

sensor signals is too complex to be described by a single filtering model. For example, in a
non-steady mode of a machine, spikes or abrupt step changes are often observed in the sensor
values. One example of a sensor output of such a system 500 is shown in FIG. 5. During
spikes such as those at 521, 522, 523 and steps such as steps 511, 512, the true sensor signal
changes dramatically in a short period, imposing a great challenge to the filtering algorithm.
[0017] There is therefore presently a need for an improved technique to filter sensor
outputs in machine monitoring systems, for use in fault detection and predictive maintenance.
Summary of the Invention
[0018] In the present disclosure, multiple filtering models are used to model sensor
signals. A mode variable is introduced to indicate the operating mode of a machine.
Different models are applied for different modes of a machine. For example, the traditional
Kalman filtering model is used for the steady mode, but another Kalman filtering model is
used for the non-steady mode. Switching models, in particular the switching Kalman filtering
technique, are applied to perform filtering and prediction. The switching model can
automatically determine the mode and apply the corresponding model for the mode.
[0019] One embodiment of the invention is a method for filtering a signal from a
sensor in a machine monitoring system using a switching Kalman filter. The signal has at
least a steady mode wherein a mode variable st has a first value and a non-steady mode wherein st has a second value.
[0020] At a machine monitoring computer, a new observation yt of the signal is received. An estimate of a current mode st of the signal is computed based on the new observation yt, a previous mode st-1 and a previous state xt-1 of the signal, the previous state xt-1 comprising values for at least a previous true signal zt-1 and a first derivative żt-1 of the previous true signal.

[0021] An estimate of a current state xt of the signal is computed based on the estimate of the current mode st, the previous mode st-1 and the previous state xt-1. The previous mode st-1 is then set equal to the estimate of the current mode st, and the previous state xt-1 is set equal to the estimate of the current state xt. The steps are then repeated.
[0022] The step of computing an estimate of a current state xt of the signal may include selecting a Kalman filter to compute the estimate of the current state xt. The Kalman filter is then selected from at least a first Kalman filter for use when the current mode st is the steady mode, and a second Kalman filter for use when the current mode st is the non-steady mode.
[0023] The first and second Kalman filters may include evolution error vectors vt having diagonal covariances containing variances for at least a signal zt and a first derivative żt of that signal, wherein the variance for zt in the second Kalman filter is at least ten times the variance for zt in the first Kalman filter. In another embodiment, the variance for zt in the second Kalman filter is greater than the variance for żt in the second Kalman filter and greater than the variances for zt and żt in the first Kalman filter.
[0024] The first and second Kalman filters may differ only by an evolution covariance matrix. The variances for the signal zt and the first derivative żt of that signal may be learned from training data. At least one of the first and second Kalman filters may include an observation noise variance learned from training data.
[0025] The previous state xt-1 of the signal may further comprise a value for a second derivative z̈t-1 of the previous true signal.
[0026] The previous mode st-1 and the previous state xt-1 of the signal may be maintained as a Gaussian mixture model.


[0027] The step of computing an estimate of a current mode st may include implementing a probability of 0.9 that the current mode st is equal to the previous mode st-1 and a probability of 0.1 that the current mode st is not equal to the previous mode st-1. In an initial execution of the steps, the mode may be the steady mode wherein the mode variable st = 1.
[0028] The method may further include the steps of predicting a future true signal based on the estimate of a current state xt of the signal; and producing an alarm if the predicted future signal is outside a set of process limits.
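A minimal sketch of that predict-and-alarm step, assuming a constant-velocity extrapolation and hypothetical process limits:

    def predict_and_check(x_hat, horizon, lower=-0.5, upper=0.5):
        """Extrapolate the true signal 'horizon' time steps ahead from the point
        estimate x_hat = (z, z_dot, ...) and flag it if it leaves the limits.
        The limit values here are hypothetical."""
        z_future = x_hat[0] + x_hat[1] * horizon
        return z_future < lower or z_future > upper

    # e.g. predict_and_check((0.2, 0.02), horizon=20) -> True (alarm)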
[0029] Another embodiment of the invention is a computer-usable medium having
computer readable instructions stored thereon for execution by a processor to perform
methods as described above.
Brief Description of the Drawings
[0030] FIG. 1 is a schematic view showing a system according to the present
disclosure.
[0031] FIG. 2 is a graph showing observed and true values for an example sensor in a
machine monitoring system.
[0032] FIG. 3 is a graphical model of a switching Kalman filter according to the
present disclosure.
[0033] FIG. 4 is a work flow chart showing a method according to the present
disclosure.
[0034] FIG. 5 is a graph showing a true signal from a sensor over time for use in an
example implementation of the system according to the present disclosure.
[0035] FIG. 6 is a graph showing an estimated true signal from a sensor using Kalman
filtering.


[0036] FIG. 7 is a graph showing an estimated velocity of a true signal from a sensor
using Kalman filtering.
[0037] FIG. 8 is a graph showing an estimated true signal from a sensor using an SKF
model according to the present disclosure.
[0038] FIG. 9 is a graph showing an estimated velocity of a true signal from a sensor
using an SKF model according to the present disclosure.
Description
[0039] The present invention may be embodied in a system for filtering sensor values,
which may be included in a machine monitoring system or may be a stand-alone system.
FIG. 1 illustrates a machine monitoring system 100 according to an exemplary embodiment of
the present invention. As shown in FIG. 1, the system 100 includes a personal or other
computer 110. The computer 110 may be connected to a sensor 171 over a wired or wireless
network 105. The system preferably includes additional sensors (not shown) that are
similarly connected.
[0040] The sensor 171 is arranged to acquire data representing a characteristic of the machine or system 180 or its environment. The sensor measures a characteristic such as temperature, pressure, humidity, rotational or linear speed, vibration, force, strain, power, voltage, current, resistance, flow rate, proximity, chemical concentration or any other characteristic. As noted above, the sensor 171 measures an observed value y that includes noise. The true signal z must be estimated.
[0041] The sensor 171 may be connected with the computer 110 directly through the
network 105, or the signal from the sensor may be conditioned by a signal conditioner 160
before being transmitted to the computer. Signals from sensors monitoring many different


machines and their environments may be connected through the network 105 to the computer
110.
[0042] The computer 110, which may be a portable or laptop computer or a
mainframe or other computer configuration, includes a central processing unit (CPU) 125 and
a memory 130 connected to an input device 150 and an output device 155. The CPU 125
includes a signal filtering and prediction module 145 that includes one or more methods for
filtering signals and predicting signals as discussed herein. Although shown inside the CPU
125, the module 145 can be located outside the CPU 125, such as within the signal
conditioner 160. The CPU may also contain a machine monitoring module 146 that acquires
signals for use by the signal filtering and prediction module. The machine monitoring module
146 may also be used in acquiring training data from the sensor 171 for use in configuring the
signal filtering and prediction module.
[0043] The memory 130 includes a random access memory (RAM) 135 and a read-
only memory (ROM) 140. The memory 130 can also include a database, disk drive, tape
drive, etc., or a combination thereof. The RAM 135 functions as a data memory that stores
data used during execution of a program in the CPU 125 and is used as a work area. The
ROM 140 functions as a program memory for storing a program executed in the CPU 125.
The program may reside on the ROM 140 or on any other computer-usable medium as
computer readable instructions stored thereon for execution by the CPU 125 or other
processor to perform the methods of the invention. The ROM 140 may also contain data for
use by the programs, such as training data that is acquired from the sensor 171 or created
artificially.
[0044] The input 150 may be a keyboard, mouse, network interface, etc., and the
output 155 may be a liquid crystal display (LCD), cathode ray tube (CRT) display, printer,
etc.

[0045] The computer 110 can be configured to operate and display information by
using, e.g., the input 150 and output 155 devices to execute certain tasks. Program inputs,
such as training data, etc., may be input through the input 150, may be stored in memory 130,
or may be received as live measurements from the sensor 171.
[0046] The presently disclosed method for filtering and predicting machine
monitoring sensor signals uses different filtering models for different modes of a machine.
For example, a machine may have two modes: a steady mode and a non-steady mode. In
accordance with the present disclosure, a separate model is applied for each mode. During the
steady mode, the sensor signals are stable. In that case, an evolution covariance with small
values for each of q1, q2 and q3 is used.
[0047] On the other hand, during the non-steady mode, the sensor signals are more erratic. In that case, a different evolution covariance is designed as follows. First, the variance q1 of the true signal zt is set to a very large value such that zt is allowed to change dramatically. Second, the variances for the higher order derivatives, q2 and q3, are kept at the same small values as those of the steady mode. The reason for this is that those high order derivatives can only exist for a continuous signal with smooth variations. Those derivatives should not change much with any significant and sudden changes of signals.
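In NumPy terms, the two covariances can be written down directly (a sketch; the small and large magnitudes shown here simply mirror the example values used in the test section below):

    import numpy as np

    q_small = 1e-5        # small variance used throughout the steady mode
    q_large = 100.0       # large variance for zt in the non-steady mode

    Q1 = np.diag([q_small, q_small, q_small])  # steady mode: everything small
    Q2 = np.diag([q_large, q_small, q_small])  # non-steady mode: only zt may jump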
[0048] Two filters have been introduced for the two modes of a machine. Those filters differ only by the evolution covariance matrix. A new mode variable st is introduced to indicate the mode. If st = 1, the machine is in the steady mode; if st = 2, the machine is in the non-steady mode. The corresponding evolution covariance matrices are denoted by Q1 for mode 1 (steady) and by Q2 for mode 2 (non-steady).
[0049] Since at any time a machine can either stay in one mode or change to a different mode, a switching model is used in the present disclosure to perform filtering. In particular, the switching Kalman filtering methods are applied. A switching Kalman filtering method is described in K. P. Murphy, "Switching Kalman Filters," Compaq Cambridge Research Lab Tech. Report 98-10, 1998, which is hereby incorporated by reference in its entirety. The switching Kalman filtering method is widely used in signal processing. The network 300 of FIG. 3 shows a graphical model representation of the switching Kalman filter (SKF). Arrows indicate dependencies between variables. If st-1 and st are the same, the model becomes a single Kalman filter. The prior probability of st = st-1 is set to be 0.9 and that of st ≠ st-1 to be 0.1. That is based on the common-sense notion that a machine tends to stay in the same mode and has few mode changes. It is also assumed that at the beginning, when t = 1, the machine is in the steady mode.
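Those prior probabilities can be encoded as a 2 x 2 mode transition table. The following sketch is one way to write them down (the 0.9/0.1 values and the steady-mode start are from the disclosure; the matrix form itself is just a convenience):

    import numpy as np

    # trans[i, j] = P(st = j+1 | st-1 = i+1); mode 1 = steady, mode 2 = non-steady
    trans = np.array([[0.9, 0.1],
                      [0.1, 0.9]])

    # At t = 1 the machine is assumed to start in the steady mode (st = 1)
    initial_mode_prob = np.array([1.0, 0.0])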
[0050] The flow chart of FIG. 4 illustrates a method 400 for performing filtering using the SKF model. At each time stamp t, the previous estimates of the state xt-1 and the mode st-1 are kept as a Gaussian mixture model, since there are two modes and under each mode xt-1 has a Gaussian distribution. A new observation yt is received at 410. For the new observation yt, a new estimate for the mode st is computed at 420, which is represented by a posterior probability of st given y1:t. Then a new estimate is made at 430 for xt, which is represented by a posterior probability of xt given y1:t. The new estimates of xt and st will replace the previous xt-1 and st-1. That procedure is repeated at 440 as time goes on.
[0051] Since the state xt is represented by a Gaussian mixture, the mean of this mixture model is computed as the final point estimate of xt. That point estimate will be used for prediction.
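The steps 410-440 can be sketched as a single function. The moment-matching ("collapse") used here follows the generalized pseudo-Bayesian style of switching Kalman filtering described by Murphy, and is an illustrative implementation rather than the exact code of the disclosure:

    import numpy as np

    def skf_step(means, covs, probs, y, A, Qs, c, r, trans):
        """One SKF iteration. means[i], covs[i] describe xt-1 under previous mode i,
        probs[i] = P(st-1 = i | y1:t-1), Qs[j] is the evolution covariance for
        current mode j, and trans[i, j] = P(st = j | st-1 = i)."""
        n = len(Qs)
        m_ij = [[None] * n for _ in range(n)]
        P_ij = [[None] * n for _ in range(n)]
        w = np.zeros((n, n))
        # One Kalman update for every (previous mode i, current mode j) pair
        for i in range(n):
            for j in range(n):
                m_pred = A @ means[i]
                P_pred = A @ covs[i] @ A.T + Qs[j]
                s = c @ P_pred @ c + r
                k = P_pred @ c / s
                m_ij[i][j] = m_pred + k * (y - c @ m_pred)
                P_ij[i][j] = P_pred - np.outer(k, c) @ P_pred
                lik = np.exp(-0.5 * (y - c @ m_pred) ** 2 / s) / np.sqrt(2 * np.pi * s)
                w[i, j] = probs[i] * trans[i, j] * lik
        w /= w.sum()
        new_probs = w.sum(axis=0)                  # P(st = j | y1:t), step 420
        new_means, new_covs = [], []
        # Collapse the mixture over i into one Gaussian per current mode j, step 430
        for j in range(n):
            wj = w[:, j] / new_probs[j]
            m = sum(wj[i] * m_ij[i][j] for i in range(n))
            P = sum(wj[i] * (P_ij[i][j] + np.outer(m_ij[i][j] - m, m_ij[i][j] - m))
                    for i in range(n))
            new_means.append(m)
            new_covs.append(P)
        # Point estimate of xt: mean of the two-component mixture, used for prediction
        x_hat = sum(new_probs[j] * new_means[j] for j in range(n))
        return new_means, new_covs, new_probs, x_hat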
[0052] Test results
[0053] The methods of the present disclosure are demonstrated using the following
example. An observed signal 500 from a sensor is represented on the graph of FIG. 5 as a
function of time. The signal is generally flat around zero between t = 1 and t = 200. The


signal then trends upward with a slope = 0.02 after t = 200. Superimposed on that basic signal, the signal undergoes a step up 511 at t = 50 and a step back down 512 to normal at t = 100. In addition, the signal has large variations 521, 522, 523 at t = 150, 230 and 300, respectively. In an actual machine monitoring system, those sudden changes are typically due to a non-steady working mode of a machine.
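A signal with that shape can be generated for experimentation as follows (a sketch; the step height, spike height and spike width are not specified in the disclosure and are chosen arbitrarily here, while the slope, change points and noise variance follow the text):

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(1, 351)

    true = np.zeros(t.size)
    true[t > 200] += 0.02 * (t[t > 200] - 200)     # upward trend, slope 0.02
    true[(t >= 50) & (t < 100)] += 1.0             # step up at t = 50, back down at t = 100
    for spike in (150, 230, 300):                  # large variations
        true[np.abs(t - spike) < 3] += 2.0

    observed = true + np.sqrt(0.1) * rng.standard_normal(t.size)  # observation noise r = 0.1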
[0054] In this test, the performance of the proposed switching Kalman filtering is compared with that of single Kalman filtering. For the test, acceleration is ignored, so that xt = (zt, żt)T. The evolution matrix A and observation matrix C may be obtained accordingly by removing the row and column corresponding to acceleration.
[0055] For the single Kalman filtering, only the steady mode is considered, with evolution covariance matrix Q = diag([0.00001 0.00001]). For the switching Kalman filtering model, the steady mode covariance Q1 is set to Q. For the non-steady mode, however, a new Q2 = diag([100 0.00001]) is used. Note that in the non-steady mode the variance q1 for the true signal is much larger than the variance q2 for the derivative. Specifically, in the example, the variance q1 for the true signal is set to 100, while the variance q2 for velocity is 0.00001, which is the same as the value for the steady mode. In the example, q1 is 10^7 times larger than q2. In another example, q1 is 10^4 times larger; in yet another example, q1 may be at least 10 times larger. In all cases, the variance q1 in the non-steady mode is larger than the variance q2 in the non-steady mode, and is larger than both variances q1, q2 in the steady mode. The observation noise r in the example is always set to 0.1.
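The two-dimensional test configuration described above can be written down directly (a sketch of the stated parameter values):

    import numpy as np

    dt = 1.0
    A = np.array([[1.0, dt],
                  [0.0, 1.0]])          # acceleration removed from the state
    c = np.array([1.0, 0.0])            # only zt is observed
    r = 0.1                             # observation noise variance

    Q  = np.diag([1e-5, 1e-5])          # single Kalman filter (steady mode only)
    Q1 = Q                              # SKF steady-mode covariance
    Q2 = np.diag([100.0, 1e-5])         # SKF non-steady mode: q1 >> q2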
[0056] The estimated true signal zt, as determined using the single Kalman filtering, is shown as line 610 in FIG. 6, superimposed over the observed signal 620. The estimated velocity żt, as determined using the Kalman filtering, is shown as line 710 in FIG. 7. The results at the beginning and at the end of the time frame are accurate. The estimates are adversely affected in the middle, however, by the non-steady behavior of the signal. For example, at t = 50, the Kalman filter tries to adapt to the sudden jump with changes to both the true signal and the velocity. That leads to poor results for both estimates. As a result, the velocity estimate is as high as 0.23 (FIG. 7). If one uses this falsely high velocity to predict future behaviors, false alarms are very likely. Similar undesired results also happen at the periods with high signal variations.
[0057] FIGS. 8 and 9 show the corresponding results using the switching Kalman
filter (SKF) model of the present disclosure. The estimate 810 for the true signal (FIG. 8) fits
the observed data much better than that using the Kalman filtering (FIG. 6). The velocity
estimate 910 of FIG. 9 also reflects the ground truth nicely. For example, between t = 1 and t
= 200, the estimated velocity is close to zero (the ground truth) with some fluctuation due to
noise. After t = 200, the velocity change is detected and the estimate quickly adapts to the
change; after t = 230, the estimated velocity fluctuates around 0.02 (the ground truth). The
SKF model handles those non-steady periods very well and the estimates are not appreciably
affected.
[0058] Conclusion
[0059] The foregoing Detailed Description is to be understood as being in every
respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed
herein is not to be determined from the Description of the Invention, but rather from the
Claims as interpreted according to the full breadth permitted by the patent laws. It is to be
understood that the embodiments shown and described herein are only illustrative of the
principles of the present invention and that various modifications may be implemented by
those skilled in the art without departing from the scope and spirit of the invention.

What is claimed is:
1. A method for filtering a signal from a sensor in a machine monitoring
system using a switching Kalman filter, the signal having at least a steady mode wherein a
mode variable st has a first value and a non-steady mode wherein st has a second value, the
method comprising:
receiving at a machine monitoring computer a new observation yt of the signal;
computing an estimate of a current mode st of the signal based on the new observation
yt, a previous mode st-1 and a previous state xt-1 of the signal, the previous state xt-1 comprising values for at least a previous true signal zt-1 and a first derivative żt-1 of the previous true signal;
computing an estimate of a current state xt of the signal based on the estimate of the
current mode st, the previous mode st-1 and the previous state xt-1;
setting the previous mode st-1 equal to the estimate of the current mode st, and setting
the previous state xt-1 equal to the estimate of the current state xt; and
repeating the above steps.
2. The method of claim 1, wherein the step of computing an estimate of a current state xt of the signal comprises selecting a Kalman filter to compute the estimate of the current state xt, the Kalman filter being selected from at least a first Kalman filter for use when the current mode st is the steady mode, and a second Kalman filter for use when the current mode st is the non-steady mode.
3. The method of claim 2, wherein the first and second Kalman filters include evolution error vectors vt having diagonal covariances containing variances for at least a signal zt and a first derivative żt of that signal, and wherein the variance for zt in the second Kalman filter is at least ten times the variance for zt in the first Kalman filter.


4. The method of claim 2, wherein the first and second Kalman filters include evolution error vectors vt having diagonal covariances containing variances for at least a signal zt and a first derivative żt of that signal, and wherein the variance for zt in the second Kalman filter is greater than the variance for żt in the second Kalman filter and greater than the variances for zt and żt in the first Kalman filter.
5. The method of claim 2, wherein the first and second Kalman filters differ
only by an evolution covariance matrix.
6. The method of claim 2, wherein the first and second Kalman filters include evolution error vectors vt having diagonal covariances containing variances for at least a signal zt and a first derivative żt of that signal.
7. The method of claim 6, wherein the variances for at least a signal zt and a first derivative żt of that signal are learned from training data.
8. The method of claim 2, wherein at least one of the first and second Kalman
filters includes an observation noise variance learned from training data.
9. The method of claim 1, wherein the previous state xt-1 of the signal further comprises a value for a second derivative z̈t-1 of the previous true signal.
10. The method of claim 1, wherein the previous mode st-1 and the previous
state xt-1 of the signal are maintained as a Gaussian mixture model.
11. The method of claim 1, wherein the step of computing an estimate of a current mode st includes implementing a probability of 0.9 that the current mode st is equal to the previous mode st-1 and a probability of 0.1 that the current mode st is not equal to the previous mode st-1.
12. The method of claim 1, wherein, in an initial execution of the steps, the mode
is the steady mode wherein the mode variable st = 1.
13. The method of claim 1, further comprising the steps of:
predicting a future true signal based on the estimate of a current state xt of the signal; and
producing an alarm if the predicted future signal is outside a set of process limits.


Documents

Application Documents

# Name Date
1 abstract-2106-kolnp-2011.jpg 2011-10-07
2 2106-kolnp-2011-specification.pdf 2011-10-07
3 2106-kolnp-2011-pct request form.pdf 2011-10-07
4 2106-kolnp-2011-pct priority document notification.pdf 2011-10-07
5 2106-kolnp-2011-international search report.pdf 2011-10-07
6 2106-kolnp-2011-international publication.pdf 2011-10-07
7 2106-kolnp-2011-international preliminary examination report.pdf 2011-10-07
8 2106-kolnp-2011-gpa.pdf 2011-10-07
9 2106-kolnp-2011-form-5.pdf 2011-10-07
10 2106-kolnp-2011-form-2.pdf 2011-10-07
11 2106-KOLNP-2011-FORM-18.pdf 2011-10-07
12 2106-kolnp-2011-form-13.pdf 2011-10-07
13 2106-kolnp-2011-form-1.pdf 2011-10-07
14 2106-KOLNP-2011-FORM 3-1.1.pdf 2011-10-07
15 2106-kolnp-2011-drawings.pdf 2011-10-07
16 2106-kolnp-2011-description (complete).pdf 2011-10-07
17 2106-kolnp-2011-correspondence.pdf 2011-10-07
18 2106-KOLNP-2011-CORRESPONDENCE-1.1.pdf 2011-10-07
19 2106-kolnp-2011-claims.pdf 2011-10-07
20 2106-kolnp-2011-abstract.pdf 2011-10-07
21 2106-KOLNP-2011-CORRESPONDENCE-1.2.pdf 2011-10-18
22 2106-KOLNP-2011-ASSIGNMENT.pdf 2011-10-18
23 2106-KOLNP-2011-(28-10-2013)-CORRESPONDENCE.pdf 2013-10-28
24 2106-KOLNP-2011-(28-10-2013)-ANNEXURE TO FORM 3.pdf 2013-10-28
25 2106-KOLNP-2011-FER.pdf 2017-02-16
26 2106-KOLNP-2011-AbandonedLetter.pdf 2017-10-08

Search Strategy

1 search_15-02-2017.pdf