Abstract: The present invention relates to speaker recognition. The proposed system (100) comprises means (110) for receiving an input utterance (12) from a speaker, means (120) for extracting a plurality of features (130) from the input utterance (12), means (140) for forming a feature vector (150) by combining at least two different extracted features, means (160) for forming a set of speaker specific multiple templates (170) under different conditions and means for matching a further input utterance (230) of a speaker against the set of speaker specific multiple templates (170).
Description
Speaker recognition system and method
The present invention relates to a speaker recognition system
and a method thereof.
Speaker recognition or voice recognition is the task of
recognizing people from their voices. Such systems extract
features from speech, model them and use them to recognize
the person from his/her voice. Speaker recognition usually
uses the acoustic features of speech that have been found to
differ between individuals. These acoustic patterns reflect
learned behavioral patterns of the speaker, e.g. voice pitch,
speaking style, etc.
Speaker recognition methods usually have two phases: training
and testing. During training a number of speakers are asked
to provide input utterances in different conditions of
background noise over a period of time. Acoustic features are
extracted from the voice utterances and a model is formed
which serves as the basis for speaker recognition. During the
test phase a speaker utterance is matched against the model
formed during training to recognize the speaker.
It is an object of the present invention to provide improved
speaker recognition, which is robust against variations over
time.
The above object is achieved by a method for speaker
recognition, the method comprising the steps of:
- a) receiving an input utterance from a speaker;
- b) extracting a plurality of features from the input
utterance;
- c) forming a feature vector by combining at least two
different extracted features;
- d) forming a set of speaker specific multiple templates by
repeating steps a, b and c under different conditions;
- e) repeating the steps a, b, c and d for a plurality of
speakers; and
- f) matching a further input utterance of a speaker against
the set of speaker specific multiple templates.
The above object is achieved by a system for speaker
recognition, the system comprising:
- means for receiving an input utterance from a speaker;
- means for extracting a plurality of features from the input
utterance;
- means for forming a feature vector by combining at least
two different extracted features;
- means for forming a set of speaker specific multiple
templates under different conditions; and
- means for matching a further input utterance of a speaker
against the set of speaker specific multiple templates.
The underlying idea of the present invention is to combine
two different features of the input utterance signal into a
single feature vector, such that the two features, each of
which addresses a separate problem on its own, together
provide a solution to the whole set of problems.
In a preferred embodiment of the present invention the method
further comprises receiving a text corresponding to the input
utterance along with the input utterance from the speaker.
The system also has access to the password text corresponding
to the input speech signal and will retrieve the set of
multiple templates for the particular text.
In a further preferred embodiment of the present invention
the text corresponding to the input utterance is a sequence
of words selected from a predefined vocabulary of words. The
predefined vocabulary is a set of words which were used while
making the speaker specific multiple templates. The set of
words is used to make variable-text passwords and allows the
flexibility to change the password by selecting a different
set of words. The password can therefore be dynamically formed
by concatenating a sequence of words drawn from a pre-specified
set of words (the 'vocabulary' of the system). For example,
the digits 9-3-1 (nine-three-one) can be used as an input
utterance drawn from a predefined set of the digits 0 to 9.
In a further preferred embodiment of the present invention
the plurality of features include spectral parameters and
pitch parameters, and the feature vector is formed by
combining at least one spectral parameter and at least one
pitch parameter. Spectral and pitch parameters are different
for different speakers and they broadly provide all the
necessary information which forms the basis of speaker
recognition techniques. The pitch parameter is not impacted
by background noise or channel variation and, when combined
with a spectral parameter, provides the system with robustness
against this set of problems.
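By way of illustration, a minimal sketch of such a combined feature extractor is given below. It assumes the librosa library for MFCC and YIN-based pitch extraction, and illustrative settings (16 kHz sampling, 512-sample hop, 13 MFCCs, 60-400 Hz pitch range) that are not specified in the embodiment and are assumptions of the example only.

```python
import numpy as np
import librosa  # assumed here for MFCC and pitch (YIN) extraction


def mfcc_pitch_features(path, sr=16000, n_mfcc=13):
    """Sketch: per-frame feature vectors combining MFCCs (spectral
    parameters) with one pitch value per frame (pitch parameter)."""
    y, sr = librosa.load(path, sr=sr)
    # Spectral parameters: one MFCC vector per 512-sample hop.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=512)
    # Pitch parameter: fundamental frequency per frame, same hop length.
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr,
                     frame_length=2048, hop_length=512)
    n = min(mfcc.shape[1], len(f0))
    # Combine at least one spectral and one pitch parameter per frame.
    return np.vstack([mfcc[:, :n], f0[None, :n]]).T  # (frames, n_mfcc + 1)
```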
In a further preferred embodiment of the present invention, a
time warp method is used for matching the further input
utterance against the speaker specific multiple templates.
This allows two signal segments of different time duration to
be compared, with one signal being suitably time-warped to aid
the comparison process.
In a further preferred embodiment of the present invention,
the step of matching the further input utterance is done
using a one-pass dynamic programming algorithm, the result
being a match score between the further input utterance and
the set of speaker specific multiple templates for each
speaker. For speaker-identification, one such score is
computed for each speaker and the speaker with the lowest
score is declared the input speaker. For speaker-
verification, this score corresponds to the match between the
input utterance and the speaker specific multiple template
sets. The score is normalized and the normalized score is
compared to a threshold. The input speaker claim is accepted
if the normalized score is less than the threshold and
rejected otherwise.
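A minimal sketch of these two decisions is shown below. The description only states that the verification score is normalised and compared to a threshold; the cohort-style normalisation used here (subtracting the mean and dividing by the standard deviation of other speakers' scores) is one common choice and is an assumption of the example.

```python
import numpy as np


def identify(scores):
    """Closed-set identification: the speaker with the lowest match score
    (accumulated DTW/one-pass-DP distance) is declared the input speaker."""
    return min(scores, key=scores.get)


def verify(claimed_score, cohort_scores, threshold):
    """Verification sketch: normalise the claimed speaker's score against a
    cohort of other speakers' scores; accept if it is below the threshold."""
    normalised = (claimed_score - np.mean(cohort_scores)) / (np.std(cohort_scores) + 1e-9)
    return normalised < threshold


# e.g. scores = {"alice": 41.2, "bob": 55.7}; identify(scores) -> "alice"
```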
The present invention is further described hereafter with
reference to preferred embodiments shown in the accompanying
drawings, in which:
FIG 1 is a schematic overview of a conventional single
template DTW based speaker recognition system,
FIG 2 shows the DTW matching for a conventional single
template DTW based speaker recognition system,
FIG 3 is a schematic diagram of a multiple template DTW based
text-dependent speaker recognition system,
FIG 4 shows the DTW matching of an input utterance with a
speaker specific multiple template set,
FIG 5 shows DP recursion that operates in the one-pass DP
algorithm for multiple templates,
FIG 6 shows the matching by one-pass DP algorithm when inter-
word silences are present,
FIG 7 shows the general recursions operating in the one-pass
DP algorithm when interword silences are present,
FIG 8 shows how MFCC feature vector sequences look different
for different speakers,
FIG 9 shows how Pitch Contours look different for different
speakers,
FIG 10 is a schematic illustration of the proposed system for
speaker recognition, according to a particular embodiment of
the present invention, and
FIG 11 is a flowchart illustrating a method for speaker
recognition, according to a particular embodiment of the
present invention.
There is a difference between speaker recognition
(recognizing who is speaking) and speech recognition
(recognizing what is being said). These two terms are
frequently confused, and 'voice recognition' is often used as
a synonym for speech recognition.
Speaker recognition uses the acoustic features of speech that
have been found to differ between individuals. These acoustic
patterns reflect learned behavioral patterns of the speaker,
e.g. voice pitch, speaking style, etc. The incorporation
of learned patterns into the voice templates has earned
speaker recognition its classification as a behavioral
biometric.
Speaker recognition systems employ different styles of spoken
input: text-dependent and text-independent input. This
relates to the spoken text used during training versus test.
If the text must be the same for training and test this is
called text-dependent recognition. In a text-dependent
speaker recognition system, during the training, the system
is trained with speech signal segments uttered by various
speakers (corresponding to the individual words of the
'vocabulary' of the system from which 'passwords' are
formed). During actual usage of the system, given the uttered
password and the corresponding text information, the system
attempts to identify the speaker in a 'text-dependent'
manner.
The password or 'text' can be a 'fixed-text' in which the
password is unique and fixed, one per user, or it can be
'variable-text' in which the password can be dynamically
formed by concatenating from a set of words e.g. 9-3-1, from
the digits from 0 to 9. For a fixed-text system, if the
password needs to be changed then the system will have to be
re-trained with the new password. For a variable-text system,
it is easy to change the password, as the new password can be
dynamically formed from a set of words.
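The sketch below illustrates how such a variable-text reference can be formed by concatenating stored word templates. The dictionary layout and the single-template-per-word simplification are assumptions of the example; the multiple-template case keeps a list of templates per word instead.

```python
import numpy as np


def password_reference(password_words, word_templates):
    """Sketch: build a reference feature sequence for a dynamically chosen
    password by concatenating the stored template of each vocabulary word."""
    return np.concatenate([word_templates[w] for w in password_words], axis=0)


# e.g. for the password "9-3-1", with one template stored per digit:
# reference = password_reference(["9", "3", "1"], word_templates)
```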
In the case of text-independent systems the text during
training and test is different. In fact, the training may
happen without the user's knowledge. Some recorded piece of
speech may suffice. Since text-independent systems have no
knowledge of the text being spoken only general speaker-
specific properties of the speaker's voice can be used. This
does limit the accuracy of the recognition. On the other
hand, this approach is also completely language independent.
In speaker-recognition systems, the use of a 'variable-text'
mode of operation provides the flexibility of defining
arbitrary passwords from a given vocabulary of words (such as
the ten digits). Now, passwords can be dynamically defined
and changed by the user at any time or by the system as in a
prompted mode of operation. This is in contrast to the
'fixed-text' mode of operation, which requires that the test
password (a phrase or a sentence) be recorded for training
every time it is defined anew or needs to be changed. Thus a
'variable-text' mode of operation provides a more secure and
more convenient system.
Variable-text systems typically have a front-end feature
extractor which extracts some features from the given speech
signal segments at the input. A back-end classifier then uses
these extracted features to either train the system (during
training phase) or to enable the system to detect the speaker
(during actual use). For such back-end classification,
conventional text-dependent speaker recognition systems are
using either the Hidden Markov Model (HMM) based methods or
the Dynamic Time Warping (DTW) method.
DTW is a method in which two segments of signals, which are
of different time duration, can be compared together with the
signals being suitably time-warped and aligned against each
other to aid the comparison process. This way two signal
segments can be compared even though they are not time-
aligned or they have a different length.
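A minimal sketch of this classic DTW computation between two feature sequences is given below; the Euclidean local distance is an assumption of the example.

```python
import numpy as np


def dtw_distance(X, Y):
    """Accumulated DTW distance between feature sequences X (T1, d) and
    Y (T2, d): the sequences are non-linearly aligned so that the summed
    frame-to-frame distance along the warping path is minimised."""
    T1, T2 = len(X), len(Y)
    D = np.full((T1 + 1, T2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])  # local distance
            D[i, j] = cost + min(D[i - 1, j],       # stretch X
                                 D[i, j - 1],       # stretch Y
                                 D[i - 1, j - 1])   # diagonal step
    return D[T1, T2]
```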
Conventional speaker recognition techniques suffer from
certain problems, namely channel and background noise
variation during training and testing and intra-speaker
variability over a period of time. Channel variation occurs
when the channel through which the speech is communicated and
recorded is different during training and testing (actual
use). Various background noise conditions (both noise types
and noise levels) can drastically reduce the performance of
any device applying such techniques. Intra-speaker
variability refers to the variations one sees in the speech
of the same person recorded/tested over a long period of
time. Moreover, there may also be distinct pronunciation
variability for the same speaker across various sessions. The
speaker's voice may be influenced by physiological changes
such as a cold infection. Different moods of the speaker may also
influence the voice. Typically, performance degrades in
conventional methods when there is a large time gap between
training and testing.
The present invention provides an improved system and method
for text-dependent speaker recognition, which is robust
against variations over time, in particular channel
variation, background noise and intra-speaker variations over
time.
Referring to FIG 1, an overview of a conventional DTW based
speaker recognition system 10 is illustrated wherein an input
utterance 12 and a text corresponding to the input utterance
14 are supplied to the system 10 and the system extracts
acoustic features from the input utterance and matches them
with the acoustic features of the stored speaker specific
template sets to recognize the speaker. The system 10 has to
be trained before it is used for speaker recognition.
The speaker specific template 16 shown in FIG 1 comprises
templates Ri 18 corresponding to words Wi 20 in the pre-
specified vocabulary, i. e. only one template Ri is used for
each word Wi. This is an example of a variable-text system,
where the reference template 22 is formed by concatenation of
individual word templates. The input utterance acoustic
features are compared with the acoustic features of the
reference template 22 and a score 24 is given based on the
matching.
FIG 2 shows the DTW matching for an example case where the
input utterance and the text are "9-1-5". The X-axis 26
represents the acoustic features of the input utterance 12
and the Y-axis 28 represents the acoustic features of
reference template 22 formed by concatenating individual word
templates of "9", "1" and "5". The one-pass DP process
optimally warps the reference template acoustic feature
sequence Y to align it with the input utterance acoustic
feature sequence X. In this process, the warping function 30
f (x,y) is generated which relates X to Y such that the
accumulated distance between them over the warping path is
minimized. This distance is the score 24 for that particular
match. The resulting match score 24 is the optimal distance
between the input utterance and the word-templates. For
speaker-verification, this score corresponds to the match
between the input utterance and the "claimed speaker's"
models. After comparison, each speaker is provided a score 24
and the lowest score match is declared as the speaker of the
input utterance 12 in closed-set speaker-identification.
Referring to FIG 3, an overview of a variable-text speaker-
recognition system based on one pass DP with multiple
templates 32 is illustrated. Here the speaker specific
template set is a multiple template set 32, which consists of
several templates for the same word, i.e. each speaker has a
set of templates, say L templates Ri,j 34 where j = 1, 2, ..., L,
for the word Wi 20. These templates are formed under
different conditions of background noise, channel variation
and are recorded for an interval of time to take care of
intra-speaker variability. The system uses a judicious
combination of multiple templates and one-pass DP to solve
the background noise and intra-speaker variability problems.
The input utterance 12 is converted into a sequence of
feature vectors by extracting acoustic features, mainly
spectral parameters such as the mel-frequency-cepstral
coefficients (MFCCs). In this case the speaker specific
template sets 32 are formed of feature vectors formed from
MFCC. During the speaker recognition the feature vector of
input utterance 12 is compared with the feature vectors of
speaker specific multiple templates for all the speakers and
a score 24 is given to each speaker depending on the distance
between the feature vector of input utterance and feature
vector of speaker specific templates.
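As a simplified illustration of this scoring step, the sketch below matches the input feature sequence against each enrolled speaker. A plain DTW against a concatenated single-template reference (using the dtw_distance sketch above) stands in here for the one-pass DP over multiple templates described in the following figures, and the data layout is an assumption of the example.

```python
import numpy as np


def score_speakers(X, speakers, password_words):
    """Sketch: give each speaker a score 24 for the input sequence X.
    'speakers' maps a speaker name to a dict of word -> list of templates;
    only the first template per word is used in this simplified version."""
    scores = {}
    for name, templates in speakers.items():
        reference = np.concatenate([templates[w][0] for w in password_words], axis=0)
        scores[name] = dtw_distance(X, reference)
    return scores
```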
FIG 4 shows DTW matching similar to that of FIG 2, but with a
multiple template set 32. The X-axis 36 represents the
acoustic features of the input utterance 12 and the Y-axis 38
represents the acoustic features of the reference template 22
formed by concatenating the word templates of "9", "1" and
"5".
The one-pass DP process optimally warps the reference
template acoustic feature sequence Y and the input utterance
acoustic feature sequence X to align them in a non-linear
fashion. In this process, the warping function 30 f (x,y) is
generated which relates X to Y such that the accumulated
distance between them over the warping path is minimized.
Four multiple templates of the word "1" R11, R12, R13 and R14
are shown on the y-axis (the multiple templates of the words
"9" and "5" are not shown for sake of clarity). The best
warping path 30 obtained by the one-pass DP algorithm can be
noted to have preferred the second template of word "1" (R12)
as the best matching template for the word "1" as part of the
input utterance of "9-1-5".
FIG 5 shows the DP recursion that operates in the one-pass DP
algorithm for multiple templates specifically for entry into
one of the multiple word templates from a preceding inter-
word silence template or preceding word in the password text.
The recursions for multiple templates can be of two types: i)
Within-word recursion 36 and ii) Across-word recursion 38 as
shown in FIG 5 in the context of the word-sequence "9-1-5",
with the across-word recursion being illustrated for
transition from any of the 4 templates of word "1" (and the
inter-word silence- template) to one template of the word "5".
The general equations for these two types of recursions are:
Within-word recursion:
D(m,n,v) = d(m,n,v) + min_(n-2 <= j <= n) D(m-1,j,v)
Across-word recursion:
D(m,1,v) = d(m,1,v) + min { D(m-1,1,v), min_(u ∈ Pred'(v)) D(m-1,Nu,u) }
Here, D(m,n,v) is the minimum accumulated distortion of any
path reaching the grid point defined by frame 'n' of word-
template 'v' 40 and frame 'm' 42 of the input utterance, and
d(m,n,v) is the local distance between the m-th frame of the
input utterance and the n-th frame of the word-v template. The
within-word recursion 36 applies to all frames of the word-v
40 template which are not the starting frame (i.e., n > 1).
The across-word recursion 38 applies to frame 1 of any word v
to account for a potential 'entry' into the word-v template
from the last frame Nu of any of the other words {u} 44 which
are valid predecessors of word v; i.e., Pred'(v) = {silence
template Rsil, Pred(v)}, the valid predecessors of any word v,
consisting of a silence template Rsil 48 and the multiple
templates Pred(v) of the word preceding the word v 40 in the
'password' text. For instance, if the 'password' text is 915
and v=5, then Pred'(v=5) = {Rsil, R11, R12, R13, R14} and
Pred'(v=1) = {Rsil, R91, R92, R93, R94}. This across-word
recursion 38 takes care of entry into any template of any word
from a preceding silence template or from any template of any
preceding word in the password text.
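The sketch below implements these two recursions for a password made of words with multiple templates each. Inter-word silence templates and score normalisation are omitted for brevity, and the Euclidean local distance and the data layout are assumptions of the example.

```python
import numpy as np


def one_pass_dp(X, word_templates):
    """One-pass DP sketch: X is the (T, d) input feature sequence and
    word_templates is a list (one entry per password word, in order) of
    lists of (N, d) template arrays (the multiple templates per word).
    Returns the minimum accumulated distance of a path that traverses one
    template of each word, in order (silence templates omitted)."""
    INF = float("inf")
    # Accumulated distances D(m-1, n, v): one array per (word, template).
    D = [[np.full(len(t), INF) for t in tpls] for tpls in word_templates]

    for m in range(len(X)):
        new_D = [[np.full(len(t), INF) for t in tpls] for tpls in word_templates]
        for w, tpls in enumerate(word_templates):
            # Across-word entry: best score at the last frame of any template
            # of the preceding word (the path must start in the first word).
            if w == 0:
                entry = 0.0 if m == 0 else INF
            else:
                entry = min(D[w - 1][l][-1] for l in range(len(word_templates[w - 1])))
            for l, tpl in enumerate(tpls):
                for n in range(len(tpl)):
                    d = float(np.linalg.norm(X[m] - tpl[n]))  # local distance
                    if n == 0:
                        # Across-word recursion:
                        # D(m,1,v) = d + min{ D(m-1,1,v), min_u D(m-1,Nu,u) }
                        best = min(D[w][l][0], entry)
                    else:
                        # Within-word recursion:
                        # D(m,n,v) = d + min_(n-2 <= j <= n) D(m-1,j,v)
                        best = min(D[w][l][max(0, n - 2):n + 1])
                    if best < INF:
                        new_D[w][l][n] = d + best
        D = new_D

    # Score: best accumulated distance at the last frame of any template of
    # the last password word (the trailing silence template is omitted here).
    return min(D[-1][l][-1] for l in range(len(word_templates[-1])))
```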
FIG 6 shows a typical matching by one-pass DP algorithm when
inter-word silences 46 are present. The input utterance 12 is
"9-1-5" as in FIG 4, but spoken with silence before "9",
between "1" and "5" and after "5". There is no inter-word
silence 46 between "9" and "1" which represents an inter-word
co-articulation. The one-pass DP algorithm uses concatenated
'multiple' templates 22 of each word in the password '915' as
in FIG 4, but with a silence template 48 between adjacent
words (for the sake of clarity and also to emphasize the
handling of inter-word silence, only one template per word is
shown in FIG 6). The X-axis 36 represents the acoustic
features of the input utterance 12 and the Y-axis 38
represents the acoustic features of the reference template 22
formed by concatenating the word templates of "9", "1" and "5".
The warping function f(x,y) 30 is generated and drawn as in
FIG 4. FIG 6 shows how the adoption of the one-pass DP
algorithm now correctly decodes the input utterance 12, by
skipping the silence template between the words "9" and "1".
Other inter-word silences are mapped to the corresponding
silence templates 48.
FIG 7 shows recursions for entry into an inter-word silence
template. This is illustrated for the transition from any of
the four templates of word "1" to the silence template
between words "1" and "5". The within-word 36 and across-word
recursions 38 in this case are:
Within-word recursion:
D(m,n,v) = d(m,n,v) + min_(n-2 <= j <= n) D(m-1,j,v)
Across-word recursion:
D(m,1,v) = d(m,1,v) + min { D(m-1,1,v), min_(u ∈ Pred(v)) D(m-1,Nu,u) }
Here, all terms are as in the recursion of FIG 5 except the
definition of Pred(v), where v 40 is the inter-word silence
template Rsil 48 between two consecutive words in the
password; thus, Pred(v) is the set of the multiple templates
of the preceding word in the 'password' text; for instance,
if the 'password' text is '915', then Pred(v = Rsil between 1
and 5) = {R11, R12, R13, R14}, i. e., the four templates of
word "1".
The above recursions together describe the one-pass DP
recursion for using multiple templates and inter-word silence
templates for the forced alignment matching required in
variable-text speaker recognition. The best (lowest) score
among D(T,Nr,r), r = 1, ..., L+1, where T is the last frame of
the input utterance 12 and r = 1, ..., L+1 refers to the L
multiple templates of the last word in the password text and
the last silence template (with Nr as their respective last
frames), yields the minimum accumulated distance Di of the
match between the input utterance and the 'password' text and
is used as the score 24 for that speaker i whose word-
templates were used. How this score Di is used for speaker-
identification and speaker-verification has been described
earlier.
FIG 8 shows diagrams of MFCC feature vector sequences for
different speakers for the utterance of the same word in an
experiment. The speakers are made to utter the word "one" and
feature vectors are formed from MFCCs of each utterance. The
X-axis 60 represents the frame index (typically for a frame-
size of 20 ms), the Y-axis 62 represents the MFCC dimension and
the Z-axis 64 represents the MFCC values. The diagrams show
that the MFCC feature vector sequences are different for
different speakers.
FIG 9 shows how pitch contours look different for different
speakers for the utterance of same word. The X-axis 70
represents the frame index and the Y-axis 72 represents the
pitch values. The speakers are made to utter the word "two"
and the pitch contours were obtained from the utterances. The
horizontal arrow 74 shows how the pitch contours for the same
person look similar and the vertical arrow 76 shows how the
pitch contour differs for different speakers. The pitch
contour is not impacted even if there is channel variation or
background noise as it simply measures the periodicity of the
input speech signal. Thus pitch is a reliable feature vector
to use with the MFCC vectors.
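A minimal sketch of such a pitch-contour computation is shown below, using a simple autocorrelation-based tracker. The actual pitch extractor of the system is not specified in this embodiment; the frame settings (20 ms frames at 16 kHz) and the F0 search range are assumptions of the example.

```python
import numpy as np


def pitch_contour(y, sr=16000, frame=320, hop=160, fmin=60, fmax=400):
    """Sketch: per-frame pitch values taken as the lag of the strongest
    autocorrelation peak within the plausible F0 range. Only the
    periodicity of the signal is measured, which is why the contour is
    far less sensitive to additive noise or channel changes than the
    spectral envelope."""
    lags = np.arange(int(sr / fmax), int(sr / fmin) + 1)
    f0 = []
    for start in range(0, len(y) - frame, hop):
        x = y[start:start + frame].astype(float)
        x = x - x.mean()
        ac = np.correlate(x, x, mode="full")[frame - 1:]  # lags >= 0
        if ac[0] <= 0:
            f0.append(0.0)  # silent frame: no meaningful pitch
            continue
        best_lag = lags[np.argmax(ac[lags])]
        f0.append(sr / best_lag)
    return np.array(f0)
```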
The shown embodiment of the proposed speaker recognition
system uses a one-pass DP algorithm for multiple templates as
discussed above. In a preferred embodiment of the proposed
system the feature vector is formed by the combination of one
spectral parameter such as MFCC and one pitch parameter.
Referring to FIG 10, the system 100 comprises means 110 for
receiving an input utterance 12 from a speaker, means 120 for
extracting a plurality of acoustic features 130 from the
input utterance 12, means 140 for forming a feature vector
150 by combining at least two different extracted features,
means 160 for forming a set of speaker specific multiple
templates 170 under different conditions and means 180 for
matching a further input utterance of a speaker against the
set of speaker specific multiple templates.
Referring to FIG 11, the method 200 for speaker recognition
begins with step 223 of receiving an input utterance 12 from
a speaker. In the present embodiment of the invention a text
corresponding to the input utterance is also supplied to the
system. The text corresponding to the input utterance is a
sequence of words selected from a predefined vocabulary of
words. This makes the method usable for variable-text speaker
recognition: the speaker can select words from the predefined
vocabulary to form a password and change it at any time when
needed.
At step 224 a plurality of acoustic features 130 are
extracted from the input utterance 12. The plurality of
acoustic features 130 includes spectral parameters and pitch
parameters, which are combined (for instance, to form a
heterogeneous feature vector). Spectral parameters such as
MFCCs are the features most commonly used in speaker
recognition systems.
In step 225 a feature vector 150 is formed by combining at
least one spectral parameter and at least one pitch
parameter. In the present embodiment of the invention the
spectral parameter MFCC is combined with pitch to form a
single feature vector.
In step 226 the steps 223, 224 and 225 are repeated for
different conditions. The speaker utters the same word for
different conditions of background noise, channel variation
and over a period of time to take account of intra-speaker
variability. This forms a set of speaker specific templates
170.
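A sketch of this template-set formation is given below; it reuses the mfcc_pitch_features sketch from above, and the file-per-condition layout of the recordings is an assumption of the example.

```python
def build_speaker_templates(recordings, extract=mfcc_pitch_features):
    """Sketch of step 226: form a speaker specific multiple template set 170.
    'recordings' maps each vocabulary word to the utterance files recorded
    for that word under the different conditions (noise types and levels,
    channels, sessions spread over time); each file becomes one template."""
    return {word: [extract(path) for path in paths]
            for word, paths in recordings.items()}


# e.g. recordings = {"1": ["spk1_one_quiet.wav", "spk1_one_car.wav"], ...}
# templates_170 = build_speaker_templates(recordings)
```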
In step 227 the steps 223, 224, 225 and 226 are repeated for
a plurality of speakers. Speaker specific templates 170 are
formed for these speakers and stored in the database. Now the
system is ready and an input utterance can be tested for
speaker recognition.
In step 228 the matching of a further input utterance 230
against the set of speaker specific multiple templates is
done using a one-pass dynamic programming algorithm, the
result being a match score 24 between the further input
utterance and the set of speaker specific multiple templates
for each speaker. The speaker with the lowest score is
declared as the speaker in the case of closed-set speaker-
identification.
Summarizing, the present invention relates to speaker
recognition. The proposed system 100 comprises means 110 for
receiving an input utterance 12 from a speaker, means 120 for
extracting a plurality of features 130 from the input
utterance 12, means 140 for forming a feature vector 150 by
combining at least two different extracted features, means
160 for forming a set of speaker specific multiple templates
170 under different conditions and means for matching a
further input utterance 230 of a speaker against the set of
speaker specific multiple templates 170.
Although the invention has been described with reference to
specific embodiments, this description is not meant to be
construed in a limiting sense. Various modifications of the
disclosed embodiments, as well as alternate embodiments of
the invention, will become apparent to persons skilled in the
art upon reference to the description of the invention. It is
therefore contemplated that such modifications can be made
without departing from the spirit or scope of the present
invention as defined.
We claim,
1. A method for speaker recognition, the method comprising
the steps of:
a) receiving an input utterance (12) from a speaker;
b) extracting a plurality of features (130) from the input
utterance (12);
c) forming a feature vector (150) by combining at least two
different extracted features;
d) forming a set of speaker specific multiple templates (170)
by repeating steps a, b and c under different conditions;
e) repeating the steps a, b, c and d for a plurality of
speakers; and
f) matching a further input utterance (230) of a speaker
against the set of speaker specific multiple templates (170).
2. The method according to claim 1, further comprising
receiving a text (14) corresponding to the input utterance
along with the input utterance (12) from the speaker.
3. The method according to claim 2, wherein the text (14)
corresponding to the input utterance is a sequence of words
selected from a predefined vocabulary of words.
4. The method according to any of the preceding claims,
wherein the plurality of features (130) include spectral
parameters and pitch parameters, and the feature vector is
formed by combining at least one spectral parameter and at
least one pitch parameter.
5. The method according to any of the preceding claims,
wherein a time warp method is used for matching the further
input utterance (230) against the speaker specific multiple
templates (170).
6. The method according to any of the preceding claims,
wherein the step of matching the further input utterance is
done using a one-pass dynamic programming algorithm, the
result being a match score (24) between the further input
utterance (230) and the set of speaker specific multiple
templates (170) for each speaker.
7. A system (100) for speaker recognition, the system
comprising:
a) means (110) for receiving an input utterance (12) from a
speaker;
b) means (120) for extracting a plurality of features (130)
from the input utterance (12);
c) means (140) for forming a feature vector (150) by
combining at least two different extracted features;
d) means (160) for forming a set of speaker specific multiple
templates (170) under different conditions; and
e) means (180) for matching a further input utterance (230)
of a speaker against the set of speaker specific multiple
templates (170).
8. The system (100) according to claim 7, further comprising
means for receiving a text (14) corresponding to the input
utterance along with the input utterance (12) from the
speaker, wherein the text is a subset of words selected from
a predefined vocabulary of words.
9. A method or system substantially as herein described and
illustrated in the figures of the accompanying drawings.