
System And Method Of Using Multi Pattern Viterbi Algorithm For Joint Decoding Of Multiple Patterns

Abstract: Systems, devices, and methods for using a Multi-Pattern Viterbi Algorithm for joint decoding of multiple patterns are disclosed. An exemplary method may receive a plurality of sets of time-sequential signal observations for each of a number K of signal repetitions. Further, each set of signal observations is associated with a respective dimension of a K-dimensional time grid having time-indexed points. Moreover, at each of a plurality of the time-indexed points, a state cost metric is calculated with a processor for each state in a set of states of a hidden Markov model (HMM). In addition, for each state in the set of states and for a given time-indexed point, the state cost metric calculation provides a most-likely predecessor state and a corresponding most-likely predecessor time-indexed point. The exemplary method may also determine a sequence of states using the calculated state cost metrics and determine a corresponding cumulative probability measure for the HMM.


Patent Information

Application #
2870/CHE/2009
Filing Date
20 November 2009
Publication Number
21/2011
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application

Applicants

INDIAN INSTITUTE OF SCIENCE
Bangalore - 560012, Karnataka

Inventors

1. NISHANTH ULHAS NAIR
592, Akshaya, Manorayanapalya, Sultan Palya, R.T. Nagar Post, Bangalore - 560032
2. THIPPUR VENKATANARASAIAH SREENIVAS
F-32, Sreekrishna, 12th Cross, Margosa Road, Malleshwaram, Bangalore - 560003

Specification

FORM 2
THE PATENTS ACT 1970 [39 OF 1970] & THE PATENTS RULES, 2003
COMPLETE SPECIFICATION [See section 10; rule 13]
Title: “SYSTEM AND METHOD OF USING MULTI PATTERN VITERBI ALGORITHM FOR JOINT DECODING OF MULTIPLE PATTERNS”
Address of the Applicant: INDIAN INSTITUTE OF SCIENCE, Bangalore 560 012, Karnataka, India. Nationality: Indian
Attorney’s Docket No.: “MBHB Case No. 09-1146”
The following specification particularly describes the invention and the manner in which it is to be performed.

BACKGROUND

In day-to-day telephone/mobile conversations, a listener may often ask a speaker to repeat certain portions of their speech because the listener is unable to understand those portions. Such situations arise more often in the presence of background noise, where the intelligibility of speech is affected significantly. Speech recognition systems, devices, and methods can utilize such repeated information, especially in the presence of heavy/bursty background noise, to better discern speech for various applications.

Some speech recognition systems, such as Automatic Speech Recognition (ASR) systems, work well when test and training conditions are comparable. An example of an ASR system is the speech recognition system used in an automated call center for an airline. Many speech recognition systems, including ASR systems, store training data that includes data representing the most likely used parts of speech. Training data is unaffected by ambient noise, different speaker accents, or any other negative audio effects on the speech data. However, real-world testing environments differ from training conditions. Various factors like additive noise, acoustic echo, and speaker accent may affect speech recognition performance in many real-world test environments. Since ASR can be characterized as a statistical pattern recognition problem, if the test patterns are unlike anything used to train the models, then errors may occur. Various approaches to increase robustness in ASR technology have been proposed, including: (i) reducing the variability of the model or (ii) modifying the statistical model parameters to suit the noisy condition. However, under very high noise conditions or bursty error channels, such as in packet communication where packets may be dropped, speech recognition systems may benefit from taking the approach of using repeated utterances to accurately decode speech.

SUMMARY

The present application discloses systems, devices, and methods for using a Multi-Pattern Viterbi Algorithm to detect signals from multiple patterns. One embodiment of the disclosure may be a method that receives a plurality of sets of time-sequential signal observations for each of a number K of signal repetitions. Further, each set of signal observations is associated with a respective dimension of a K-dimensional time grid having time-indexed points. Moreover, at each of a plurality of the time-indexed points, a state cost metric is calculated with a processor for each state in a set of states of a hidden Markov model (HMM). In addition, for each state in the set of states and for a given time-indexed point, the state cost metric calculation provides a most-likely predecessor state and a corresponding most-likely predecessor time-indexed point. In one embodiment, the method includes determining a cost metric at the final state and the terminal time-indexed point.
This may also be referred to as a cumulative probability measure that the observations were generated by the corresponding HMM. Thus, some methods further include determining the cost metrics at the final state at the terminal time-indexed point for each of a plurality of HMMs, and then selecting the smallest cost metric and its corresponding HMM. The corresponding HMM is then used to identify the pattern (which, for example, may be a word in a speech recognition system). Some methods may also include determining a sequence of states using the calculated state cost metrics, as well as determining a corresponding cumulative probability measure for the HMM. Furthermore, some methods involve repeating the calculation of the state cost metric for each state in the set of states for the plurality of time-indexed points and determining a most-likely sequence and corresponding cumulative probability measure for a plurality of HMMs. Thereafter, the method may identify a most-likely HMM based on the corresponding cumulative probability measures, or cost metrics, for the plurality of HMMs. In some methods, for a given one of the plurality of time-indexed points, the state cost metric for each state in the set of states is determined by: calculating a cost metric associated with each possible prior state at each possible predecessor time-indexed point; and selecting the lowest cost metric for each state. In some embodiments, for a given possible predecessor state, the state cost metrics are based only on observations associated with dimensions that are incremented when moving from the given predecessor time-indexed point to the given one of the plurality of time-indexed points. Additionally, some methods may include determining a most-likely sequence of states by identifying a lowest state cost metric at a final state at a terminal time-indexed point. A plurality of time-indexed points may be used so as to restrict the points from the time-dimensioned grid that are used, and may be determined with respect to a predetermined distance from a diagonal line through the K-dimensional space. The predetermined distance may be based on differences in the respective time durations of the observation sequences. The method may include calculating a given state cost metric for a given state based on: state cost metrics for all states associated with all candidate predecessor time-indexed points; the probability of transitioning from each state of each candidate predecessor time-indexed point to the given state; the respective probability of transitioning from the respective candidate predecessor time-indexed point; and the joint probability of the observations being emitted from the state in the set of states. The determined sequence of states may also determine an alignment of the sets of observations.
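As a non-authoritative illustration of the model-selection step just described (not part of the original specification), the following Python sketch scores each candidate HMM by its cumulative cost metric and picks the smallest; `mpva_cost` is a hypothetical placeholder for the Multi-Pattern Viterbi computation detailed later in this specification.

```python
# Hypothetical sketch: pick the HMM (e.g., word model) whose MPVA cost is smallest.
# `mpva_cost` is a placeholder for the cost metric at the final state and
# terminal time-indexed point, computed by the Multi-Pattern Viterbi Algorithm.

def recognize(observation_sets, hmms):
    """observation_sets: list of K observation sequences (one per repetition).
    hmms: dict mapping a pattern label (e.g., a word) to its HMM parameters."""
    costs = {label: mpva_cost(observation_sets, hmm) for label, hmm in hmms.items()}
    # The smallest cost metric identifies the most likely pattern.
    return min(costs, key=costs.get)
```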
Some embodiments described herein may take the form of an article of manufacture including a computer-readable medium, such as a solid state memory, compact disk, digital video disk ROM, magnetic storage medium, and the like, having instructions stored thereon that, if executed by a computing device, cause the computing device to perform operations comprising: retrieving from memory a number K of sets of time-sequential signal observations for each of a number K of signal repetitions, wherein each set of signal observations is associated with a respective dimension of a K-dimensional time grid having time-indexed points; retrieving from memory a set of parameters for each of a plurality of HMMs; calculating a state cost metric for each state in a set of states of a given HMM at each of a plurality of the time-indexed points, wherein for each state in the set of states and for a given time-indexed point, the state cost metric calculation provides a most-likely predecessor state and a corresponding most-likely predecessor time-indexed point; determining a cumulative probability measure for each of the plurality of HMMs; and determining a most-likely HMM from the plurality of HMMs.

In other embodiments, an apparatus comprises a processor executing software applications stored in memory, the software instructions including: calculating a state cost metric for each state in a set of states of a given HMM at each of a plurality of the time-indexed points, wherein for each state in the set of states and for a given time-indexed point, the state cost metric calculation provides a most-likely predecessor state and a corresponding most-likely predecessor time-indexed point; optionally determining a sequence of states using the calculated state cost metrics; determining a corresponding cumulative probability measure for each of the plurality of HMMs; and determining a most-likely HMM from the plurality of HMMs. In some embodiments, the apparatus further comprises a memory that stores: a digital representation of a plurality of sets of time-sequential signal observations for each of a number K of signal repetitions, wherein each set of signal observations is associated with a respective dimension of a K-dimensional time grid having time-indexed points, and a set of parameters for each of a plurality of HMMs. In other embodiments, the apparatus further comprises an audio receiver that: receives a plurality of sets of time-sequential audio signal observations for each of a number K of signal repetitions, wherein each set of audio signal observations is associated with a respective dimension of a K-dimensional time grid having time-indexed points, and converts the plurality of sets of time-sequential audio signal observations into a plurality of sets of time-sequential analog electrical signal observations for each of a number K of signal repetitions. In still other embodiments, the apparatus further comprises an analog-to-digital converter that transforms the plurality of sets of time-sequential analog electrical signal observations into the digital representation of the plurality of sets of time-sequential signal observations for each of a number K of signal repetitions.

The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is an example of a Hidden Markov Model for an example speech recognition system.
Figure 2 shows an exemplary speech recognition application incorporating aspects of a Multi-Pattern Viterbi Algorithm.
Figure 3 is a functional block diagram of an example speech recognition system using a Multi-Pattern Viterbi Algorithm.
Figure 4 is an example flowchart describing an example method of using a Multi-Pattern Viterbi Algorithm to decode speech from multiple speech utterances received by the speech recognition system.
Figure 5 is an example time path for decoding speech using K=2 patterns of speech in a Multi-Pattern Viterbi Algorithm.
Figure 6 is a three-dimensional grid that shows the optimum state sequence and the optimum time path using K=2 patterns of speech in a Multi-Pattern Viterbi Algorithm.
Figure 7 is a block diagram illustrating an example computing device 700 that is arranged for a speech recognition system using a Multi-Pattern Viterbi Algorithm.
Figure 8 is a flowchart for an example method 800 of detecting a signal from multiple signal observations using the Multi-Pattern Viterbi Algorithm.

DETAILED DESCRIPTION

In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented herein. It will be readily understood that the aspects of the present disclosure, as generally described herein and illustrated in the Figures, can be arranged, substituted, combined, separated, and designed in a wide variety of different configurations, all of which are explicitly contemplated herein.

Described herein are systems, devices, and methods for using a Multi-Pattern Viterbi Algorithm for joint decoding of multiple patterns. Generally, the embodiments described herein incorporate robust speech recognition techniques using a Multi-Pattern Viterbi Algorithm (MPVA). Considering the analogy of human communication over telephones, a listener may ask a speaker to repeat certain portions of their speech because the listener does not understand the speaker. Such situations occur more often in the presence of background noise, where the intelligibility of speech is affected significantly. Under very high noise conditions or bursty error channels, such as in packet communication where packets may be dropped, a speech recognition system may benefit from taking the approach of using repeated utterances in implementing speech recognition techniques. Although the MPVA may be used in speech recognition systems and methods, the MPVA may also be used in any system or method that detects a signal from multiple patterns. Further, embodiments may be used in a variety of applications including mobile telephone technologies, command and control applications, speech recognition in railway stations, military applications, robotics technologies, and pronunciation estimation, as well as many non-speech applications. Many applications may have a need to accurately discern speech from a speaker in the presence of significantly adverse background noise.
For example, speech recognition systems in mobile telephones do not work well in the presence of transient noises like car noise, road noise, etc. Embodiments may allow a mobile telephone user to repeat the name of the person the user would like to call and thereby increase speech recognition performance, especially in the presence of heavy/bursty noise. Further embodiments may be incorporated in command and control applications, such as in a noisy cockpit where the pilot would like to give instructions. Likewise, embodiments may be used in noisy environments such as a railway station where many people are speaking in the background (called babble noise). In military applications, soldiers may communicate with automated devices that incorporate speech recognition systems. Consequently, speech recognition systems may decode speech from soldiers on a battlefield where there is a high degree of ambient noise due to bullets from machine guns, shells from artillery, etc. Also, embodiments may be used in robotic industrial applications where a robot can use multiple repetitions of speech from a human controller to learn/recognize commands in a factory or other industrial environment. Further embodiments may be applicable in pronunciation estimation to jointly estimate pronunciations from multiple patterns. Embodiments may also be incorporated in various widely used applications like speech recognition, bioinformatics, telecommunications, linguistics, image processing, keyword spotting, etc., and any application where dynamic programming (e.g., the Viterbi algorithm) can be used.

A Hidden Markov Model (HMM) and dynamic programming may be used in many speech recognition techniques. A HMM is a statistical model in which the system being modeled is assumed to be a Markov process with unobserved state. A Markov process is a mathematical model for a memoryless system, in which the likelihood of a given future state, at any given moment, depends only on the present state and not on any past states. In a regular Markov process, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. An example of a Markov process may be the sequence of results from flipping a coin. The result of flipping a coin may be modeled as a random variable. There are two equally likely results for a random variable of flipping a coin, Heads and Tails, each with a probability equal to 0.5. The sum of the probabilities of all the outcomes of a random variable is 1. Further, a random process is a sequence of random variables, and hence the sequence of results from flipping a coin can be modeled as a random process. In addition, the result of flipping a coin does not depend on the result of the previous coin flip and hence can be described as memoryless. Therefore, the sequence of results from flipping a coin can be modeled as a Markov process. An example of a random process that is not memoryless may be the result of picking colored marbles from a bag without replacement. For example, a bag may contain five black marbles and five white marbles. The probability that a first pick from the bag is a black marble is 0.5. However, the probability that a second pick from the bag is a black marble depends on the result of the first pick. If the first pick was a black marble, then the probability that the second pick is a black marble is 4/9 ≈ 0.44.
Conversely, if the first pick from the bag was a white marble, then the probability that the second pick is a black marble is 5/9 ≈ 0.56. Thus, the probability of a certain result of picking a marble depends on past results. Therefore, such a random process is not memoryless.

Unlike a Markov process, in a Hidden Markov Model the state is not directly visible; instead, an output event is observed that is dependent on the state. Each state has a transition probability to remain in the state or transition to another state. Further, each state has an emission probability for each output event. Therefore, the sequence of output events generated by a HMM gives some information about the sequence of states. The term “hidden” refers to the state sequence through which the model passes, not to the parameters of the model (such as the transition probabilities or the emission probabilities). Even if the model parameters are known exactly, the model is still “hidden” because the states are not visible to an observer. Hidden Markov models are especially known for their application in temporal pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges, and bioinformatics.

Figure 1 is an example of a Hidden Markov Model 100 for an example speech recognition system. The HMM 100 may model a speech utterance, for example, of a potential caller to an automated call center for an airline. The example speech recognition system may be used to discern a flight number in the caller’s utterance. Further, the airline may have a spoken word “Five” as a digit in a flight number. Thus, the HMM used in the speech recognition system may include the phonemes of “Five” as states. A phoneme is the smallest segmental unit of sound employed to form meaningful contrasts between utterances. For example, “F,” “i,” and “ve” are three different phonemes contained in the word “Five.” Hence, Figure 1 shows a portion of an HMM with two states, “F” and “i,” representing the first two phonemes in the word “Five” (105, 120). Further, the HMM has the observed outputs “F” and “i” (160, 135). The HMM 100 also shows the state transition probabilities for transitioning from one state to the next (115, 170, 180) or remaining in a state (110, 115), as well as the emission probabilities from each state to an observed output (140, 145, 150, and 155). The emission probability is the likelihood of observing an event given the state of the HMM 100. The sum of all the transition probabilities into a state (110, 170, 115, 120), as well as the sum of the transition probabilities out of a state (110, 115, 125, 180), is equal to one. In addition, the sum of the emission probabilities into an observed output is also equal to one (140, 145, 150, 155). The HMM 100 in Figure 1 may be only a portion of a larger HMM used by the speech recognition system. Thus, there may be a state transition from a state other than “i” to state “F,” as indicated by transition 170 with a probability of 0.3. Further, there may be a state transition from “i” to a state other than “F,” as indicated by transition 180 with probability 0.3. Oftentimes, when implementing an application, it may be helpful to find a most likely sequence of states in a HMM. Many different methods may be used in finding the most likely sequence of states in a HMM.
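For illustration only (not part of the original specification), a Figure-1-style HMM fragment might be represented in Python as below. The two states follow the phonemes described above; the numeric probabilities are assumptions chosen to be consistent with the 0.3 transitions mentioned in the text and the requirement that probabilities out of a state sum to one, not values read from the figure itself.

```python
# A minimal sketch of an HMM fragment like the one in Figure 1.
# State names follow the phoneme states "F" and "i"; the numeric
# probabilities are illustrative assumptions, not values from the figure.

states = ["F", "i"]

# transition[s1][s2]: probability of moving from state s1 to state s2
# (each row sums to 1, as required of transition probabilities).
transition = {
    "F": {"F": 0.7, "i": 0.3},      # remain in "F" or advance to "i"
    "i": {"i": 0.7, "other": 0.3},  # remain in "i" or advance to a later state
}

# emission[s][o]: probability that state s emits observed output o
# (the emission probabilities into each observed output sum to 1).
emission = {
    "F": {"F": 0.9, "i": 0.1},
    "i": {"F": 0.1, "i": 0.9},
}
```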
Dynamic programming is a mathematical optimization method that simplifies a complex problem by breaking it down into simpler subproblems in a recursive manner, and it may be used to find a most likely sequence of states in a HMM. Further, the Viterbi algorithm is an example of a dynamic programming algorithm for finding the most likely sequence of hidden states (called the Viterbi path) that results in a sequence of observed events in a Hidden Markov Model. Embodiments described herein utilize a Multi-Pattern Viterbi Algorithm (MPVA), which is a novel dynamic programming method that may be used in many decoding and signal detection applications that analyze multiple patterns. One such application is a speech recognition system. In the exemplary speech recognition system, multiple utterances are jointly decoded to recognize the speech patterns. Persons of ordinary skill in the art would understand that the MPVA can also be used in any application that can use dynamic programming methods.

Figure 2 shows an exemplary speech recognition application 200 using the MPVA. A caller 210 may contact and communicate with an airline automated call center 225 using a mobile telephone 215 across a wireless communication network 220. The automated call center may have a speech recognition system 255 that receives calls from airline customers such as caller 210. Further, the speech recognition system may request a flight number from the caller 210 to access flight information for the caller 210, for example. The caller 210 may then utter a digit of a flight number, such as “Five.” The speech recognition system 255 may then request the caller 210 to repeat the utterance “Five.” Each utterance of the digit of the flight number “Five” may be represented by an audio signal as shown in graphical representations (230, 240) in Figure 2. The benefits of the speech recognition system 255 having more than one utterance to decode the flight number given by the caller 210 are illustrated in the graphical representations of the two utterances (230, 240). Due to ambient noise from the airport surroundings 205 of the caller 210, bursty noise may affect different portions (245, 250) of the audio signal in each utterance (230, 240). Consequently, by processing the audio signal of a single utterance, the speech recognition system 255 may inaccurately decode the flight number uttered by the caller 210. However, by processing both utterances (230, 240), which contain the same sequence of phonemes (e.g., the flight number), the speech recognition system 255 may receive two audio signals where bursty noise may have affected different parts of each signal (245, 250). Therefore, the speech recognition system may use the MPVA to accurately decode the speech uttered by the caller 210 using the two repeated utterances.

Speech recognition systems may use HMMs to assist in decoding speech from one or more received audio signals. Each audio signal received by the speech recognition system may take one of many different forms. The processed signals may be analyzed as a time-sequence of observed outputs, or more simply observations, of the HMM used in the speech recognition system. One exemplary processing of an audio signal may be calculating the Mel-frequency Cepstral Coefficients (MFCCs) for the audio signal. In some embodiments, an MFCC vector may be calculated for every 20 milliseconds of sampled audio data.
In some embodiments, the MFCCs may be calculated using overlapping intervals, such as by processing 20 milliseconds of audio data, then shifting by 10 milliseconds and processing the audio data in that interval, and so on. Cepstral coefficients may be found by processing the decibel spectrum of the audio signal, such as by taking the Fourier Transform, for example. MFCCs may be one of many different features, or observations, for an audio signal in a speech recognition system. Features are the individual measurable heuristic properties of sound phenomena that may be observed outputs of the HMM in a speech recognition system. Features, or observations, can include MFCCs, spectral density, spectral energy, noise ratios, length of sounds, relative power, filter matches, etc., of portions of an audio signal.

Figure 3 is a functional block diagram of an example speech recognition system using a Multi-Pattern Viterbi Algorithm decoder. The example speech recognition system 300 may be used in an airline automated call center. The automated call center may ask a caller for a flight number to access requested information. Further, the automated call center may request the caller to repeat the flight number several times to ensure accurate recognition of the caller’s speech. When the caller makes an utterance of a flight number, such as “Five” or “Nine,” the speech recognition system receives an audio signal that represents the caller’s utterance by using a receiver 310. The receiver 310 may be a microphone, acoustic transducer, or some other audio receiver that converts an audio signal into an analog electrical signal. The receiver may forward the analog electrical signal to an analog-to-digital converter (ADC) 315 to transform the analog electrical signal into digital data that represents the analog electrical signal as well as the audio signal. The analog-to-digital converter 315 may store the digital data in a signal storage portion 330 of system memory 320. Of course, the sampled voice data may be provided by any number of means: the receiver and ADC may be provided by a handset and/or portions of a public switched telephone network, or by a microphone and ADC associated with a computer workstation, and as such are not necessary components of the system.

In addition, a processor 350 may be part of the example speech recognition system. The processor 350 may contain a MFCC subprocessor 360 that can access and process the stored digital data of the audio signal to obtain Mel-frequency Cepstral Coefficients (MFCCs) to be the feature vectors for the speech recognition system. The time sequence of feature vectors of a given utterance of a word or phrase will then form the time sequence of observations for the given utterance. The processor may also include a MPVA decoder 370 that receives the observations in the form of MFCC feature vectors and accesses the HMM data from HMM storage portion 340 of the system memory 320. The HMM data includes well-known parameters generally denoted as λ. The MPVA decoder 370 performs the MPVA to decode speech from the plurality of utterances retrieved from the memory device 320. In addition, the memory device 320 may store program instructions that may control the execution of the MPVA on the processor 350.
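The overlapping-interval framing described above can be sketched as follows. This is an illustrative example only, assuming 16 kHz sampled audio, a 20 ms window, and a 10 ms shift; the MFCC computation itself is left as a hypothetical placeholder.

```python
import numpy as np

# Illustrative sketch of overlapping-window framing for feature extraction,
# assuming 16 kHz audio, 20 ms windows, and a 10 ms shift between windows.
SAMPLE_RATE = 16000
FRAME_LEN = int(0.020 * SAMPLE_RATE)    # 20 ms -> 320 samples
FRAME_SHIFT = int(0.010 * SAMPLE_RATE)  # 10 ms -> 160 samples

def frame_signal(samples):
    """Split a 1-D array of audio samples into overlapping frames."""
    frames = []
    for start in range(0, len(samples) - FRAME_LEN + 1, FRAME_SHIFT):
        frames.append(samples[start:start + FRAME_LEN])
    return np.array(frames)

# Each frame would then be converted to a feature vector (e.g., an MFCC
# vector); `compute_mfcc` is a hypothetical placeholder for that step.
# observations = [compute_mfcc(frame) for frame in frame_signal(audio)]
```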
In some embodiments, the system may optionally include a digital-to-analog converter (DAC) 375. The DAC 375 transforms the digital data representing the decoded speech into an analog electrical signal. Further, the DAC forwards the analog electrical signal to a system output interface 380, which may be a speaker or some other acoustic transducer device that converts an analog electrical signal into an audio signal that represents the decoded speech. Hence, the speech recognition system 300 may recite the audio signal to the caller for the caller to verify the decoded speech. If the caller indicates that the decoded speech is not accurate, the speech recognition system may request the caller to articulate another repeated utterance so that the MPVA may have more data to accurately decode the caller’s speech.

Figure 4 is an example flowchart 400 describing an example method of using a Multi-Pattern Viterbi Algorithm to decode speech from multiple speech utterances received by the speech recognition system. A preliminary step in decoding speech may be to select features of an audio signal to be used in speech recognition 405. An example of a feature of a speech utterance may be the Mel-frequency Cepstral Coefficients of the speech utterance. Another preliminary step may be to train one or more HMMs 410, each of which includes a number of states, state transition probabilities, and an emission probability density function of observed outputs. A HMM may be developed from analyzing noise-free speech. For example, a HMM may be trained to analyze a speech utterance “Voice Dialer” by several different speakers, male and female, with different accents (American, British, etc.). Such a speech utterance may include 8 states, each corresponding to a phoneme in the speech utterance: “V,” “oi,” “ce,” “D,” “i,” “a,” “l,” “er.” During training of such an HMM, the state transition and emission probabilities for the speech utterance “Voice Dialer” may be found. In one embodiment, a plurality of HMMs are used in the system: there is a separate HMM for each pattern to be identified. In a voice recognition system, each of the HMMs corresponds to a different word (or phrase). The HMMs may be derived by the system via training, or may be provided to the system.

The speech recognition system implementing the example method 400 may receive a plurality of audio signals that represent repeated speech utterances from a caller 415. The plurality of audio signals may be received by an audio receiver, microphone, acoustic transducer, or some other device. A further step in the example method 400 may be processing each audio signal into a digital data representation of each audio signal 425. Processing may include transforming each audio signal into an analog electrical signal by an audio receiver, microphone, acoustic transducer, etc. Thereafter, each analog electrical signal may be converted to digital data using an analog-to-digital converter. The digital data may be stored in the memory of the speech recognition system. In other embodiments, the digitized audio data samples are provided to the system and stored in memory. The analog-to-digital conversion, and the associated means and methods of obtaining the sampled audio data, are known to persons skilled in the art and are not significant here. In addition, the speech recognition system may calculate Mel-frequency Cepstral Coefficients (MFCCs) 430 based on the digital data representing each audio signal corresponding to an utterance.
A processor or subprocessor may access the digital data from system memory to compute the MFCCs. The MFCCs are the time sequence of observations to be used by the MPVA. Further, the MPVA may use different features instead of MFCCs, such as LPCCs, sound length, noise ratios, etc. LPCCs represent the spectral envelope of a digital signal of speech in compressed form, using the information of a linear predictive model.

In another step 440, the speech recognition system applies the MPVA to the MFCCs using one or more HMMs stored in system memory. The MPVA may be implemented by a MPVA decoder embodied in a processor executing software instructions. When applying the MPVA, the speech recognition system receives $K$ observation sequences denoted as $O^1_{1:T_1}, O^2_{1:T_2}, \ldots, O^K_{1:T_K}$ with frame lengths $T_1, T_2, \ldots, T_K$ respectively, where $O^i_{1:T_i} = (O^i_1, O^i_2, \ldots, O^i_{T_i})$ and $O^i_{t_i}$ is the feature vector of the $i$-th pattern at time frame $t_i$. The set of feature vectors may be the computed MFCCs. Each of the $K$ observation sequences may belong to the same pattern class (e.g., a spoken word) and hence be decoded by a single HMM denoted as $\lambda$. However, the speech recognition system may use one or more HMMs, each corresponding to a word. Moreover, the $K$ observation sequences may be different utterances of the same word by the same speaker. The MPVA is able to jointly decode these $K$ patterns to discern the speaker’s speech.

To visualize the joint decoding of multiple received utterances using the MPVA, consider an extension of a standard HMM trellis to $K+1$ dimensions, where $K$ dimensions correspond to the $K$ patterns, and thus define a $K$-dimensional time grid, and one dimension corresponds to the set of HMM states for a given HMM. Similar to the two-dimensional trellis of a standard HMM, the trellis grid traversal is monotonic along all the $K$ time axes, $T_1$ to $T_K$. This is because the observations of the repeated utterances progress jointly through time in a forward manner. One can see that if the repeated patterns (e.g., multiple utterances of the same word for a speech recognition algorithm) were exactly synchronized (e.g., with all of the phonemes occurring at the same relative time offsets), then one would expect the optimal path to be along the diagonal line through the $K$-dimensional time-indexed space. If, on the other hand, the utterances are not perfectly registered with each other, as one might expect, then the $K$ observations for the corresponding portions of the pattern would not line up along the diagonal. Indeed, observation sequences are typically of different lengths for each pattern or utterance. Thus, as described below, the optimal path may not lie along the diagonal, and the algorithm accommodates different rates of use, or consumption, of the observations of each set of observations (because each set of observations is associated with one of the $K$ time dimensions). The state transitions along the $(K+1)$-th dimension are determined by the HMM state-transition matrix (either ergodic or left-to-right). The HMM state sequence can be defined as follows:

$$q \overset{\text{def}}{=} q_{\phi(1):\phi(T)} = [q_{\phi(1)}, \ldots, q_{\phi(T)}] \qquad (1)$$

where $q_{\phi(t)} \in \{1, \ldots, N\}$ is the state index at time $\phi(t) = (t_1, \ldots, t_K)$, where $\phi(t)$ is represented by a point in the $K$-dimensional grid, and $N$ is the number of HMM states.
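To make the indexing concrete, the following Python fragment (an illustrative sketch with hypothetical names, not part of the specification) represents a grid point $\phi(t)$ as a $K$-tuple of time indices and gathers the feature vectors that each pattern contributes at that point.

```python
# Illustrative sketch: K observation sequences and K-dimensional grid points.
# Each observation sequence is a list of feature vectors (e.g., MFCC vectors),
# possibly of different lengths T_1, ..., T_K.

# K = 2 example: two repetitions of the same word, as lists of feature
# vectors; `utterance_1_features` and `utterance_2_features` are hypothetical.
obs = [utterance_1_features, utterance_2_features]

K = len(obs)
T = tuple(len(o) for o in obs)   # (T_1, ..., T_K), the grid's far corner
origin = (1,) * K                # phi(1) = (1, ..., 1)

def vectors_at(phi, increments):
    """Feature vectors consumed at grid point phi, given which dimensions
    were incremented (the non-zero entries of delta-phi)."""
    return [obs[i][phi[i] - 1] for i in range(K) if increments[i]]
```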
Because the state sequence depends on the evolution of $\phi(t)$, another variable may be defined, $\Phi = [\phi(1), \phi(2), \ldots, \phi(T)]$, on the $K$-dimensional time space or grid (see Figure 5), such that $\phi(1) = (1, \ldots, 1)$ and $\phi(T) = (T_1, \ldots, T_K)$. $\Phi$ is the time alignment between the $K$ patterns. Any point in the $(K+1)$-dimensional trellis grid can be represented by $\{q_{\phi(t)}, \phi(t)\}$. Moreover, $t$ represents the hop number along each coordinate in the $K$-dimensional grid (see Figure 5) and $\phi(t) = (t_1, \ldots, t_K)$ represents a single point in that $K$-dimensional space. Further, $t$ moves from 1 to $T$, where $T$ is the total number of hops needed to traverse from $(1, \ldots, 1)$ to $(T_1, \ldots, T_K)$. The value of $T$ depends on the path traversed, and its value for the optimum time path can be known only after path backtracking.

Three objectives for the MPVA may be to determine the total joint multi-pattern likelihood, the optimum state sequence, and the optimum time path, which are stated in Equations (2)-(6) below. The optimum state sequence reveals the decoded speech when the MPVA is used in a speech recognition system. The total joint multi-pattern likelihood may be defined as:

$$P(O^1_{1:T_1}, \ldots, O^K_{1:T_K}; \lambda) = \sum_{\forall q_{\phi(1):\phi(T)}} P(O^1_{1:T_1}, \ldots, O^K_{1:T_K}, q_{\phi(1):\phi(T)}; \lambda) \qquad (2)$$

Considering all the valid paths through the $(K+1)$-dimensional grid, the joint $K$-pattern likelihood along the optimum HMM state sequence $q^*$ and optimum time path $\Phi^*$ can be defined as follows:

$$P(O^1_{1:T_1}, \ldots, O^K_{1:T_K}, q^*, \Phi^*; \lambda) = \max_{\forall (q, \Phi)} P(O^1_{1:T_1}, \ldots, O^K_{1:T_K}, q, \Phi; \lambda) \qquad (3)$$

Equation (2) is the total probability of all the $K$ patterns with respect to the given HMM $\lambda$. Further, the maximum likelihood (optimum) HMM state sequence $q^*$ and optimum time path $\Phi^*$ may be found as follows:

$$(q^*, \Phi^*) = \underset{(q, \Phi)}{\operatorname{argmax}}\; P(q, \Phi \mid O^1_{1:T_1}, \ldots, O^K_{1:T_K}; \lambda) \qquad (4)$$

$$(q^*, \Phi^*) = \underset{(q, \Phi)}{\operatorname{argmax}}\; P(q, \Phi, O^1_{1:T_1}, \ldots, O^K_{1:T_K}; \lambda) \qquad (5)$$

$$(q^*, \Phi^*) = \underset{(q, \Phi)}{\operatorname{argmax}}\; P(O^1_{1:T_1}, \ldots, O^K_{1:T_K} \mid q, \Phi; \lambda)\, P(q, \Phi; \lambda) \qquad (6)$$

$(\Phi^*, q^*)$ is determined jointly by traversing through the $(K+1)$-dimensional grid. In the grid, the MPVA traverses from $\phi(1)$ to $\phi(T)$ in a breadth-first manner, covering all the time axes, in single steps. The recursive update for the partial path through the grid (similar to the standard HMM Viterbi algorithm) can be calculated. The term $\delta_{\phi(t)}(j)$ may be defined as:

$$\delta_{\phi(t)}(j) = \max_{\{q_{\phi(1)}, \ldots, q_{\phi(t-1)},\; \phi(1), \ldots, \phi(t-1)\}} P(O^1_{1:t_1}, \ldots, O^K_{1:t_K},\, q_{\phi(1):\phi(t-1)},\, q_{\phi(t)} = j,\, \phi(1), \ldots, \phi(t); \lambda) \qquad (7)$$

$\delta_{\phi(t)}(j)$ is the accumulated likelihood while traversing through a multi-dimensional grid and can be described as the least cost metric for traversing from a state $i$ to $j$ and from the time-indexed point $\phi(t-1)$ to $\phi(t)$. Traversing through the grid implies that portions of the $K$ patterns are matched with respect to the HMM states. Thus, $\delta_{\phi(t)}(j)$ may be considered a measure of the partial pattern likelihood or a cost metric. Each pattern is a sequence of MFCC vectors, of length $T_1$ or $T_2$ or $\ldots$ or $T_K$. The application of the MPVA can be described as having several steps. These steps may include, but are not limited to, Initialization 445, Recursion 450, Termination 455, and Path Backtracking 460.
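The four steps just listed can be arranged as in the following illustrative skeleton. This is a hedged sketch, not the specification's implementation; the helpers `grid_points`, `predecessors`, `trans_prob`, and `joint_emission` are hypothetical placeholders for the computations given in Equations (8)-(18) below.

```python
# Illustrative skeleton of the four MPVA steps; helpers are placeholders.

def mpva(obs, hmm):
    delta = {}    # delta[(phi, j)]: best partial likelihood (Eq. 7)
    backptr = {}  # backptr[(phi, j)]: (best prior state, best prior point)

    # Initialization (Eq. 8): the starting point phi(1) = (1, ..., 1) uses
    # the first feature vector of every pattern (prev=None signals this).
    origin = (1,) * len(obs)
    for j in hmm.states:
        delta[origin, j] = hmm.pi[j] * joint_emission(obs, origin, None, hmm, j)

    # Recursion (Eqs. 9-10): populate each grid point from its predecessors,
    # visiting points in a monotonic order (origin excluded).
    for phi in grid_points(obs):
        for j in hmm.states:
            candidates = [
                (delta[prev, i] * hmm.a[i][j] * trans_prob(prev, phi)
                 * joint_emission(obs, phi, prev, hmm, j), i, prev)
                for prev in predecessors(phi)   # the 2^K - 1 LCC points
                for i in hmm.states
            ]
            score, i, prev = max(candidates, key=lambda c: c[0])
            delta[phi, j] = score
            backptr[phi, j] = (i, prev)

    # Termination (Eqs. 14-16): best final state at the point (T_1, ..., T_K).
    terminal = tuple(len(o) for o in obs)
    j_star = max(hmm.states, key=lambda j: delta[terminal, j])

    # Path Backtracking (Eqs. 17-18): follow stored pointers to the origin.
    path = [(terminal, j_star)]
    phi, j = terminal, j_star
    while phi != origin:
        j, phi = backptr[phi, j]
        path.append((phi, j))
    return delta[terminal, j_star], list(reversed(path))
```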
In an initialization step 445, the initial probability $\delta_{\phi(1)}(i)$ may be denoted as:

$$\delta_{\phi(1)}(i) = P(O^1_1, \ldots, O^K_1,\, q_{\phi(1)} = i,\, \phi(1); \lambda) = \pi_i\, P(\phi(1))\, b_i(O^1_1, \ldots, O^K_1) \qquad (8)$$

where $i = 1, \ldots, N$; $\pi_i = P(q_{\phi(1)} = i)$ is the initial state distribution; and $b_i(O^1_1, \ldots, O^K_1) = P(O^1_1, \ldots, O^K_1 \mid q_{\phi(1)} = i, \lambda)$. Hence, $\delta_{\phi(1)}(i)$ is an initial value of equation (7) and may be considered a measure of likelihood at the starting point of the best path (i.e., $(1, 1, \ldots, 1)$), where the first set of observations from all the repeated utterances is used. The starting probability is assigned in each of the permitted HMM states and controlled by $\pi_i$. Also, $b_i(O^1_1, \ldots, O^K_1)$ is the probability density function of the HMM state $i$.

In a recursion step 450, let $\Delta\phi(t) = \phi(t) - \phi(t-1) = (\Delta t_1, \Delta t_2, \ldots, \Delta t_K)$, such that $0 \le \Delta t_i \le 1$, with at least one $\Delta t_i$ having a non-zero value. The locations of the non-zero values in the vector indicate which dimensions have been incremented, or traversed, in moving from $\phi(t-1)$ to $\phi(t)$. $\Delta\phi(t)$ may comprise at least one non-zero value and a maximum of $K$ non-zero values. Another constraint that may be used in the MPVA may include limiting movement backwards in time. That is, certain possible predecessor time-indexed points $\phi(t-1)$ may be removed from consideration when populating the time-indexed $K$-dimensional grid with state cost metrics $\delta_{\phi(t)}$, as described below. These constraints form the set of Local Continuity Constraints (LCCs) for traversing the multi-dimensional grid. For $K$ patterns, for every $\phi(t)$, an exemplary LCC may be that there are $(2^K - 1)$ possible $\phi(t-1)$s. Other types of LCCs can also be chosen. Further, let

$$S_{\phi(t)} = \{ O^i_{t_i} \mid \Delta t_i \ne 0,\; i = 1, 2, \ldots, K \}$$

be the set of observations, such as MFCC vectors, that have been mapped together at $\phi(t)$. In addition, let $\{O_{\phi(t)}\} = (O^m_{t_m}, \ldots, O^n_{t_n})$ such that $O^m_{t_m}, \ldots, O^n_{t_n}$ are all the feature vectors in the set $S_{\phi(t)}$. Moreover, $\{O_{\phi(t)}\}$ may be a subset of the vectors $(O^1_{t_1}, O^2_{t_2}, \ldots, O^K_{t_K})$, retaining only those $O^k_{t_k}$ whose $\Delta t_k$ are non-zero. The set $S_{\phi(t)}$ and $\{O_{\phi(t)}\}$ can have a minimum of one feature vector and a maximum of $K$ feature vectors. The $\Delta t_k$ that are zero may indicate that the feature vector $O^k_{t_k}$ at that time index is not emitted, due to noise in the signal, time warping, etc. Thus, it can be shown that:

$$\delta_{\phi(t)}(j) = \max_{\{i,\, \phi(t-1)\}} \left\{ \delta_{\phi(t-1)}(i)\; a_{ij}\; P(\phi(t) \mid \phi(t-1))\; b_j(\{O_{\phi(t)}\}) \right\} \qquad (9)$$

where $\phi(t)$ varies from $\phi(1) = (1, \ldots, 1)$ to $\phi(T) = (T_1, \ldots, T_K)$ and $i, j = 1, 2, \ldots, N$. Moreover, $a_{ij}$ is the state transition probability from state $i$ to state $j$ (as in a standard HMM), $P(\phi(t) \mid \phi(t-1))$ is the probability of moving to $\phi(t)$ from $\phi(t-1)$, and $b_j(\{O_{\phi(t)}\})$ is the joint likelihood of $\{O_{\phi(t)}\}$ being emitted by state $j$. Further, $b_j(\{O_{\phi(t)}\})$ is the same as the joint likelihood of all the vectors $O^m_{t_m}, \ldots, O^n_{t_n}$ emitted by state $j$, where $O^m_{t_m}, \ldots, O^n_{t_n}$ consist of all the feature vectors in the set $S_{\phi(t)}$. Thus, an HMM state $j$ can emit a variable number of vectors from the $K$ patterns, corresponding to the number of non-zero values in the $\Delta\phi(t)$ vector. But, when the recursive computation of $\delta_{\phi(t)}$ reaches $\delta_{\phi(T)}$, each state $j$ would have emitted the exact total number of multi-pattern feature vectors, $(T_1 + T_2 + \cdots + T_K)$, irrespective of which time path $\Phi$ it has taken.
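As an illustration of how $\Delta\phi(t)$ selects the observations used at a grid point (a hedged sketch; the function name is hypothetical, not from the specification):

```python
# Illustrative helper: given phi(t-1) and phi(t), keep only the feature
# vectors of dimensions that were incremented (the set {O_phi(t)} above).

def emitted_vectors(obs, prev, phi):
    """obs[i] is the i-th pattern's feature-vector sequence (1-indexed in
    the text, 0-indexed here); prev and phi are K-tuples whose coordinate
    differences are each 0 or 1."""
    delta = [phi[i] - prev[i] for i in range(len(phi))]
    return [obs[i][phi[i] - 1] for i, d in enumerate(delta) if d != 0]
```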
In other words, one interpretation of portions of equation (9) is that at each of the time-indexed points $\phi(t)$ in the $K$-dimensional grid, the state cost metric for each state $j$ in the HMM is determined. This is performed by looking at the cost metrics $\delta_{\phi(t-1)}(i)$ associated with each state $i$ at each possible predecessor point $\phi(t-1)$, as well as the $i$-to-$j$ state transition probabilities $a_{ij}$ and the transition probabilities of moving from each candidate predecessor point $\phi(t-1)$ to $\phi(t)$. More specifically, one may calculate the state cost metrics for a given point in the grid by calculating a cost metric associated with each possible prior state at each possible predecessor time-indexed point and then selecting the lowest cost metric for each state. The state cost metric calculation provides a most-likely predecessor state, and also provides a corresponding most-likely predecessor time-indexed point. Equation (9) describes a maximum probability; however, persons of ordinary skill in the art would understand that equation (9) would describe a cost metric by taking the negative of the logarithm of the argument of equation (9) and replacing the “max” function with a “min” function.

The recursion step populates each point, or at least a subset of points, in the $K$-dimensional grid with the $\delta_{\phi(t)}(j)$ values. This populating is done by traversing the grid from left to right and bottom to top. For every point populated, the cost metric is incremented from a predecessor point to the current point by selecting the least-cost option among the valid predecessor points; this choice of the “best predecessor” from among the various candidate predecessor points to each grid point may be stored in system memory for performing the Path Backtracking at a later time. Hence, the MPVA may not identify the optimum state sequence or optimum time path until the MPVA completes executing, reaches the end point of the grid, i.e., $(T_1, T_2, \ldots, T_K)$, and then performs the path backtracking step.

Figure 5 shows an example time path $\Phi$ that resulted from decoding speech using $K=2$ patterns of speech, $O^1_{1:T_1}$ (axis $t_1$) and $O^2_{1:T_2}$ (axis $t_2$), in a Multi-Pattern Viterbi Algorithm 500. In Figure 5, if coordinate (5,5) is considered to be $\phi(t)$, then $\phi(t-1)$ according to the path $\Phi$ is (5,4). There could be many such possible paths $\Phi$, and the MPVA may need to choose the optimum path $\Phi^*$ from them. At time instant (3,3), feature vectors $O^1_3$ and $O^2_3$ are emitted by a state $j$. At time instant (4,3), only vector $O^1_4$ is emitted, as vector $O^2_3$ has already been used. A variable number of vectors are emitted, and there is no reuse of vectors. In one respect, the state cost metrics are based only on observations associated with dimensions that are incremented when moving from a given predecessor time-indexed point $\phi(t-1)$ to a given time-indexed point $\phi(t)$. Figure 5 also shows the LCCs 580 that may be used when $K = 2$. That is, for every point $\phi(t) = (t_1, t_2)$ in a 2-dimensional grid, there are $2^K - 1$ predecessor points. Hence, when $\phi(t) = (t_1, t_2)$, then $\phi(t-1)$ could be $(t_1 - 1, t_2)$, $(t_1, t_2 - 1)$, or $(t_1 - 1, t_2 - 1)$. As mentioned previously, certain candidate predecessor points may be removed from consideration to reduce the complexity of the calculations. Similarly, other restrictions or simplifications may be utilized, including global constraints, discussed below.
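The $2^K - 1$ candidate predecessors of a grid point can be enumerated as in this illustrative sketch, which mirrors the $K = 2$ LCC example above (the function name is an assumption, not from the specification):

```python
from itertools import product

# Illustrative enumeration of the 2^K - 1 LCC predecessor points of phi(t):
# every way of decrementing a non-empty subset of the K coordinates by one.

def predecessors(phi):
    preds = []
    for delta in product((0, 1), repeat=len(phi)):
        if any(delta):  # exclude the all-zero move (no coordinate advanced)
            prev = tuple(p - d for p, d in zip(phi, delta))
            if all(c >= 1 for c in prev):  # stay inside the grid
                preds.append(prev)
    return preds

# Example for K = 2: predecessors((5, 5)) -> [(5, 4), (4, 5), (4, 4)]
```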
Further, the backtracking pointers for the state and time dimensions, denoted as $\Psi_{\phi(t)}(j)$ and $\Gamma_{\phi(t)}(j)$, may be found by the following:

$$[\Psi_{\phi(t)}(j), \Gamma_{\phi(t)}(j)] = \underset{\{i,\, \phi(t-1)\}}{\operatorname{argmax}} \left\{ \delta_{\phi(t-1)}(i)\; a_{ij}\; P(\phi(t) \mid \phi(t-1))\; b_j(\{O_{\phi(t)}\}) \right\} \qquad (10)$$

where $\phi(t)$ varies from $(1, \ldots, 1)$ to $(T_1, \ldots, T_K)$ and $i, j = 1, 2, \ldots, N$. In the calculation of $P(\phi(t) \mid \phi(t-1))$, $P(\phi(1)) = 1$ if $\phi(1) = (1, 1, \ldots, 1)$; otherwise, $P(\phi(1)) = 0$. There may be many possible ways of calculating $P(\phi(t) \mid \phi(t-1))$. The MPVA may have already defined the LCCs for reaching $\phi(t)$ from $\phi(t-1)$. Two possible methods are set forth below, and other ad hoc methods may be developed. In one embodiment, assuming a uniform probability distribution for moving from one time instant to another, the following probability may be found:

$$P(\phi(t) \mid \phi(t-1)) = \frac{1}{L} \qquad (11)$$

where $L$ is the total number of possible positions of $\phi(t-1)$ from which $\phi(t)$ can be reached. The range of possible values for $\phi(t-1)$ may be defined by the LCC. Generally $L = 2^K - 1$, unless $\phi(t)$ lies on the borders of the multi-dimensional grid. In another embodiment, the MPVA may give more weight to the diagonal region of the LCC, as in the following:

$$P(\phi(t) \mid \phi(t-1)) = \frac{\|\phi(t) - \phi(t-1)\|}{\sum_{\forall \phi(t-1)} \|\phi(t) - \phi(t-1)\|} \qquad (12)$$

where $\|\cdot\|$ stands for the second norm. This implies that transitions involving a higher number of incremental movements are favored, such as transitions associated with diagonal movements. An example of another embodiment using an ad hoc rule may include a directional bias that gives more weight to transitions that tend to move the path back towards the center diagonal (the main diagonal from $(1, 1, \ldots, 1)$ to $(T_1, \ldots, T_K)$ through the $K$-dimensional space). Still further, the directional bias weighting might be proportional to the distance of the given $\phi(t)$ from the main diagonal, or might only be applied for given $\phi(t)$’s beyond a certain distance from the main diagonal.

In another embodiment, the MPVA may calculate $b_j(\{O_{\phi(t)}\})$ as follows. Even though the feature vectors $O^1_{t_1}, O^2_{t_2}, \ldots, O^K_{t_K}$ may be from the same class, the MPVA can assume that they are independent given that they occur from the same state $j$, so as to compute the joint likelihood of the vectors being emitted from the HMM. Thus, the joint probability may be:

$$b_j(\{O_{\phi(t)}\}) = \left[ b_j(O^m_{t_m}) \cdots b_j(O^n_{t_n}) \right]^{1/r} \qquad (13)$$

where $O^m_{t_m}, \ldots, O^n_{t_n}$ are all the feature vectors in the set $S_{\phi(t)}$, $b_j(O^i_{t_i})$ is the state $j$ emission probability for the HMM (the probability of vector $O^i_{t_i}$ being emitted by state $j$ given the HMM), and $r$ is the cardinality of the set $S_{\phi(t)}$. A geometric mean using the power $1/r$ normalizes the use of $r$ vectors emitted by an HMM state, making it comparable to a single-vector likelihood. Such a normalization takes into account that not all feature vectors, or observations, at a time instant are emitted, due to noise, time warp, etc. Therefore, the MPVA can use $a_{ij}$’s and $\pi_i$’s that are defined as in a standard HMM. If $O^i_{t_i}$ is emitted from its actual state $j$ of the correct HMM $\lambda$, then the MPVA can expect $b_j(O^i_{t_i})$ to have a higher value than if $O^i_{t_i}$ were emitted from state $j$ of an incorrect model. The multi-pattern joint likelihood given in equation (13) may enhance the contrast between the likelihoods with respect to the correct model and an incorrect model. Therefore, there is an improvement in speech recognition accuracy when compared to individual decoding.
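A hedged sketch of the geometric-mean joint emission likelihood of Equation (13) follows, assuming the per-vector emission densities are available through a function `emission_prob` (a hypothetical placeholder, not part of the specification):

```python
# Illustrative computation of Eq. (13): joint emission likelihood of the r
# feature vectors mapped to a grid point, normalized as a geometric mean.

def joint_emission_likelihood(vectors, state, emission_prob):
    """vectors: the feature vectors in S_phi(t); emission_prob(state, v) is
    a hypothetical placeholder for b_j(v), the per-vector emission density."""
    r = len(vectors)
    likelihood = 1.0
    for v in vectors:
        likelihood *= emission_prob(state, v)
    return likelihood ** (1.0 / r)  # geometric mean over the r vectors
```

In practice, such products are often accumulated in the log domain to avoid numerical underflow, which also matches the negative-log cost-metric view of equation (9) noted above.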
The MPVA performs a Termination step 455 to stop the recursion. The recursion is terminated at a terminal point such as $\phi(t) = (T_1, T_2, \ldots, T_K)$:

$$P^* = \max_{1 \le i \le N} \delta_{\phi(T)}(i) \qquad (14)$$

$$q^*_{\phi(T)} = \underset{1 \le i \le N}{\operatorname{argmax}}\; \delta_{\phi(T)}(i) \qquad (15)$$

$$\phi(t)^*\big|_{t=T} = \phi(T) = (T_1, T_2, \ldots, T_K) \qquad (16)$$

Thereafter, the MPVA performs a Path Backtracking step 460:

$$\phi(t)^* = \Gamma_{\phi(t+1)^*}\!\left(q^*_{\phi(t+1)^*}\right) \qquad (17)$$

$$q^*_{\phi(t)^*} = \Psi_{\phi(t+1)^*}\!\left(q^*_{\phi(t+1)^*}\right) \qquad (18)$$

where $\phi(t)$ varies from $\phi(T) = (T_1, \ldots, T_K)$ back to $\phi(1) = (1, \ldots, 1)$. The value of $T$ for the optimum time path is not known until backtracking is complete. Thus, at step 465, the sequence of $q^*_{\phi(t)^*}$ and $\phi(t)^*$ for $1 \le t \le T$ gives the optimum decoded state sequence $q^*$ and the optimum decoded time path $\Phi^*$, respectively. The backtracking is performed via a simple look-up procedure. For the terminal time-indexed point $(T_1, \ldots, T_K)$, the final state is determined to be the one with the highest probability as determined by the cost function, such as the one set out in equation (9). It should be understood that the specific cost function may take many forms, by using variations or approximations of the quantities set forth in equation (9) and/or by limiting which sets of values are considered, such as by the local or global constraints described herein. From this determination, the predecessor state is known, as well as its corresponding predecessor time-indexed point. The processor then simply retrieves the data element for that prior state at that predecessor time-indexed point, looks up the next prior state and next prior predecessor time-indexed point, and so forth until the point $(1, 1, \ldots, 1)$ is reached.

Referring to Figure 5, an example of an optimum MPVA path 500 is shown for the case of $K = 2$ (of the word “Voice Dialer”) and a 9-state left-to-right HMM. The optimum time alignment $\Phi^*$ of the two patterns is shown. Figure 6 is a three-dimensional grid that shows the optimum state sequence and the optimum time path using $K=2$ patterns of speech in a Multi-Pattern Viterbi Algorithm. In particular, Figure 6 shows the optimum HMM state sequence $q^*$ 600 along the z axis, while the optimum time alignment $\Phi^*$ is shown on the x and y axes.

In addition, the MPVA may use Global Path Constraints (GPCs) to significantly reduce the computational complexity of implementing the MPVA. The global path constraints may be used because it is anticipated that the optimal path will lie close to the diagonal, and not all time-indexed points need to be populated with state cost metrics. The diagonal path will be traversed if the observations are used (or consumed, or processed) at a generally equal rate. The delayed use of observations from one sequence relative to another sequence results from a relative time warping of the observations of the repeated utterances and is implied by the different lengths of the observation sequences. The maximum divergence from the diagonal through the $K$-dimensional time grid may be predetermined based on the difference of the time durations of the observation sequences, but other metrics may also be used to determine the global constraints. However, recognition performance may be difficult to evaluate because global constraints are imposed.
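The band-around-the-diagonal idea behind the GPCs can be illustrated as follows. This is a hedged sketch: the specific width rule is an assumption loosely based on the duration-difference criterion described above, not the specification's formula.

```python
# Illustrative Global Path Constraint: retain only grid points lying within
# a band around the main diagonal from (1, ..., 1) to (T_1, ..., T_K). The
# band width below is one assumed choice based on the spread of the
# sequence lengths, per the duration-difference criterion.

def within_band(phi, lengths, slack=0):
    """phi: K-tuple grid point; lengths: (T_1, ..., T_K)."""
    width = max(lengths) - min(lengths) + slack  # assumed band half-width
    # Normalized progress of each coordinate along its own axis (0 to 1).
    progress = [(phi[i] - 1) / max(lengths[i] - 1, 1) for i in range(len(phi))]
    # Deviation from the diagonal: worst pairwise disagreement in progress,
    # rescaled to frames of the longest pattern.
    deviation = (max(progress) - min(progress)) * (max(lengths) - 1)
    return deviation <= width
```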
Because of the robustness property achieved through the joint decoding of equation (13), performance, and thus recognition accuracy, can be expected to increase with an increase in the number of patterns $K$. At a further step 470, a decoder, as part of a processor that implements the MPVA, may provide the optimum (maximum) joint probability and optimum state sequence to a system output interface. The speech recognition system may be provided with digital data representing the optimum path or state sequence such that it transforms the digital data into an analog electrical signal using a digital-to-analog converter. Further, the system output interface may be a speaker or some other acoustic transducer device that transforms an analog electrical signal into an audio signal. The system output interface may provide the audio signal of the decoded speech to a caller to confirm the accuracy of the caller’s speech utterances.

Figure 8 is a flowchart for an example method 800 of detecting a signal from multiple signal observations using a Multi-Pattern Viterbi Algorithm. A step 810 in the example method may be receiving a plurality of sets of time-sequential signal observations for each of a number K of signal repetitions. Further, each set of signal observations is associated with a respective dimension of a K-dimensional time grid having time-indexed points. A further step 820 in the method may be that at each of a plurality of the time-indexed points, a state cost metric is calculated with a processor for each state in a set of states of a hidden Markov model (HMM). Moreover, for each state and each given time-indexed point, the state cost metric calculation provides a most-likely predecessor state and a corresponding most-likely predecessor time-indexed point. In addition, for a given time-indexed point, the state cost metric for each state is determined by calculating a cost metric associated with each possible prior state at each possible predecessor time-indexed point and selecting the lowest cost metric for each state. Further, for a given possible predecessor state, the state cost metrics are based only on observations associated with dimensions that are incremented when moving from the given predecessor time-indexed point to the given one of the plurality of time-indexed points. An additional step in the method may be determining a sequence of states using the calculated state cost metrics 830 and determining a corresponding cumulative probability measure for the HMM 840. The method may repeatedly calculate the state cost metric for each state for the time-indexed points in a recursive fashion to determine a most likely sequence until it reaches a terminal point in the K-dimensional grid 850. The method may identify a most likely HMM based on the corresponding cumulative probability measures for the plurality of HMMs. Moreover, determining a most likely sequence of states includes identifying a lowest state cost metric at a final state at a terminal time-indexed point. The signal observations used in the method are signal feature vectors and can be selected from the group consisting of Mel-Frequency Cepstral Coefficients feature vectors, Linear Predictive Coding Coefficients, spectral density, spectral energy, noise ratios, length of sounds, relative power, and filter matches.
Each set of observations has a respective time duration, and the predetermined distance is based on differences in the respective time durations. Further, determining the sequence of states using the calculated state cost metrics may include backtracking through the time-indexed points based on the state cost metrics. Moreover, calculating a given state cost metric for a given state is based on: (i) state cost metrics for all states associated with all candidate predecessor time-indexed points; (ii) the probability of transitioning from each state of each candidate predecessor time-indexed point to the given state; (iii) the respective probability of transitioning from the respective candidate predecessor time-indexed point; and (iv) the joint probability of the observations being emitted from the state in the set of states. In addition, the determined sequence of states determines an alignment of the sets of observations. The plurality of time-indexed points is determined with respect to a predetermined distance from a diagonal line through the K-dimensional space.

To determine the performance of the MPVA compared to other methods and/or algorithms, experiments A1, A2, and A3, which include the basic Viterbi Algorithm (VA), were conducted for speaker-independent Isolated Word Recognition (IWR), the results of which are presented in the disclosure. Experiment A1 was conducted such that A1 uses $K$ patterns and the best (maximum) likelihood of the $K$ patterns is chosen. Given $O^1_{1:T_1}, O^2_{1:T_2}, \ldots, O^K_{1:T_K}$ as the individual patterns belonging to the same class, the joint likelihood score θj = max1

Documents

Application Documents

# Name Date
1 2870-CHE-2009 POWER OF ATTORNEY 15-03-2010.pdf 2010-03-15
2 2870-CHE-2009 FORM-1 15-03-2010.pdf 2010-03-15
3 2870-CHE-2009 FORM-3 01-04-2010.pdf 2010-04-01
4 2870-che-2009 form-3 15-09-2010.pdf 2010-09-15
5 Drawings.pdf 2011-09-04
6 Form-1.pdf 2011-09-04
7 Form-3.pdf 2011-09-04
8 Form-5.pdf 2011-09-04