
"Vector Quantizer, Vector Inverse Quantizer, And The Methods"

Abstract: A vector quantizer that improves the accuracy of vector quantization when the vector quantization codebook of the first stage is switched depending on the type of a feature correlated with the quantization target vector. In the vector quantizer, a classifier (101) generates classification information representing the type, among plural types, of the narrowband LSP vector correlated with the wideband LSP (Line Spectral Pairs) vector. A first codebook (103) selects, from plural sub-codebooks (CBa1 to CBan) corresponding to the respective types of narrowband LSP vectors, the one sub-codebook corresponding to the classification information as the codebook used for first-stage quantization. A multiplier (107) multiplies the first-stage quantization residual vector received from an adder (104) by a scaling factor corresponding to the classification information, out of plural scaling factors stored in a scaling factor determining section (106), and outputs the result to an adder (109) as the quantization target of the second stage.


Patent Information

Application #
Filing Date
08 April 2010
Publication Number
34/2010
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
mail@lexorbis.com
Parent Application

Applicants

PANASONIC CORPORATION
1006,OAZA KADOMA, KADOMA-SHI, OSAKA 571-8501 JAPAN.

Inventors

1. SATOH, KAORU
C/O PANASONIC CORPORATION, 1006,OAZA KADOMA, KADOMA-SHI, OSAKA, JAPAN 571-8501
2. MORII, TOSHIYUKI
C/O PANASONIC CORPORATION, 1006,OAZA KADOMA, KADOMA-SHI, OSAKA, JAPAN 571-8501
3. EHARA, HIROYUKI
C/O PANASONIC CORPORATION, 1006,OAZA KADOMA, KADOMA-SHI, OSAKA, JAPAN 571-8501

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970)
& THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
[See section 10, Rule 13]
VECTOR QUANTIZER, VECTOR INVERSE QUANTIZER, AND THE METHODS;
PANASONIC CORPORATION, A CORPORATION ORGANIZED AND EXISTING UNDER THE LAWS OF JAPAN, WHOSE ADDRESS IS 1006, OAZA KADOMA, KADOMA-SHI, OSAKA 571-8501, JAPAN.
THE FOLLOWING SPECIFICATION
PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

DESCRIPTION
Technical Field
The present invention relates to a vector quantization apparatus, vector dequantization apparatus and quantization and dequantization methods for performing vector quantization of LSP (Line Spectral Pairs) parameters. In particular, the present invention relates to a vector quantization apparatus, vector dequantization apparatus and quantization and dequantization methods for performing vector quantization of LSP parameters used in a speech coding and decoding apparatus that transmits speech signals in fields such as packet communication systems represented by Internet communication and mobile communication systems.
Background Art
In the fields of digital wireless communication, packet communication represented by Internet communication, and speech storage, speech signal coding and decoding techniques are essential for effective use of radio wave channel capacity and storage media. In particular, the CELP (Code Excited Linear Prediction) speech coding and decoding technique is a mainstream technique.
A CELP speech coding apparatus encodes input speech based on pre-stored speech models. To be more specific, the CELP speech coding apparatus separates a digital speech signal into frames of regular time intervals, for example, frames of approximately 10 to 20 ms, performs a linear prediction analysis of the speech signal on a per-frame basis, finds the linear prediction coefficients ("LPC's") and linear prediction residual vector, and encodes the linear prediction coefficients and linear prediction residual vector separately. As a method of encoding linear prediction coefficients, it is common to convert the linear prediction coefficients into LSP parameters and encode these LSP parameters. Also, as a method of encoding LSP parameters, vector quantization is often applied. Here, vector quantization is a method of selecting the code vector most similar to the quantization target vector from a codebook having a plurality of representative vectors (i.e. code vectors), and outputting the index (code) assigned to the
selected code vector as a quantization result. In vector quantization, the codebook size is determined based on the amount of information that is available. For example, when vector quantization is performed using an amount of information of 8 bits, a codebook can be formed using 256 (= 2^8) types of code vectors.
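As an illustrative sketch of this search (the function names and the NumPy formulation are ours, not part of the specification), an 8-bit codebook of 256 code vectors might be searched as follows:

```python
import numpy as np

def vq_encode(target, codebook):
    """Return the index (code) of the code vector nearest to the target in squared error."""
    errors = np.sum((codebook - target) ** 2, axis=1)  # squared error against every code vector
    return int(np.argmin(errors))

def vq_decode(code, codebook):
    """Dequantization: look the code vector back up from the same codebook."""
    return codebook[code]

# 8 bits of information -> a codebook of 256 (= 2^8) code vectors, here of dimension 16
rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 16))
target = rng.standard_normal(16)
code = vq_encode(target, codebook)          # the 8-bit index that would be transmitted
reconstruction = vq_decode(code, codebook)  # decoder-side lookup
```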
Also, to reduce the amount of information and the amount of calculations in vector quantization, various techniques such as multi-stage vector quantization (MSVQ) and split vector quantization (SVQ) are used (see Non-Patent Document 1). Here, multi-stage vector quantization is a method of performing vector quantization of a vector once and then further performing vector quantization of the quantization error, and split vector quantization is a method of quantizing a plurality of split vectors acquired by splitting a vector.
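The two techniques can be outlined as follows (an illustrative sketch only; the names and array layouts are assumed, and the sub-codebook dimensions must match the split):

```python
import numpy as np

def nearest(codebook, vector):
    """Index of the code vector minimizing the squared error with the vector."""
    return int(np.argmin(np.sum((codebook - vector) ** 2, axis=1)))

def msvq_encode(target, stage_codebooks):
    """Multi-stage VQ: quantize the vector, then re-quantize the remaining error at each stage."""
    codes, residual = [], target
    for cb in stage_codebooks:
        idx = nearest(cb, residual)
        codes.append(idx)
        residual = residual - cb[idx]      # quantization error handed to the next stage
    return codes

def svq_encode(target, sub_codebooks):
    """Split VQ: split the vector and quantize each split vector with its own codebook."""
    parts = np.array_split(target, len(sub_codebooks))
    return [nearest(cb, part) for cb, part in zip(sub_codebooks, parts)]
```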
Also, there is a technique of performing vector quantization suited to LSP features, and thereby further improving LSP coding performance, by adequately switching the codebooks used for vector quantization based on speech features that are correlated with the LSP's of the quantization target (e.g. information about the voiced characteristic, unvoiced characteristic and mode of speech). For example, in scalable coding, wideband LSP's are subjected to vector quantization by utilizing the correlation between wideband LSP's (LSP's found from wideband signals) and narrowband LSP's (LSP's found from narrowband signals), classifying the narrowband LSP's by their features, and switching the codebooks in the first stage of multi-stage vector quantization based on the types of features of the narrowband LSP's (hereinafter abbreviated to "types of narrowband LSP's") (see Patent Document 1).
Non-Patent Document 1: Allen Gersho, Robert M. Gray, translated by Yoshii and three others, "Vector Quantization and Information Compression," Corona Publishing Co., Ltd., 10 November 1998, pages 524 to 531.
Patent Document 1: International Publication No. 2006/030865 pamphlet.
Disclosure of Invention
Problems to be Solved by the Invention
In multi-stage vector quantization disclosed in Patent Document 1, vector quantization in the first stage is performed using codebooks associated with the types of narrowband LSP's, and therefore the
dispersion of quantization errors in vector quantization in the first stage varies between the types of narrowband LSP's. However, a single common codebook is used in a second or subsequent stage regardless of the types of narrowband LSP's, and therefore a problem arises that the accuracy of vector quantization in the second or subsequent stage is insufficient.
In view of the above points, it is therefore an object of the present invention to provide a vector quantization apparatus, vector dequantization apparatus and quantization and dequantization methods for improving the quantization accuracy in vector quantization in a second or subsequent stage, in multi-stage vector quantization in which the codebooks in the first stage are switched based on the types of features correlated with the quantization target vector.
Means for Solving the Problem
The vector quantization apparatus of the present invention employs a configuration having: a classifying section that generates classification information indicating a type of a feature correlated with a quantization target vector among a plurality of types; a selecting section that selects one first codebook associated with the classification information from a plurality of first codebooks associated with the plurality of types, respectively; a first quantization section that acquires a first code by quantizing the quantization target vector using a plurality of first code vectors forming the selected first codebook; a scaling factor codebook comprising scaling factors associated with the plurality of types, respectively; and a second quantization section that has a second codebook comprising a plurality of second code vectors and acquires a second code by quantizing a residual vector between one first code vector indicated by the first code and the quantization target vector, using the second code vectors and a scaling factor associated with the classification information.
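As a minimal sketch of this configuration (illustrative only; it assumes the variant in which the residual vector is multiplied by the reciprocal of the scaling factor, and all names are ours):

```python
import numpy as np

def nearest(codebook, vector):
    return int(np.argmin(np.sum((codebook - vector) ** 2, axis=1)))

def encode(target, correlated_feature, class_codebook,
           first_codebooks, second_codebook, scaling_factors):
    """Two-stage quantization with a class-dependent first codebook and scaling factor."""
    # classifying section: type of the feature correlated with the quantization target
    m = nearest(class_codebook, correlated_feature)
    # selecting section + first quantization section
    d1 = nearest(first_codebooks[m], target)
    residual = target - first_codebooks[m][d1]
    # second quantization section: residual scaled by the class-dependent factor
    d2 = nearest(second_codebook, residual / scaling_factors[m])
    return d1, d2   # the classification m is re-derived at the dequantization side
```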
The vector dequantization apparatus of the present invention employs a configuration having: a classifying section that generates classification information indicating a type of a feature correlated with a quantization target vector among a plurality of types; a demultiplexing section that demultiplexes a first code that is a quantization result of the quantization target vector in a first stage and a second code that is a quantization result of the quantization target vector in a second stage, from received encoded data; a selecting section that selects one first codebook associated with the classification information from a plurality of first codebooks associated with the plurality of types, respectively; a first dequantization section that selects one first code vector associated with the first code from the selected first codebook; a scaling factor codebook comprising scaling factors associated with the plurality of types, respectively; and a second dequantization section that selects one second code vector associated with the second code from a second codebook comprising a plurality of second code vectors, and acquires the quantization target vector using the one second code vector, a scaling factor associated with the classification information and the one first code vector.
The vector quantization method of the present invention includes the steps of: generating classification information indicating a type of a feature correlated with a quantization target vector among a plurality of types; selecting one first codebook associated with the classification information from a plurality of first codebooks associated with the plurality of types, respectively; acquiring a first code by quantizing the quantization target vector using a plurality of first code vectors forming the selected first codebook; and acquiring a second code by quantizing a residual vector between a first code vector associated with the first code and the quantization target vector, using a plurality of second code vectors forming a second codebook and a scaling factor associated with the classification information.
The vector dequantization method of the present invention includes the steps of: generating classification information indicating a type of a feature correlated with a quantization target vector among a plurality of types; demultiplexing a first code that is a quantization result of the quantization target vector in a first stage and a second code that is a quantization result of the quantization target vector in a second stage, from received encoded data; selecting one first codebook associated with the classification information from a plurality of first codebooks associated with the plurality of types, respectively; selecting one first code vector associated with the first code from the selected first codebook; and selecting one second code vector associated with the second code from a second codebook comprising a plurality of second
code vectors, and generating the quantization target vector using the one second code vector, a scaling factor associated with the classification information and the one first code vector.
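A counterpart sketch of the dequantization method, under the same assumptions as above (the classification is regenerated locally, so only the first and second codes are taken from the encoded data):

```python
import numpy as np

def nearest(codebook, vector):
    return int(np.argmin(np.sum((codebook - vector) ** 2, axis=1)))

def decode(d1, d2, correlated_feature, class_codebook,
           first_codebooks, second_codebook, scaling_factors):
    """Rebuild the quantized vector from the first and second codes."""
    m = nearest(class_codebook, correlated_feature)
    # quantized vector = first code vector + scaling factor * second code vector
    return first_codebooks[m][d1] + scaling_factors[m] * second_codebook[d2]
```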
Advantageous Effect of the Invention
According to the present invention, in multi-stage vector quantization in which the codebooks in the first stage are switched based on the types of features correlated with the quantization target vector, by performing vector quantization in a second or subsequent stage using scaling factors associated with those types, it is possible to improve the quantization accuracy in vector quantization in the second or subsequent stage.
Brief Description of Drawings
FIG.1 is a block diagram showing main components of an LSP vector quantization apparatus according to Embodiment 1;
FIG.2 is a block diagram showing main components of an LSP vector dequantization apparatus according to Embodiment 1;
FIG.3 is a block diagram showing main components of an LSP vector quantization apparatus according to Embodiment 2;
FIG.4 is a block diagram showing main components of an LSP vector quantization apparatus according to Embodiment 3; and
FIG.5 is a block diagram showing main components of an LSP vector dequantization apparatus according to Embodiment 3.
Best Mode for Carrying Out the Invention
Embodiments of the present invention will be explained below in detail with reference to the accompanying drawings. Here, example cases will be explained using an LSP vector quantization apparatus, LSP vector dequantization apparatus and quantization and dequantization methods as the vector quantization apparatus, vector dequantization apparatus and quantization and dequantization methods according to the present invention.
Also, example cases will be explained with embodiments of the present invention, where wideband LSP's are used as the vector quantization target in a wideband LSP quantizer for scalable coding, and
the codebooks used for quantization in the first stage are switched using the types of narrowband LSP's correlated with the vector quantization target. Also, it is equally possible to switch the codebooks used for quantization in the first stage using quantized narrowband LSP's (narrowband LSP's quantized in advance by a narrowband LSP quantizer (not shown)), instead of narrowband LSP's. Also, it is equally possible to convert quantized narrowband LSP's into a wideband format and switch the codebooks used for quantization in the first stage using the converted quantized narrowband LSP's.
(Embodiment 1)
FIG.1 is a block diagram showing main components of LSP vector quantization apparatus 100 according to Embodiment 1 of the present invention. Here, an example case will be explained where an input LSP vector is quantized by multi-stage vector quantization comprising three stages in LSP vector quantization apparatus 100.
In FIG.1, LSP vector quantization apparatus 100 is provided with classifier 101, switch 102, first codebook 103, adder 104, error minimization section 105, scaling factor determining section 106, multiplier 107, second codebook 108, adder 109, third codebook 110 and adder 111.
Classifier 101 stores in advance a classification codebook formed with a plurality of items of classification information indicating a plurality of types of narrowband LSP vectors, selects from the classification codebook classification information indicating the type of the narrowband LSP vector correlated with the wideband LSP vector of the vector quantization target, and outputs the classification information to switch 102 and scaling factor determining section 106. To be more specific, classifier 101 has a built-in classification codebook formed with code vectors associated with the various types of narrowband LSP vectors, and finds the code vector that minimizes the square error with an input narrowband LSP vector by searching the classification codebook. Further, classifier 101 uses the index of the code vector found by the search as classification information indicating the type of the narrowband LSP vector.
From first codebook 103, switch 102 selects one sub-codebook associated with the classification information received as input from classifier 101, and connects the output terminal of the sub-codebook to adder 104.

First codebook 103 stores in advance sub-codebooks (CBal to CBan) associated with the types of narrowband LSP's. That is, for example, when the number of types of narrowband LSP's is n, the number of sub-codebooks forming first codebook 103 is equally n. From a plurality of first code vectors forming the first codebook, first codebook 103 outputs first code vectors designated by designation from error minimization section 105, to switch 102.
Adder 104 calculates the differences between the wideband LSP vector received as an input vector quantization target and the first code vectors received as input from switch 102, and outputs these differences to error minimization section 105 as first residual vectors. Further, out of the first residual vectors associated with all first code vectors, adder 104 outputs to multiplier 107 the one minimum first residual vector identified by the search in error minimization section 105.
Error minimization section 105 uses the results of squaring the first residual vectors received as input from adder 104 as the square errors between the wideband LSP vector and the first code vectors, and finds the first code vector that minimizes the square error by searching the first codebook. Similarly, error minimization section 105 uses the results of squaring the second residual vectors received as input from adder 109 as the square errors between the first residual vector and the second code vectors, and finds the second code vector that minimizes the square error by searching the second codebook. Similarly, error minimization section 105 uses the results of squaring the third residual vectors received as input from adder 111 as the square errors between the second residual vector and the third code vectors, and finds the third code vector that minimizes the square error by searching the third codebook. Further, error minimization section 105 collectively encodes the indices assigned to the three code vectors acquired by the searches, and outputs the result as encoded data.
Scaling factor determining section 106 stores in advance a scaling factor codebook formed with scaling factors associated with the types of narrowband LSP vectors. Further, from the scaling factor codebook, scaling factor determining section 106 selects a scaling factor associated with classification information received as input from classifier 101, and outputs the reciprocal of the selected scaling factor to multiplier 107. Here, a scaling factor may be a scalar or vector.
Multiplier 107 multiplies the first residual vector received as input from adder 104 by the reciprocal of the scaling factor received as input from scaling factor determining section 106, and outputs the result to adder 109.
Second codebook (CBb) 108 is formed with a plurality of second code vectors, and outputs second code vectors designated by designation from error minimization section 105 to adder 109.
Adder 109 calculates the differences between the first residual vector, which is received as input from multiplier 107 and has been multiplied by the reciprocal of the scaling factor, and the second code vectors received as input from second codebook 108, and outputs these differences to error minimization section 105 as second residual vectors. Further, out of the second residual vectors associated with all second code vectors, adder 109 outputs to adder 111 the one minimum second residual vector identified by the search in error minimization section 105.
Third codebook 110 (CBc) is formed with a plurality of third code vectors, and outputs third code vectors designated by designation from error minimization section 105 to adder 111.
Adder 111 calculates the differences between the second residual vector received as input from adder 109 and the third code vectors received as input from third codebook 110, and outputs these differences to error minimization section 105 as third residual vectors.
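The way these components might work together can be sketched as follows (an illustrative, greedy per-stage search; the variable names are ours and the scaling factor is assumed to be a per-dimension vector):

```python
import numpy as np

def nearest(codebook, vector):
    return int(np.argmin(np.sum((codebook - vector) ** 2, axis=1)))

def lsp_vq_encode(wideband_lsp, narrowband_lsp, classification_codebook,
                  first_sub_codebooks, second_codebook, third_codebook, scale_codebook):
    """Three-stage search mirroring classifier 101 through adder 111."""
    m = nearest(classification_codebook, narrowband_lsp)   # classifier 101
    cb1 = first_sub_codebooks[m]                           # switch 102 selects the sub-codebook

    d1 = nearest(cb1, wideband_lsp)                        # first-stage search (error minimization 105)
    err1 = wideband_lsp - cb1[d1]                          # adder 104: first residual vector

    sca_err1 = err1 / scale_codebook[m]                    # multiplier 107: reciprocal of the scaling factor
    d2 = nearest(second_codebook, sca_err1)                # second-stage search
    err2 = sca_err1 - second_codebook[d2]                  # adder 109: second residual vector

    d3 = nearest(third_codebook, err2)                     # third-stage search via adder 111
    return d1, d2, d3                                      # indices collectively encoded by section 105
```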
Next, the operations performed by LSP vector quantization apparatus 100 will be explained, using an example case where the order of the wideband LSP vectors of the quantization targets is R. Also, in the following explanation, wideband LSP vectors will be expressed by "LSP(i) (i=0, 1, ..., R-1)."
Classifier 101 has a built-in classification codebook formed with n code vectors associated with n types of narrowband LSP vectors, and, by searching the classification codebook, finds the m-th code vector that minimizes the square error with an input narrowband LSP vector. Further, classifier 101 outputs m as classification information.
Among second code vectors CODE_2^(d2)(i) (d2=0, 1, ..., D2-1, i=0, 1, ..., R-1) forming the second codebook, second codebook 206 outputs to multiplier 207 second code vector CODE_2^(d2_min)(i) (i=0, 1, ..., R-1) designated by designation d2_min from code demultiplexing section 202.
Multiplier 207 multiplies second code vector CODE_2^(d2_min)(i) (i=0, 1, ..., R-1) received as input from second codebook 206 by scaling factor Scale^(m)(i) (i=0, 1, ..., R-1) received as input from scaling factor determining section 205 according to the following equation 9, and outputs the result to adder 208.
[9]
Sca_CODE_2^(d2_min)(i) = CODE_2^(d2_min)(i) × Scale^(m)(i)   (i = 0, 1, ..., R-1)   ... (Equation 9)
According to the following equation 10, adder 208 adds first code vector CODE_1^(d1_min)(i) (i=0, 1, ..., R-1) received as input from first codebook 204 and second code vector Sca_CODE_2^(d2_min)(i) (i=0, 1, ..., R-1), multiplied by the scaling factor and received as input from multiplier 207, and outputs the vector TMP(i) (i=0, 1, ..., R-1) of the addition result to adder 211.
[10]
TMP(i) = CODE_1^(d1_min)(i) + Sca_CODE_2^(d2_min)(i)   (i = 0, 1, ..., R-1)   ... (Equation 10)
Among third code vectors CODE_3^(d3)(i) (d3=0, 1, ..., D3-1, i=0, 1, ..., R-1) forming the third codebook, third codebook 209 outputs third code vector CODE_3^(d3_min)(i) (i=0, 1, ..., R-1) designated by designation d3_min from code demultiplexing section 202, to multiplier 210.
According to the following equation 11, multiplier 210 multiplies third code vector CODE_3^(d3_min)(i) (i=0, 1, ..., R-1) received as input from third codebook 209 by scaling factor Scale^(m)(i) (i=0, 1, ..., R-1) received as input from scaling factor determining section 205, and outputs the result to adder 211.
[11]
Sca_CODE_3^(d3_min)(i) = CODE_3^(d3_min)(i) × Scale^(m)(i)   (i = 0, 1, ..., R-1)   ... (Equation 11)
According to the following equation 12, adder 211 adds vector TMP(i) (i=0, 1, ..., R-1) received as input from adder 208 and third code vector Sca_CODE_3^(d3_min)(i) (i=0, 1, ..., R-1), multiplied by the scaling factor and received as input from multiplier 210, and outputs the vector Q_LSP(i) (i=0, 1, ..., R-1) of the addition result as a quantized wideband LSP vector.
[12]
Q_LSP(i) = TMP(i) + Sca_CODE_3^(d3_min)(i)   (i = 0, 1, ..., R-1)   ... (Equation 12)
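The corresponding reconstruction on the dequantization side, following equations 9, 10 and 12 above, can be sketched as follows (illustrative only; names are ours):

```python
import numpy as np

def nearest(codebook, vector):
    return int(np.argmin(np.sum((codebook - vector) ** 2, axis=1)))

def lsp_vq_decode(d1, d2, d3, narrowband_lsp, classification_codebook,
                  first_sub_codebooks, second_codebook, third_codebook, scale_codebook):
    """Quantized wideband LSP vector rebuilt from the three received codes."""
    m = nearest(classification_codebook, narrowband_lsp)   # classifier 201
    scale = scale_codebook[m]                              # scaling factor determining section 205

    sca_code2 = scale * second_codebook[d2]                # equation 9
    tmp = first_sub_codebooks[m][d1] + sca_code2           # equation 10
    return tmp + scale * third_codebook[d3]                # equations 11 and 12
```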
According to the following equation 17, adder 311 calculates the differences between second residual vector Err_2^(d2_min)(i) (i=0, 1, ..., R-1) received as input from adder 309 and the third code vectors multiplied by the scaling factor Sca_CODE_3(i) (i=0, 1, ..., R-1).
Further, among second residual vectors Err_2^(d2')(i) (i=0, 1, ..., R-1) associated with the values of d2' from d2'=0 to d2'=D2-1, adder 409 outputs the minimum second residual vector Err_2^(d2_min)(i) (i=0, 1, ..., R-1) identified by the search in error minimization section 105, to multiplier 411.
[20]
Err_2^(d2')(i) = Sca_Err_1^(d1_min)(i) − CODE_2^(d2')(i)   (i = 0, 1, ..., R-1)   ... (Equation 20)
According to the following equation 21, multiplier 411 multiplies second residual vector Err_2^(d2_min)(i) (i=0, 1, ..., R-1) received as input from adder 409 by the reciprocal of second scaling factor Rec_Scale_2^(m)(i) (i=0, 1, ..., R-1) received as input from scaling factor determining section 406, and outputs the result to adder 412.
[21]
Sca_Err_2^(d2_min)(i) = Err_2^(d2_min)(i) × Rec_Scale_2^(m)(i)   (i = 0, 1, ..., R-1)   ... (Equation 21)
According to the following equation 22, adder 412 calculates the differences between second residual vector Sca_Err_2^(d2_min)(i) (i=0, 1, ..., R-1), multiplied by the reciprocal of the second scaling factor and received as input from multiplier 411, and third code vectors CODE_3^(d3)(i) (i=0, 1, ..., R-1) received as input from third codebook 110, and outputs these differences to error minimization section 105 as third residual vectors Err_3^(d3)(i) (i=0, 1, ..., R-1).
[22]
Err_3^(d3)(i) = Sca_Err_2^(d2_min)(i) − CODE_3^(d3)(i)   (i = 0, 1, ..., R-1)   ... (Equation 22)
Thus, according to the present embodiment, in multi-stage vector quantization in which the codebooks for vector quantization in the first stage are switched based on the types of narrowband LSP vectors correlated with wideband LSP vectors, and in which the statistical dispersion of vector quantization errors (i.e. first residual vectors) in the first stage therefore varies between the types, the code vectors of the second codebook and the third codebook used for vector quantization in the second and third stages are multiplied by scaling factors associated with the classification result of the narrowband LSP vector. By this means, it is possible to change the dispersion of the vectors of the vector quantization targets in the second and third stages according to the statistical dispersion of vector quantization errors in the first stage, and therefore to improve the accuracy of quantization of wideband LSP vectors. Here, by preparing the scaling factor used in the second stage and the scaling factor used in the third stage separately, more detailed adaptation is possible.
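The separate second-stage and third-stage scaling of this embodiment on the quantization side (equations 20 to 22) can be sketched as follows (illustrative only; the classification index m is assumed to be available, and all names are ours):

```python
import numpy as np

def nearest(codebook, vector):
    return int(np.argmin(np.sum((codebook - vector) ** 2, axis=1)))

def encode_two_scales(wideband_lsp, m, first_sub_codebooks,
                      second_codebook, third_codebook, scale_1, scale_2):
    """First residual scaled by 1/Scale_1, second residual scaled by 1/Scale_2."""
    d1 = nearest(first_sub_codebooks[m], wideband_lsp)
    sca_err1 = (wideband_lsp - first_sub_codebooks[m][d1]) / scale_1[m]

    d2 = nearest(second_codebook, sca_err1)
    err2 = sca_err1 - second_codebook[d2]        # equation 20
    sca_err2 = err2 / scale_2[m]                 # equation 21 (reciprocal of the second scaling factor)

    d3 = nearest(third_codebook, sca_err2)       # search over the third residuals of equation 22
    return d1, d2, d3
```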
FIG.5 is a block diagram showing main components of LSP vector dequantization apparatus 500 according to the present embodiment. LSP vector dequantization apparatus 500 decodes encoded data outputted from LSP vector quantization apparatus 400 and generates quantized LSP vectors. Also, LSP vector dequantization apparatus 500 has the same basic configuration as in LSP vector dequantization apparatus 200 (see FIG.2) shown in Embodiment 1, and the same components will be assigned the same reference numerals and their explanations will be omitted.
LSP vector dequantization apparatus 500 is provided with classifier 201, code demultiplexing section 202, switch 203, first codebook 204, scaling factor determining section 505, second codebook (CBb) 206, multiplier 507, adder 208, third codebook (CBc) 209, multiplier 510 and adder 211. Here, first codebook 204 provides sub-codebooks having the same contents as the sub-codebooks (CBa1 to CBan) of first codebook 103, and scaling factor determining section 505 provides a scaling factor codebook having the same contents as the scaling factor codebook of scaling factor determining section 406. Also, second codebook 206 provides a codebook having the same contents as the codebook of second codebook 108, and third codebook 209 provides a codebook having the same contents as the codebook of third codebook 110.
From the scaling factor codebook, scaling factor determining section 505 selects first scaling factor Scale_1^(m)(i) (i=0, 1, ..., R-1) and second scaling factor Scale_2^(m)(i) (i=0, 1, ..., R-1) associated with classification information m received as input from classifier 201, outputs first scaling factor Scale_1^(m)(i) (i=0, 1, ..., R-1) to multiplier 507 and multiplier 510, and outputs second scaling factor Scale_2^(m)(i) (i=0, 1, ..., R-1) to multiplier 510.
According to the following equation 23, multiplier 507 multiplies second code vector CODE_2^(d2_min)(i) (i=0, 1, ..., R-1) received as input from second codebook 206 by first scaling factor Scale_1^(m)(i) (i=0, 1, ..., R-1) received as input from scaling factor determining section 505, and outputs the result to adder 208.
[23]
Sca_CODE_2^(d2_min)(i) = CODE_2^(d2_min)(i) × Scale_1^(m)(i)   (i = 0, 1, ..., R-1)   ... (Equation 23)
According to the following equation 24, multiplier 510 multiplies third code vector CODE_3^(d3_min)(i) (i=0, 1, ..., R-1) received as input from third codebook 209 by first scaling factor Scale_1^(m)(i) (i=0, 1, ..., R-1) and second scaling factor Scale_2^(m)(i) (i=0, 1, ..., R-1) received as input from scaling factor determining section 505, and outputs the result to adder 211.
[24]
Sca_CODE_3^(d3_min)(i) = CODE_3^(d3_min)(i) × Scale_1^(m)(i) × Scale_2^(m)(i)   (i = 0, 1, ..., R-1)   ... (Equation 24)
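The matching reconstruction in LSP vector dequantization apparatus 500, following equations 23 and 24, can be sketched as follows (illustrative only; names are ours):

```python
import numpy as np

def decode_two_scales(d1, d2, d3, m, first_sub_codebooks,
                      second_codebook, third_codebook, scale_1, scale_2):
    """Quantized wideband LSP vector using two class-dependent scaling factors."""
    sca_code2 = second_codebook[d2] * scale_1[m]                 # equation 23
    sca_code3 = third_codebook[d3] * scale_1[m] * scale_2[m]     # equation 24
    return first_sub_codebooks[m][d1] + sca_code2 + sca_code3    # adders 208 and 211
```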
Thus, according to the present embodiment, an LSP vector dequantization apparatus receives as input, and performs vector dequantization of, encoded data of wideband LSP vectors generated by the quantizing method with improved quantization accuracy, so that it is possible to generate accurate quantized wideband LSP vectors. Also, by using such a vector dequantization apparatus in a speech decoding apparatus, it is possible to decode speech using accurate quantized wideband LSP vectors, so that it is possible to acquire decoded speech of high quality.
Also, although a case has been described above where LSP vector dequantization apparatus 500 decodes encoded data outputted from LSP vector quantization apparatus 400, the present invention is not limited to this, and it is needless to say that LSP vector dequantization apparatus 500 can receive and decode encoded data as long as the encoded data is in a form that can be decoded by LSP vector dequantization apparatus 500.
Embodiments of the present invention have been described
above.
Also, the vector quantization apparatus, the vector dequantization apparatus and the vector quantization and dequantization methods according to the present invention are not limited to the above embodiments, and can be implemented with various changes.
For example, although the vector quantization apparatus, the vector dequantization apparatus and the vector quantization and dequantization methods have been described above with embodiments targeting speech signals, these apparatuses and methods are equally
applicable to audio signals and so on.
Also, LSP can be referred to as "LSF (Line Spectral Frequency)," and it is possible to read LSP as LSF. Also, when ISP (Immittance Spectrum Pairs) is quantized as spectrum parameters instead of LSP, it is possible to read LSP as ISP and utilize an ISP quantization/dequantization apparatus in the present embodiments. Also, when ISF (Immittance Spectrum Frequency) is quantized as spectrum parameters instead of LSP, it is possible to read LSP as ISF and utilize an ISF quantization/dequantization apparatus in the present embodiments.
Also, the vector quantization apparatus, the vector dequantization apparatus and the vector quantization and dequantization methods according to the present invention can be used in a CELP coding apparatus and CELP decoding apparatus that encode and decode speech signals, audio signals, and so on. For example, in a case where the LSP vector quantization apparatus according to the present invention is applied to a CELP speech coding apparatus, in the CELP coding apparatus, LSP vector quantization apparatus 100 according to the present invention is provided in an LSP quantization section that: receives as input and performs quantization processing of LSP converted from linear prediction coefficients acquired by performing a linear prediction analysis of an input signal; outputs the quantized LSP to a synthesis filter; and outputs a quantized LSP code indicating the quantized LSP as encoded data. By this means, it is possible to improve the accuracy of vector quantization, so that it is equally possible to improve speech quality upon decoding. Similarly, in a case where the LSP vector dequantization apparatus according to the present invention is applied to a CELP speech decoding apparatus, in the CELP decoding apparatus, by providing LSP vector dequantization apparatus 200 according to the present invention in an LSP dequantization section that decodes quantized LSP from a quantized LSP code acquired by demultiplexing received multiplexed encoded data and outputs the decoded quantized LSP to a synthesis filter, it is possible to provide the same effect as above.
The vector quantization apparatus and the vector dequantization apparatus according to the present invention can be mounted on a communication terminal apparatus in a mobile communication system that transmits speech, audio and such, so that it is possible to provide a communication terminal apparatus having the same operational effect as
above.
Although a case has been described with the above embodiments as an example where the present invention is implemented with hardware, the present invention can be implemented with software. For example, by describing the vector quantization method and vector dequantization method according to the present invention in a programming language, storing this program in a memory and making the information processing section execute this program, it is possible to implement the same function as in the vector quantization apparatus and vector dequantization apparatus according to the present invention.
Furthermore, each function block employed in the description of each of the aforementioned embodiments may typically be implemented as an LSI constituted by an integrated circuit. These may be individual chips or partially or totally contained on a single chip.
"LSI" is adopted here but this may also be referred to as "IC," "system LSI," "super LSI," or "ultra LSI" depending on differing extents of integration.
Further, the method of circuit integration is not limited to LSI's, and implementation using dedicated circuitry or general purpose processors is also possible. After LSI manufacture, utilization of an FPGA (Field Programmable Gate Array) or a reconfigurable processor where connections and settings of circuit cells in an LSI can be reconfigured is also possible.
Further, if integrated circuit technology comes out to replace LSI's as a result of the advancement of semiconductor technology or another derivative technology, it is naturally also possible to carry out function block integration using this technology. Application of biotechnology is also possible.
The disclosures of Japanese Patent Application No.2007-266922, filed on October 12, 2007, and Japanese Patent Application No.2007-285602, filed on November 1, 2007, including the specifications, drawings and abstracts, are included herein by reference in their entireties.
Industrial Applicability
The vector quantization apparatus, vector dequantization apparatus and vector quantization and dequantization methods according
to the present invention are applicable to such uses as speech coding and speech decoding.

We Claim :
1. A vector quantization apparatus comprising:
a classifying section that generates classification information indicating a type of a feature correlated with a quantization target vector among a plurality of types;
a selecting section that selects one first codebook associated with the classification information from a plurality of first codebooks associated with the plurality of types, respectively;
a first quantization section that acquires a first code by quantizing the quantization target vector using a plurality of first code vectors forming the selected first codebook;
a scaling factor codebook comprising scaling factors associated with the plurality of types, respectively; and
a second quantization section that has a second codebook comprising a plurality of second code vectors and acquires a second code by quantizing a residual vector between one first code vector indicated by the first code and the quantization target vector, using the second code vectors and a scaling factor associated with the classification information.
2. The vector quantization apparatus according to claim 1, further
comprising a multiplying section that acquires a multiplication vector by
multiplying the residual vector by a reciprocal of the scaling factor
associated with the classification information,
wherein the second quantization section quantizes the multiplication vector using the plurality of second code vectors.
3. The vector quantization apparatus according to claim 1, further
comprising a multiplying section that acquires a plurality of
multiplication vectors by multiplying each of the plurality of second code
vectors by the scaling factor associated with the classification
information,
wherein the second quantization section quantizes the residual vector using the plurality of multiplication vectors.
4. The vector quantization apparatus according to claim 1, further
comprising a third quantization section that has a third codebook
comprising a plurality of third code vectors and acquires a third code by quantizing a second residual vector between one second code vector indicated by the second code and the residual vector, using the third code vectors and the scaling factor associated with the classification information.
5. The vector quantization apparatus according to claim 4, further
comprising a second multiplication section that acquires a second
multiplication vector by multiplying the second residual vector by a
reciprocal of the scaling factor associated with the classification
information,
wherein the third quantization section quantizes the second multiplication vector using the plurality of third code vectors.
6. The vector quantization apparatus according to claim 4, further
comprising a second multiplication section that acquires a plurality of
second multiplication vectors by multiplying each of the plurality of third
code vectors by the scaling factor associated with the classification
information,
wherein the third quantization section quantizes the second residual vector using the plurality of second multiplication vectors.
7. A vector dequantization apparatus comprising:
a classifying section that generates classification information indicating a type of a feature correlated with a quantization target vector among a plurality of types;
a demultiplexing section that demultiplexes a first code that is a quantization result of the quantization target vector in a first stage and a second code that is a quantization result of the quantization target vector in a second stage, from received encoded data;
a selecting section that selects one first codebook associated with the classification information from a plurality of first codebooks associated with the plurality of types, respectively;
a first dequantization section that selects one first code vector associated with the first code from the selected first codebook;
a scaling factor codebook comprising scaling factors associated with the plurality of types, respectively; and
a second dequantization section that selects one second code vector associated with the second code from a second codebook comprising a plurality of second code vectors, and acquires the quantization target vector using the one second code vector, a scaling factor associated with the classification information and the one first code vector.
8. A vector quantization method comprising the steps of:
generating classification information indicating a type of a
feature correlated with a quantization target vector among a plurality of types;
selecting one first codebook associated with the classification information from a plurality of first codebooks associated with the plurality of types, respectively;
acquiring a first code by quantizing the quantization target vector using a plurality of first code vectors forming the selected first codebook; and
acquiring a second code by quantizing a residual vector between a first code vector associated with the first code and the quantization target vector, using a plurality of second code vectors forming a second codebook and a scaling factor associated with the classification information.
9. A vector dequantization method comprising the steps of:
generating classification information indicating a type of a
feature correlated with a quantization target vector among a plurality of types;
demultiplexing a first code that is a quantization result of the quantization target vector in a first stage and a second code that is a quantization result of the quantization target vector in a second stage, from received encoded data;
selecting one first codebook associated with the classification information from a plurality of first codebooks associated with the plurality of types, respectively;
selecting one first code vector associated with the first code from the selected first codebook; and
selecting one second code vector associated with the second code
from a second codebook comprising a plurality of second code vectors, and generating the quantization target vector using the one second code vector, a scaling factor associated with the classification information and the one first code vector.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 693-MUMNP-2010-HearingNoticeLetter.pdf 2019-02-06
1 Other Patent Document [05-10-2016(online)].pdf 2016-10-05
2 693-mumnp-2010-abstract.doc 2018-08-10
2 Power of Attorney [18-05-2017(online)].pdf 2017-05-18
3 Other Document [18-05-2017(online)].pdf 2017-05-18
3 693-mumnp-2010-abstract.pdf 2018-08-10
4 Form 6 [18-05-2017(online)].pdf 2017-05-18
4 693-mumnp-2010-claims.doc 2018-08-10
5 Form 13 [18-05-2017(online)].pdf 2017-05-18
5 693-mumnp-2010-claims.pdf 2018-08-10
6 Assignment [18-05-2017(online)].pdf 2017-05-18
6 693-MUMNP-2010-CORRESPONDENCE(15-9-2011).pdf 2018-08-10
7 693-MUMNP-2010-ORIGINAL UNDER RULE 6 (1A)-23-05-2017.pdf 2017-05-23
7 693-MUMNP-2010-CORRESPONDENCE(18-5-2010).pdf 2018-08-10
8 Form 3 [21-06-2017(online)].pdf 2017-06-21
8 693-MUMNP-2010-CORRESPONDENCE(7-10-2010).pdf 2018-08-10
9 693-MUMNP-2010-Correspondence-260416.pdf 2018-08-10
9 693-MUMNP-2010-OTHERS [09-08-2017(online)].pdf 2017-08-09
10 693-mumnp-2010-correspondence.pdf 2018-08-10
10 693-MUMNP-2010-FER_SER_REPLY [09-08-2017(online)].pdf 2017-08-09
11 693-MUMNP-2010-COMPLETE SPECIFICATION [09-08-2017(online)].pdf 2017-08-09
11 693-mumnp-2010-description(complete).pdf 2018-08-10
12 693-MUMNP-2010-CLAIMS [09-08-2017(online)].pdf 2017-08-09
12 693-mumnp-2010-drawing.pdf 2018-08-10
13 693-MUMNP-2010-ABSTRACT [09-08-2017(online)].pdf 2017-08-09
13 693-MUMNP-2010-ENGLISH TRANSLATION(18-5-2010).pdf 2018-08-10
14 693-MUMNP-2010-FER.pdf 2018-08-10
14 693-MUMNP-2010-FORM 3 [18-12-2017(online)].pdf 2017-12-18
15 693-MUMNP-2010-FORM 1(18-5-2010).pdf 2018-08-10
15 POA,FORM-1,2.pdf 2018-08-10
16 693-mumnp-2010-form 1.pdf 2018-08-10
16 FORM-6.pdf 2018-08-10
17 ASSIGNMENT.pdf 2018-08-10
17 693-MUMNP-2010-FORM 18(15-9-2011).pdf 2018-08-10
18 693-mumnp-2010-form 2(title page).pdf 2018-08-10
18 abstract1.jpg 2018-08-10
19 693-mumnp-2010-wo international publication report a1.pdf 2018-08-10
20 693-mumnp-2010-form 2.pdf 2018-08-10
20 693-MUMNP-2010-POWER OF ATTORNEY(18-5-2010).pdf 2018-08-10
21 693-MUMNP-2010-FORM 3(7-10-2010).pdf 2018-08-10
21 693-mumnp-2010-other document.pdf 2018-08-10
22 693-MUMNP-2010-Form 3-260416.pdf 2018-08-10
22 693-mumnp-2010-form pct-isa-210.pdf 2018-08-10
23 693-mumnp-2010-form 3.pdf 2018-08-10
23 693-mumnp-2010-form pct-ib-304.pdf 2018-08-10
24 693-mumnp-2010-form 5.pdf 2018-08-10

Search Strategy

1 Searchstrategy_21-02-2017.pdf