
Apparatus and Method for MDCT M/S Stereo with Global ILD with Improved Mid/Side Decision

Abstract: Fig. illustrates an apparatus for encoding a first channel and a second channel of an audio input signal comprising two or more channels to obtain an encoded audio signal according to an embodiment. The apparatus comprises a normalizer (110) configured to determine a normalization value for the audio input signal depending on the first channel of the audio input signal and depending on the second channel of the audio input signal, wherein the normalizer (110) is configured to determine a first channel and a second channel of a normalized audio signal by modifying, depending on the normalization value, at least one of the first channel and the second channel of the audio input signal. Moreover, the apparatus comprises an encoding unit (120) being configured to generate a processed audio signal having a first channel and a second channel, such that one or more spectral bands of the first channel of the processed audio signal are one or more spectral bands of the first channel of the normalized audio signal, such that one or more spectral bands of the second channel of the processed audio signal are one or more spectral bands of the second channel of the normalized audio signal, such that at least one spectral band of the first channel of the processed audio signal is a spectral band of a mid signal depending on a spectral band of the first channel of the normalized audio signal and depending on a spectral band of the second channel of the normalized audio signal, and such that at least one spectral band of the second channel of the processed audio signal is a spectral band of a side signal depending on a spectral band of the first channel of the normalized audio signal and depending on a spectral band of the second channel of the normalized audio signal. The encoding unit (120) is configured to encode the processed audio signal to obtain the encoded audio signal.


Patent Information

Application #
Filing Date
19 July 2018
Publication Number
34/2018
Publication Type
INA
Invention Field
ELECTRONICS
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2024-01-25
Renewal Date

Applicants

FRAUNHOFER-GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
Hansastraße 27c 80686 München

Inventors

1. RAVELLI, Emmanuel
Donato-Polli-Str. 58 91056 Erlangen
2. SCHNELL, Markus
Labenwolfstr. 15 90409 Nürnberg
3. DÖHLA, Stefan
Saidelsteig 61 91058 Erlangen
4. JÄGERS, Wolfgang
Kulmbacher Straße 47 91056 Erlangen
5. DIETZ, Martin
Deutschherrnstraße 37 90429 Nürnberg
6. HELMRICH, Christian
Fraunhoferstr. 21 10587 Berlin
7. MARKOVIC, Goran
Aachener Straße 19 90425 Nürnberg
8. FOTOPOULOU, Eleni
Berckhauserstr. 33 90409 Nürnberg
9. MULTRUS, Markus
Etzlaubweg 7 90469 Nürnberg
10. BAYER, Stefan
Dortmunder Strasse 14 90425 Nürnberg
11. FUCHS, Guillaume
Joseph-Otto-Kolb-Str. 31 91088 Bubenreuth
12. HERRE, Jürgen
Rathsberger Str. 10a 91054 Erlangen

Specification

APPARATUS AND METHOD FOR MDCT M/S STEREO WITH GLOBAL ILD TO IMPROVE MID/SIDE DECISION

Description

The present invention relates to audio signal encoding and audio signal decoding and, in particular, to an apparatus and method for MDCT M/S stereo with global ILD with improved mid/side decision. Band-wise M/S processing (M/S = Mid/Side) in MDCT-based coders (MDCT = Modified Discrete Cosine Transform) is a known and effective method for stereo processing. Yet, it is not sufficient for panned signals, and additional processing, such as complex prediction or a coding of angles between a mid and a side channel, is required. In [1], [2], [3] and [4], M/S processing on windowed and transformed non-normalized (not whitened) signals is described. In [7], prediction between mid and side channels is described: an encoder is disclosed which encodes an audio signal based on a combination of two audio channels. The audio encoder obtains a combination signal being a mid signal, and further obtains a prediction residual signal being a predicted side signal derived from the mid signal. The first combination signal and the prediction residual signal are encoded and written into a data stream together with the prediction information. Moreover, [7] discloses a decoder which generates decoded first and second audio channels using the prediction residual signal, the first combination signal and the prediction information. In [5], the application of M/S stereo coupling after normalization separately on each band is described. In particular, [5] refers to the Opus codec. Opus encodes the mid signal and the side signal as normalized signals m = M/||M|| and s = S/||S||. To recover M and S from m and s, the angle theta_s = arctan(||S||/||M||) is encoded. With N being the size of the band and with a being the total number of bits available for m and s, the optimal allocation for m is a_m.

If ratio_ILD > 1, then the right channel is scaled with 1/ratio_ILD, otherwise the left channel is scaled with ratio_ILD. This effectively means that the louder channel is scaled. If the perceptual spectrum whitening in the time domain is used (as, for example, described in [13]), the single global ILD can also be calculated and applied in the time domain, before the time to frequency domain transformation (i.e. before the MDCT). Alternatively, the perceptual spectrum whitening may be followed by the time to frequency domain transformation, followed by the single global ILD in the frequency domain. Alternatively, the single global ILD may be calculated in the time domain before the time to frequency domain transformation and applied in the frequency domain after the time to frequency domain transformation. The mid channel MDCT_{M,k} and the side channel MDCT_{S,k} are formed from the left channel MDCT_{L,k} and the right channel MDCT_{R,k} as MDCT_{M,k} = 1/sqrt(2) * (MDCT_{L,k} + MDCT_{R,k}) and MDCT_{S,k} = 1/sqrt(2) * (MDCT_{L,k} - MDCT_{R,k}). The spectrum is divided into bands, and for each band it is decided whether the left, right, mid or side channel is used. A global gain G_est is estimated on the signal comprising the concatenated left and right channels. This is different from [6b] and [6a]. The first estimate of the gain as described in chapter 5.3.3.2.8.1.1 "Global gain estimator" of [6b] or of [6a] may, for example, be used, for example assuming an SNR gain of 6 dB per sample per bit from the scalar quantization.
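To make the order of operations concrete, the following is a minimal sketch (not the reference implementation) of the single global ILD scaling and the subsequent mid/side formation on the whitened MDCT spectra, assuming that ratio_ILD has already been derived from the quantized ILD; all names are illustrative.

#include <cmath>

/* Illustrative sketch only: scale the louder channel by the global ILD and form
   mid/side per spectral coefficient, following the formulas given above.        */
void ild_scaling_and_ms(float *mdctL, float *mdctR, float *mdctM, float *mdctS,
                        int nBins, float ratio_ILD)
{
    const float invSqrt2 = 1.0f / std::sqrt(2.0f);

    for (int k = 0; k < nBins; k++) {
        if (ratio_ILD > 1.0f)
            mdctR[k] /= ratio_ILD;      /* right channel is the louder one */
        else
            mdctL[k] *= ratio_ILD;      /* left channel is the louder one  */

        mdctM[k] = invSqrt2 * (mdctL[k] + mdctR[k]);   /* mid  */
        mdctS[k] = invSqrt2 * (mdctL[k] - mdctR[k]);   /* side */
    }
}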
The estimated gain may be multiplied with a constant to obtain an underestimation or an overestimation of the final G_est. Signals in the left, right, mid and side channels are then quantized using G_est, that is, the quantization step size is 1/G_est. The quantized signals are then coded using an arithmetic coder, a Huffman coder or any other entropy coder, in order to get the number of required bits. For example, the context based arithmetic coder described in chapter 5.3.3.2.8.1.3 - chapter 5.3.3.2.8.1.7 of [6b] or of [6a] may be used. Since the rate loop (e.g. 5.3.3.2.8.1.2 in [6b] or in [6a]) will be run after the stereo coding, an estimation of the required bits is enough. As an example, for each quantized channel the required number of bits for context based arithmetic coding is estimated as described in chapter 5.3.3.2.8.1.3 - chapter 5.3.3.2.8.1.7 of [6b] or of [6a]. According to an embodiment, the bit estimation for each quantized channel (left, right, mid or side) is determined based on the following example code:

int context_based_arithmetic_coder_estimate(
    int spectrum[],
    int start_line,
    int end_line,
    int lastnz,             /* lastnz = last non-zero spectrum line */
    int &ctx,               /* ctx = context */
    int &probability,       /* 14 bit fixed point probability */
    const unsigned int cum_freq[N_CONTEXTS][N_SYMBOLS]  /* cum_freq = cumulative frequency tables, 14 bit fixed point */
)
{
    int nBits = 0;
    for (int k = start_line; k < min(lastnz, end_line); k += 2)
    {
        int a1 = abs(spectrum[k]);
        int b1 = abs(spectrum[k + 1]);

        /* sign bits */
        nBits += min(a1, 1);
        nBits += min(b1, 1);

        /* escape coding of the most significant bits */
        while (max(a1, b1) >= 4)
        {
            probability *= cum_freq[ctx][VAL_ESC];
            int nlz = Number_of_leading_zeros(probability);
            nBits += 2 + nlz;
            probability >>= 14 - nlz;
            a1 >>= 1;
            b1 >>= 1;
            ctx = update_context(ctx, VAL_ESC);
        }

        /* coding of the remaining two-tuple */
        int symbol = a1 + 4 * b1;
        probability *= (cum_freq[ctx][symbol] - cum_freq[ctx][symbol + 1]);
        int nlz = Number_of_leading_zeros(probability);
        nBits += nlz;
        probability >>= 14 - nlz;
        ctx = update_context(ctx, a1 + b1);
    }
    return nBits;
}

where spectrum is set to point to the quantized spectrum to be coded, start_line is set to 0, end_line is set to the length of the spectrum, lastnz is set to the index of the last non-zero element of spectrum, ctx is set to 0 and probability is set to 1 in 14 bit fixed point notation (16384 = 1 << 14). As outlined, the above example code may be employed, for example, to obtain a bit estimation for at least one of the left channel, the right channel, the mid channel and the side channel. Some embodiments employ an arithmetic coder as described in [6b] and [6a]. Further details may, e.g., be found in chapter 5.3.3.2.8 "Arithmetic coder" of [6b]. An estimated number of bits for "full dual mono" (b_LR) is then equal to the sum of the bits required for the right and the left channel. An estimated number of bits for the "full M/S" (b_MS) is then equal to the sum of the bits required for the mid and the side channel. In an alternative embodiment, which is an alternative to the above example code, a summation formula may, e.g., be employed to calculate an estimated number of bits for "full dual mono" (b_LR). Moreover, in an alternative embodiment, which is an alternative to the above example code, a summation formula may, e.g., be employed to calculate an estimated number of bits for the "full M/S" (b_MS). For each band i with borders [lb_i, ub_i], it is checked how many bits would be used for coding the quantized signal in the band in the L/R mode (b^i_{bw,LR}) and in the M/S mode (b^i_{bw,MS}).
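As a usage illustration of the example code above, the full-spectrum estimates b_LR and b_MS can be obtained by running the estimator once per channel, each time with a freshly initialized context and probability. This is only a hedged sketch; the quantized spectra, their lastnz indices and the cumulative frequency tables are assumed to be available, and N_CONTEXTS / N_SYMBOLS are placeholders for the actual table dimensions.

/* Illustrative sketch: full-spectrum bit estimates for "full dual mono" (bLR)
   and "full M/S" (bMS). Each channel starts with ctx = 0 and probability = 1. */
void estimate_full_modes(int specL[], int specR[], int specM[], int specS[],
                         int specLen, int lastnzL, int lastnzR, int lastnzM, int lastnzS,
                         const unsigned int cum_freq[N_CONTEXTS][N_SYMBOLS],
                         int &bLR, int &bMS)
{
    int ctx, probability;

    ctx = 0; probability = 1 << 14;   /* 1.0 in 14-bit fixed point */
    bLR  = context_based_arithmetic_coder_estimate(specL, 0, specLen, lastnzL, ctx, probability, cum_freq);
    ctx = 0; probability = 1 << 14;
    bLR += context_based_arithmetic_coder_estimate(specR, 0, specLen, lastnzR, ctx, probability, cum_freq);

    ctx = 0; probability = 1 << 14;
    bMS  = context_based_arithmetic_coder_estimate(specM, 0, specLen, lastnzM, ctx, probability, cum_freq);
    ctx = 0; probability = 1 << 14;
    bMS += context_based_arithmetic_coder_estimate(specS, 0, specLen, lastnzS, ctx, probability, cum_freq);
}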
In other words, a band-wise bit estimation is conducted for the L/R mode for each band i, which results in the L/R mode band-wise bit estimation b^i_{bw,LR} for band i, and a band-wise bit estimation is conducted for the M/S mode for each band i, which results in the M/S mode band-wise bit estimation b^i_{bw,MS} for band i. The mode with fewer bits is chosen for the band. The number of required bits for arithmetic coding is estimated as described in chapter 5.3.3.2.8.1.3 - chapter 5.3.3.2.8.1.7 of [6b] or of [6a]. The total number of bits required for coding the spectrum in the "band-wise M/S" mode (b_BW) is equal to the sum of min(b^i_{bw,LR}, b^i_{bw,MS}):

b_BW = nBands + sum_{i=0}^{nBands-1} min(b^i_{bw,LR}, b^i_{bw,MS})

The "band-wise M/S" mode needs additional nBands bits for signaling in each band whether L/R or M/S coding is used. The choice between the "band-wise M/S", the "full dual mono" and the "full M/S" may, e.g., be coded as the stereo mode in the bitstream, and then the "full dual mono" and the "full M/S" don't need additional bits, compared to the "band-wise M/S", for signaling. For the context based arithmetic coder, b^i_{bw,LR} used in the calculation of b_LR is not equal to b^i_{bw,LR} used in the calculation of b_BW, nor is b^i_{bw,MS} used in the calculation of b_MS equal to b^i_{bw,MS} used in the calculation of b_BW, as b^i_{bw,LR} and b^i_{bw,MS} depend on the choice of the context for the previous b^j_{bw,LR} and b^j_{bw,MS}, where j < i. b_LR may be calculated as the sum of the bits for the left and for the right channel and b_MS may be calculated as the sum of the bits for the mid and for the side channel, where the bits for each channel can be calculated using the example code context_based_arithmetic_coder_estimate, where start_line is set to 0 and end_line is set to lastnz. In an alternative embodiment, which is an alternative to the above example code, the formula

b_LR = nBands + sum_{i=0}^{nBands-1} b^i_{bw,LR}

may, e.g., be employed to calculate an estimated number of bits for "full dual mono" (b_LR), with signaling in each band that L/R coding is used. Moreover, in an alternative embodiment, which is an alternative to the above example code, the formula

b_MS = nBands + sum_{i=0}^{nBands-1} b^i_{bw,MS}

may, e.g., be employed to calculate an estimated number of bits for the "full M/S" (b_MS), with signaling in each band that M/S coding is used. In some embodiments, at first, a gain G may, e.g., be estimated and a quantization step size may, e.g., be estimated, for which it is expected that there are enough bits to code the channels in L/R. In the following, embodiments are provided which describe different ways how to determine a band-wise bit estimation, e.g., how to determine b^i_{bw,LR} and b^i_{bw,MS} according to particular embodiments. As already outlined, according to a particular embodiment, for each quantized channel, the required number of bits for arithmetic coding is estimated, for example, as described in chapter 5.3.3.2.8.1.7 "Bit consumption estimation" of [6b] or of the similar chapter of [6a]. According to an embodiment, the band-wise bit estimation is determined using context_based_arithmetic_coder_estimate for calculating each of b^i_{bw,LR} and b^i_{bw,MS} for every i, by setting start_line to lb_i, end_line to ub_i, and lastnz to the index of the last non-zero element of spectrum. Four contexts (ctxL, ctxR, ctxM, ctxS) and four probabilities (pL, pR, pM, pS) are initialized and then repeatedly updated.
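The resulting band-wise decision can be sketched as follows; this is only an illustration assuming the per-band estimates b^i_{bw,LR} and b^i_{bw,MS} have already been computed as described, with all names illustrative.

/* Illustrative sketch: choose the cheaper of L/R and M/S per band and accumulate
   the total "band-wise M/S" bit count, including one signaling bit per band.     */
int bandwise_ms_bits(const int bbwLR[], const int bbwMS[], int nBands, int msMask[])
{
    int bBW = nBands;                       /* one L/R-vs-M/S flag per band */
    for (int i = 0; i < nBands; i++) {
        if (bbwMS[i] < bbwLR[i]) {
            msMask[i] = 1;                  /* M/S chosen for band i */
            bBW += bbwMS[i];
        } else {
            msMask[i] = 0;                  /* L/R chosen for band i */
            bBW += bbwLR[i];
        }
    }
    return bBW;
}

The stereo mode is then the one with the smallest estimate among b_LR, b_MS and b_BW, as described in the text.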
At the beginning of the estimation (for i = 0), each context (ctxL, ctxR, ctxM, ctxS) is set to 0 and each probability (pL, pR, pM, pS) is set to 1 in 14 bit fixed point notation (16384 = 1 << 14). b^i_{bw,LR} is calculated as the sum of b^i_{bw,L} and b^i_{bw,R}, where b^i_{bw,L} is determined using context_based_arithmetic_coder_estimate by setting spectrum to point to the quantized left spectrum to be coded, ctx to ctxL and probability to pL, and b^i_{bw,R} is determined using context_based_arithmetic_coder_estimate by setting spectrum to point to the quantized right spectrum to be coded, ctx to ctxR and probability to pR. b^i_{bw,MS} is calculated as the sum of b^i_{bw,M} and b^i_{bw,S}, where b^i_{bw,M} is determined using context_based_arithmetic_coder_estimate by setting spectrum to point to the quantized mid spectrum to be coded, ctx to ctxM and probability to pM, and b^i_{bw,S} is determined using context_based_arithmetic_coder_estimate by setting spectrum to point to the quantized side spectrum to be coded, ctx to ctxS and probability to pS. If b^i_{bw,MS} < b^i_{bw,LR}, then ctxL is set to ctxM, ctxR is set to ctxS, pL is set to pM and pR is set to pS; otherwise ctxM is set to ctxL, ctxS is set to ctxR, pM is set to pL and pS is set to pR. In an alternative embodiment, the band-wise bit estimation is obtained as follows: The spectrum is divided into bands and for each band it is decided if M/S processing should be done. For all bands where M/S is used, MDCT_{L,k} and MDCT_{R,k} are replaced with MDCT_{M,k} = 0.5 * (MDCT_{L,k} + MDCT_{R,k}) and MDCT_{S,k} = 0.5 * (MDCT_{L,k} - MDCT_{R,k}). The band-wise M/S vs L/R decision may, e.g., be based on the bit saving bitsSaved_i estimated for the M/S processing, which depends on nlines_i and on the band energies, where NRG_{R,i} is the energy in the i-th band of the right channel, NRG_{L,i} is the energy in the i-th band of the left channel, NRG_{M,i} is the energy in the i-th band of the mid channel, NRG_{S,i} is the energy in the i-th band of the side channel, and nlines_i is the number of spectral coefficients in the i-th band. The mid channel is the sum of the left and the right channel; the side channel is the difference of the left and the right channel. bitsSaved_i is limited using maxBits_{LR,i} and maxBits_{MS,i}, the estimated numbers of bits to be used for the i-th band in the L/R and in the M/S mode, which are derived from bitsAvailable and the band energies. Fig. 7 illustrates calculating a bitrate for the band-wise M/S decision according to an embodiment. In particular, in Fig. 7, the process for calculating b_BW is depicted. To reduce the complexity, the arithmetic coder context for coding the spectrum up to band i-1 is saved and reused in band i. It should be noted that for the context based arithmetic coder, b^i_{bw,LR} and b^i_{bw,MS} depend on the arithmetic coder context, which depends on the M/S vs L/R choice in all bands j < i, as, e.g., described above. Fig. 8 illustrates a stereo mode decision according to an embodiment. If "full dual mono" is chosen, then the complete spectrum consists of MDCT_{L,k} and MDCT_{R,k}. If "full M/S" is chosen, then the complete spectrum consists of MDCT_{M,k} and MDCT_{S,k}. If "band-wise M/S" is chosen, then some bands of the spectrum consist of MDCT_{L,k} and MDCT_{R,k} and other bands consist of MDCT_{M,k} and MDCT_{S,k}. The stereo mode is coded in the bitstream. In the "band-wise M/S" mode, the band-wise M/S decision is also coded in the bitstream. The coefficients of the spectrum in the two channels after the stereo processing are denoted as MDCT_{LM,k} and MDCT_{RS,k}.
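The per-band estimation with the context carry-over described above can be sketched as follows. This is a hedged illustration reusing the example estimator; it assumes the quantized left/right/mid/side spectra and the band borders lb_i/ub_i are available, and that the winning mode's contexts are propagated to the other branch after each band.

/* Illustrative sketch: per-band L/R and M/S bit estimation with separate contexts,
   copying the winner's contexts/probabilities to the loser after each band.        */
void bandwise_estimates(int qL[], int qR[], int qM[], int qS[],
                        const int lb[], const int ub[], int nBands, int lastnz,
                        const unsigned int cum_freq[N_CONTEXTS][N_SYMBOLS],
                        int bbwLR[], int bbwMS[])
{
    int ctxL = 0, ctxR = 0, ctxM = 0, ctxS = 0;
    int pL = 1 << 14, pR = 1 << 14, pM = 1 << 14, pS = 1 << 14;

    for (int i = 0; i < nBands; i++) {
        bbwLR[i] = context_based_arithmetic_coder_estimate(qL, lb[i], ub[i], lastnz, ctxL, pL, cum_freq)
                 + context_based_arithmetic_coder_estimate(qR, lb[i], ub[i], lastnz, ctxR, pR, cum_freq);
        bbwMS[i] = context_based_arithmetic_coder_estimate(qM, lb[i], ub[i], lastnz, ctxM, pM, cum_freq)
                 + context_based_arithmetic_coder_estimate(qS, lb[i], ub[i], lastnz, ctxS, pS, cum_freq);

        if (bbwMS[i] < bbwLR[i]) { ctxL = ctxM; ctxR = ctxS; pL = pM; pR = pS; }   /* M/S wins */
        else                     { ctxM = ctxL; ctxS = ctxR; pM = pL; pS = pR; }   /* L/R wins */
    }
}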
MDCT_{LM,k} is equal to MDCT_{M,k} in M/S bands or to MDCT_{L,k} in L/R bands, and MDCT_{RS,k} is equal to MDCT_{S,k} in M/S bands or to MDCT_{R,k} in L/R bands, depending on the stereo mode and the band-wise M/S decision. The spectrum consisting of MDCT_{LM,k} may, e.g., be referred to as jointly coded channel 0 (Joint Chn 0) or may, e.g., be referred to as first channel, and the spectrum consisting of MDCT_{RS,k} may, e.g., be referred to as jointly coded channel 1 (Joint Chn 1) or may, e.g., be referred to as second channel. The bitrate split ratio is calculated using the energies of the stereo processed channels: rsplit = NRG_LM / (NRG_LM + NRG_RS). The bitrate split ratio is uniformly quantized to rsplit_range = 1 << rsplit_bits steps, where rsplit_bits is the number of bits used for coding the bitrate split ratio; if needed, the quantized value is decreased or increased so that it stays within a usable range. The quantized bitrate split ratio is stored in the bitstream. The bitrate distribution among the channels is: bits_LM = rsplit * (totalBitsAvailable - stereoBits) / rsplit_range and bits_RS = (totalBitsAvailable - stereoBits) - bits_LM. Additionally, it is made sure that there are enough bits for the entropy coder in each channel by checking that bits_LM - sideBits_LM > minBits and bits_RS - sideBits_RS > minBits, where minBits is the minimum number of bits required by the entropy coder. If there are not enough bits for the entropy coder, then the quantized bitrate split ratio is increased or decreased by 1 until bits_LM - sideBits_LM > minBits and bits_RS - sideBits_RS > minBits are fulfilled. Quantization, noise filling and the entropy encoding, including the rate-loop, are as described in 5.3.3.2 "General encoding procedure" of 5.3.3 "MDCT based TCX" in [6b] or in [6a]. The rate-loop can be optimized using the estimated G_est. The power spectrum P (magnitude of the MCLT) is used for the tonality/noise measures in the quantization and the Intelligent Gap Filling (IGF) as described in [6a] or [6b]. Since the whitened and band-wise M/S processed MDCT spectrum is used for the power spectrum, the same FDNS and M/S processing is to be done on the MDST spectrum. The same scaling based on the global ILD of the louder channel is to be done for the MDST as it was done for the MDCT. For the frames where TNS is active, the MDST spectrum used for the power spectrum calculation is estimated from the whitened and M/S processed MDCT spectrum: P_k = MDCT_k^2 + (MDCT_{k+1} - MDCT_{k-1})^2. The decoding process starts with decoding and inverse quantization of the spectrum of the jointly coded channels, followed by the noise filling as described in 6.2.2 "MDCT based TCX" in [6b] or [6a]. The number of bits allocated to each channel is determined based on the window length, the stereo mode and the bitrate split ratio that are coded in the bitstream. The number of bits allocated to each channel must be known before fully decoding the bitstream. In the intelligent gap filling (IGF) block, lines quantized to zero in a certain range of the spectrum, called the target tile, are filled with processed content from a different range of the spectrum, called the source tile. Due to the band-wise stereo processing, the stereo representation (i.e. either L/R or M/S) might differ for the source and the target tile. To ensure good quality, if the representation of the source tile is different from the representation of the target tile, the source tile is processed to transform it to the representation of the target tile prior to the gap filling in the decoder.
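A rough sketch of the bitrate split between the jointly coded channels described above is given next. It is only an illustration under assumptions: the clamping of the quantized ratio and the iteration cap are not taken from the text, and all names are illustrative.

#include <cmath>

/* Illustrative sketch of the bitrate split between Joint Chn 0 (LM) and Joint Chn 1 (RS). */
void split_bitrate(float nrgLM, float nrgRS, int rsplit_bits,
                   int totalBitsAvailable, int stereoBits,
                   int sideBitsLM, int sideBitsRS, int minBits,
                   int &bitsLM, int &bitsRS, int &rsplit_q)
{
    const int rsplit_range = 1 << rsplit_bits;
    float rsplit = nrgLM / (nrgLM + nrgRS + 1e-15f);           /* epsilon avoids division by 0 */

    rsplit_q = (int)std::floor(rsplit * rsplit_range + 0.5f);  /* uniform quantization */
    if (rsplit_q < 1)                rsplit_q = 1;              /* assumed clamping     */
    if (rsplit_q > rsplit_range - 1) rsplit_q = rsplit_range - 1;

    for (int iter = 0; iter < rsplit_range; iter++) {
        bitsLM = rsplit_q * (totalBitsAvailable - stereoBits) / rsplit_range;
        bitsRS = (totalBitsAvailable - stereoBits) - bitsLM;

        if (bitsLM - sideBitsLM < minBits)       rsplit_q++;   /* give Joint Chn 0 more bits */
        else if (bitsRS - sideBitsRS < minBits)  rsplit_q--;   /* give Joint Chn 1 more bits */
        else break;                                            /* both channels have enough  */
    }
}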
This procedure is already described in [9]. The IGF itself is, contrary to [6a] and [6b], applied in the whitened spectral domain instead of the original spectral domain. In contrast to the known stereo codecs (e.g. [9]), the IGF is applied in the whitened, ILD compensated spectral domain. Based on the stereo mode and the band-wise M/S decision, the left and right channels are constructed from the jointly coded channels: MDCT_{L,k} = 1/sqrt(2) * (MDCT_{LM,k} + MDCT_{RS,k}) and MDCT_{R,k} = 1/sqrt(2) * (MDCT_{LM,k} - MDCT_{RS,k}). If ratio_ILD > 1, then the right channel is scaled with ratio_ILD, otherwise the left channel is scaled with 1/ratio_ILD. For each case where a division by 0 could happen, a small epsilon is added to the denominator. For intermediate bitrates, e.g. 48 kbps, MDCT-based coding may, e.g., lead to a too coarse quantization of the spectrum to match the bit-consumption target. That raises the need for parametric coding, which, combined with discrete coding in the same spectral region and adapted on a frame-to-frame basis, increases fidelity. In the following, aspects of some of those embodiments which employ stereo filling are described. It should be noted that for the above embodiments, it is not necessary that stereo filling is employed. So, only some of the above-described embodiments employ stereo filling. Other embodiments of the above-described embodiments do not employ stereo filling at all. Stereo frequency filling in MPEG-H frequency-domain stereo is, for example, described in [11]. In [11], the target energy for each band is reached by exploiting the band energy sent from the encoder in the form of scale factors (for example in AAC). If frequency-domain noise shaping (FDNS) is applied and the spectral envelope is coded by using the LSFs (line spectral frequencies) (see [6a], [6b], [8]), it is not possible to change the scaling only for some frequency bands (spectral bands) as required by the stereo filling algorithm described in [11]. At first, some background information is provided. When mid/side coding is employed, it is possible to encode the side signal in different ways. According to a first group of embodiments, a side signal S is encoded in the same way as a mid signal M. Quantization is conducted, but no further steps are conducted to reduce the necessary bit rate. In general, such an approach aims to allow a quite precise reconstruction of the side signal S on the decoder side, but, on the other hand, requires a large amount of bits for encoding. According to a second group of embodiments, a residual side signal Sres is generated from the original side signal S based on the M signal. In an embodiment, the residual side signal may, for example, be calculated according to the formula: Sres = S - g * M. Other embodiments may, e.g., employ other definitions for the residual side signal. The residual signal Sres is quantized and transmitted to the decoder together with the parameter g. By quantizing the residual signal Sres instead of the original side signal S, in general, more spectral values are quantized to zero. This, in general, saves the amount of bits necessary for encoding and transmitting compared to the quantized original side signal S. In some of these embodiments of the second group of embodiments, a single parameter g is determined for the complete spectrum and transmitted to the decoder.
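For the second group of embodiments, the following sketch computes a residual side signal Sres = S - g * M. The text does not specify how g is obtained; a least-squares fit over the considered spectral range is used here purely as an illustrative assumption, and all names are illustrative.

#include <cmath>

/* Illustrative sketch: compute a residual side signal Sres = S - g*M; g is found
   here by a least-squares fit (an assumption, not taken from the text).           */
float side_residual(const float *M, const float *S, float *Sres, int start, int stop)
{
    float num = 0.0f, den = 0.0f;
    for (int k = start; k < stop; k++) {
        num += S[k] * M[k];
        den += M[k] * M[k];
    }
    float g = num / (den + 1e-15f);          /* prediction parameter (assumed least squares) */
    for (int k = start; k < stop; k++)
        Sres[k] = S[k] - g * M[k];           /* residual side signal */
    return g;                                 /* g is quantized and transmitted to the decoder */
}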
In other embodiments of the second group of embodiments, each of a plurality of frequency bands/spectral bands of the frequency spectrum may, e.g., comprise two or more spectral values, and a parameter g is determined for each of the frequency bands/spectral bands and transmitted to the decoder. Fig. 12 illustrates stereo processing of an encoder side according to the first or the second group of embodiments, which do not employ stereo filling. Fig. 13 illustrates stereo processing of a decoder side according to the first or the second group of embodiments, which do not employ stereo filling. According to a third group of embodiments, stereo filling is employed. In some of these embodiments, on the decoder side, the side signal S for a certain point in time t is generated from a mid signal of the immediately preceding point in time t-1. Generating the side signal S for a certain point in time t from a mid signal of the immediately preceding point in time t-1 on the decoder side may, for example, be conducted according to the formula: S(t) = h_b * M(t-1). On the encoder side, the parameter h_b is determined for each frequency band of a plurality of frequency bands of the spectrum. After determining the parameters h_b, the encoder transmits the parameters h_b to the decoder. In some embodiments, the spectral values of the side signal S itself, or of a residual of it, are not transmitted to the decoder. Such an approach aims to save the number of required bits. In some other embodiments of the third group of embodiments, at least for those frequency bands where the side signal is louder than the mid signal, the spectral values of the side signal in these frequency bands are encoded and transmitted to the decoder. According to a fourth group of embodiments, some of the frequency bands of the side signal S are encoded by explicitly encoding the original side signal S (see the first group of embodiments) or a residual side signal Sres, while for the other frequency bands stereo filling is employed. Such an approach combines the first or the second group of embodiments with the third group of embodiments, which employs stereo filling. For example, lower frequency bands may, e.g., be encoded by quantizing the original side signal S or the residual side signal Sres, while for the other, upper frequency bands, stereo filling may, e.g., be employed. Fig. 9 illustrates stereo processing of an encoder side according to the third or the fourth group of embodiments, which employ stereo filling. Fig. 10 illustrates stereo processing of a decoder side according to the third or the fourth group of embodiments, which employ stereo filling. Those of the above-described embodiments which do employ stereo filling may, for example, employ stereo filling as described in MPEG-H frequency-domain stereo (see, for example, [11]). Some of the embodiments which employ stereo filling may, for example, apply the stereo filling algorithm described in [11] to systems where the spectral envelope is coded as LSF combined with noise filling. Coding the spectral envelope may, for example, be implemented as described in [6a], [6b], [8]. Noise filling may, for example, be implemented as described in [6a] and [6b].
In some particular embodiments, stereo-filling processing, including the stereo filling parameter calculation, may, e.g., be conducted in the M/S bands within the frequency region from, for example, a lower frequency, such as 0.08 Fs (Fs = sampling frequency), up to, for example, an upper frequency, for example the IGF cross-over frequency. For example, for frequency portions lower than the lower frequency (e.g., 0.08 Fs), the original side signal S, or a residual side signal derived from the original side signal S, may, e.g., be quantized and transmitted to the decoder. For frequency portions greater than the upper frequency (e.g., the IGF cross-over frequency), Intelligent Gap Filling (IGF) may, e.g., be conducted. More particularly, in some of the embodiments, the side channel (the second channel), for those frequency bands within the stereo filling range (for example, 0.08 times the sampling frequency up to the IGF cross-over frequency) that are fully quantized to zero, may, for example, be filled using a "copy-over" from the previous frame's whitened MDCT spectrum downmix (IGF = Intelligent Gap Filling). The "copy-over" may, for example, be applied complementary to the noise filling and scaled accordingly, depending on the correction factors that are sent from the encoder. In other embodiments, the lower frequency may exhibit values other than 0.08 Fs. Instead of being 0.08 Fs, in some embodiments, the lower frequency may, e.g., be a value in the range from 0 to 0.50 Fs. In particular embodiments, the lower frequency may be a value in the range from 0.01 Fs to 0.50 Fs. For example, the lower frequency may, e.g., be 0.12 Fs or 0.20 Fs or 0.25 Fs. In other embodiments, in addition to or instead of employing Intelligent Gap Filling, for frequencies greater than the upper frequency, noise filling may, e.g., be conducted. In further embodiments, there is no upper frequency and stereo filling is conducted for each frequency portion greater than the lower frequency. In still further embodiments, there is no lower frequency, and stereo filling is conducted for frequency portions from the lowest frequency band up to the upper frequency. In still further embodiments, there is no lower frequency and no upper frequency, and stereo filling is conducted for the whole frequency spectrum. In the following, particular embodiments which employ stereo filling are described. In particular, stereo filling with correction factors according to particular embodiments is described. Stereo filling with correction factors may, e.g., be employed in the embodiments of the stereo filling processing blocks of Fig. 9 (encoder side) and of Fig. 10 (decoder side). In the following, Dmx_R may, e.g., denote the Mid signal of the whitened MDCT spectrum, S_R may, e.g., denote the Side signal of the whitened MDCT spectrum, Dmx_I may, e.g., denote the Mid signal of the whitened MDST spectrum, S_I may, e.g., denote the Side signal of the whitened MDST spectrum, prevDmx_R may, e.g., denote the Mid signal of the whitened MDCT spectrum delayed by one frame, and prevDmx_I may, e.g., denote the Mid signal of the whitened MDST spectrum delayed by one frame. Stereo filling encoding may be applied when the stereo decision is M/S for all bands (full M/S) or M/S for all stereo filling bands (band-wise M/S). When it is determined to apply full dual-mono processing, stereo filling is bypassed. Moreover, when L/R coding is chosen for some of the spectral bands (frequency bands), stereo filling is also bypassed for these spectral bands.
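The stereo filling range can be mapped to MDCT bin indices; the following is a hedged helper assuming nBins MDCT coefficients covering 0..Fs/2 (so 0.08 * Fs corresponds to bin 0.16 * nBins) and assuming the IGF cross-over bin is known from the IGF configuration. The names are illustrative.

/* Illustrative helper: bins [firstBin, lastBin) form the stereo filling range. */
void stereo_filling_range(int nBins, int igfCrossOverBin, int &firstBin, int &lastBin)
{
    firstBin = (int)(0.16f * nBins);   /* 0.08 * Fs, since bin k covers k * Fs / (2 * nBins) */
    lastBin  = igfCrossOverBin;        /* stereo filling ends where IGF takes over           */
}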
Now, particular embodiments employing stereo filling are considered. There, the processing within the block may, e.g., be conducted as follows: For the frequency bands (fb) that fall within the frequency region starting from the lower frequency (e.g., 0.08 Fs (Fs = sampling frequency)) up to the upper frequency (e.g., the IGF cross-over frequency): A residual Res_R of the side signal S_R is calculated, e.g., according to: Res_R = S_R - a_R * Dmx_R - a_I * Dmx_I, where a_R is the real part and a_I is the imaginary part of the complex prediction coefficient (see [10]). A residual Res_I of the side signal S_I is calculated, e.g., according to: Res_I = S_I - a_R * Dmx_I - a_I * Dmx_R. Energies, e.g., complex-valued energies, of the residual Res and of the previous frame downmix (mid signal) prevDmx are calculated: ERes_fb = sum_fb Res_R^2 + sum_fb Res_I^2 and EprevDmx_fb = sum_fb prevDmx_R^2 + sum_fb prevDmx_I^2. In the above formulae, sum_fb Res_R^2 sums the squares of all spectral values within frequency band fb of Res_R, sum_fb Res_I^2 sums the squares of all spectral values within frequency band fb of Res_I, sum_fb prevDmx_R^2 sums the squares of all spectral values within frequency band fb of prevDmx_R, and sum_fb prevDmx_I^2 sums the squares of all spectral values within frequency band fb of prevDmx_I. From these calculated energies (ERes_fb, EprevDmx_fb), stereo filling correction factors are calculated and transmitted as side information to the decoder: correction_factor_fb = ERes_fb / (EprevDmx_fb + eps). In an embodiment, eps = 0. In other embodiments, e.g., 0.1 > eps > 0, e.g., to avoid a division by 0. A band-wise scaling factor may, e.g., be calculated depending on the calculated stereo filling correction factors, e.g., for each spectral band for which stereo filling is employed. Band-wise scaling of the output Mid and Side (residual) signals by a scaling factor is introduced in order to compensate for the energy loss, as there is no inverse complex prediction operation to reconstruct the side signal from the residual on the decoder side (a_R = a_I = 0). In a particular embodiment, the band-wise scaling factor may, e.g., be calculated according to: scaling_factor_fb = sqrt( (sum_fb (S_R - a_R * Dmx_R)^2 + sum_fb (S_I - a_I * Dmx_I)^2 + EDmx_fb) / (ERes_fb + EDmx_fb + eps) ), where EDmx_fb is the (e.g., complex) energy of the current frame downmix (which may, e.g., be calculated as described above). In some embodiments, after the stereo filling processing in the stereo processing block and prior to quantization, the bins of the residual that fall within the stereo filling frequency range may, e.g., be set to zero if, for the equivalent band, the downmix (Mid) is louder than the residual (Side), e.g., if EDmx_fb > threshold * ERes_fb. Therefore, more bits are spent on coding the downmix and the lower frequency bins of the residual, improving the overall quality. In alternative embodiments, all bins of the residual (Side) may, e.g., be set to zero. Such alternative embodiments may, e.g., be based on the assumption that the downmix is in most cases louder than the residual. Fig. 11 illustrates stereo filling of a side signal according to some particular embodiments on the decoder side. Stereo filling is applied on the side channel after decoding, inverse quantization and noise filling. For the frequency bands, within the stereo filling range, that are quantized to zero, a "copy-over" from the last frame's whitened MDCT spectrum downmix may, e.g., be applied (as seen in Fig. 11), if the band energy after noise filling does not reach the target energy.
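A hedged sketch of the encoder-side parameter calculation for one stereo filling band [lb, ub) follows, based on the complex-prediction formulas above. The array and coefficient names are illustrative, and epsilon uses the 0.1 > eps > 0 option.

#include <cmath>

/* Illustrative sketch: per-band stereo filling correction factor and scaling factor
   for one band [lb, ub), following the formulas given in the text.                  */
void stereo_filling_params(const float *SR, const float *SI,
                           const float *DmxR, const float *DmxI,
                           const float *prevDmxR, const float *prevDmxI,
                           float aR, float aI, int lb, int ub,
                           float &correction_factor, float &scaling_factor)
{
    const float eps = 1e-2f;
    float ERes = 0.0f, EprevDmx = 0.0f, EDmx = 0.0f, num = 0.0f;

    for (int k = lb; k < ub; k++) {
        float resR = SR[k] - aR * DmxR[k] - aI * DmxI[k];   /* residual, MDCT part */
        float resI = SI[k] - aR * DmxI[k] - aI * DmxR[k];   /* residual, MDST part */
        ERes     += resR * resR + resI * resI;
        EprevDmx += prevDmxR[k] * prevDmxR[k] + prevDmxI[k] * prevDmxI[k];
        EDmx     += DmxR[k] * DmxR[k] + DmxI[k] * DmxI[k];
        /* numerator of the scaling factor: prediction without the cross terms */
        num      += (SR[k] - aR * DmxR[k]) * (SR[k] - aR * DmxR[k])
                  + (SI[k] - aI * DmxI[k]) * (SI[k] - aI * DmxI[k]);
    }

    correction_factor = ERes / (EprevDmx + eps);                     /* transmitted to the decoder   */
    scaling_factor    = std::sqrt((num + EDmx) / (ERes + EDmx + eps)); /* applied to Mid and Side    */
}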
The target energy per frequency band is calculated from the stereo correction factors that are sent as parameters from the encoder, for example according to the formula: ET_fb = correction_factor_fb * EprevDmx_fb. The generation of the side signal on the decoder side (which may, e.g., be referred to as a previous downmix "copy-over") is conducted, for example, according to the formula: S_i = N_i + facDmx_fb * prevDmx_i, i in [fb, fb + 1], where i denotes the frequency bins (spectral values) within the frequency band fb, N is the noise filled spectrum and facDmx_fb is a factor that is applied on the previous downmix and that depends on the stereo filling correction factors sent from the encoder. facDmx_fb may, in a particular embodiment, e.g., be calculated for each frequency band fb as: facDmx_fb = sqrt( correction_factor_fb - EN_fb / (EprevDmx_fb + eps) ), where EN_fb is the energy of the noise-filled spectrum in band fb and EprevDmx_fb is the respective previous frame downmix energy. On the encoder side, alternative embodiments do not take the MDST spectrum (or the MDCT spectrum) into account. In those embodiments, the proceeding on the encoder side is adapted, for example, as follows: For the frequency bands (fb) that fall within the frequency region starting from the lower frequency (e.g., 0.08 Fs (Fs = sampling frequency)) up to the upper frequency (e.g., the IGF cross-over frequency): A residual Res_R of the side signal S_R is calculated, e.g., according to: Res_R = S_R - a_R * Dmx_R, where a_R is a (e.g., real) prediction coefficient. Energies of the residual Res and of the previous frame downmix (mid signal) prevDmx are calculated: ERes_fb = sum_fb Res_R^2 and EprevDmx_fb = sum_fb prevDmx_R^2. From these calculated energies (ERes_fb, EprevDmx_fb), stereo filling correction factors are calculated and transmitted as side information to the decoder: correction_factor_fb = ERes_fb / (EprevDmx_fb + eps). In an embodiment, eps = 0. In other embodiments, e.g., 0.1 > eps > 0, e.g., to avoid a division by 0. A band-wise scaling factor may, e.g., be calculated depending on the calculated stereo filling correction factors, e.g., for each spectral band for which stereo filling is employed. In a particular embodiment, the band-wise scaling factor may, e.g., be calculated according to: scaling_factor_fb = sqrt( (sum_fb (S_R - a_R * Dmx_R)^2 + EDmx_fb) / (ERes_fb + EDmx_fb + eps) ), where EDmx_fb is the energy of the current frame downmix (which may, e.g., be calculated as described above). In some embodiments, after the stereo filling processing in the stereo processing block and prior to quantization, the bins of the residual that fall within the stereo filling frequency range may, e.g., be set to zero if, for the equivalent band, the downmix (Mid) is louder than the residual (Side), e.g., if EDmx_fb > threshold * ERes_fb. Therefore, more bits are spent on coding the downmix and the lower frequency bins of the residual, improving the overall quality. In alternative embodiments, all bins of the residual (Side) may, e.g., be set to zero. Such alternative embodiments may, e.g., be based on the assumption that the downmix is in most cases louder than the residual. According to some of the embodiments, means may, e.g., be provided to apply stereo filling in systems with FDNS, where the spectral envelope is coded using LSF (or a similar coding where it is not possible to independently change the scaling in single bands). According to some of the embodiments, means may, e.g., be provided to apply stereo filling in systems without the complex/real prediction.
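On the decoder side, the copy-over described above can be sketched for one zero-quantized side-channel band as follows. The names are illustrative, and the guard for a non-positive term under the square root is an assumption covering the case where noise filling already reaches the target energy.

#include <cmath>

/* Illustrative sketch: decoder-side stereo filling of one band [lb, ub) of the side
   channel, complementing the noise-filled spectrum N with a scaled copy of the
   previous frame's downmix so that the band approaches the signalled target energy. */
void stereo_filling_decode(float *S, const float *N, const float *prevDmx,
                           float correction_factor, int lb, int ub)
{
    const float eps = 1e-2f;
    float EN = 0.0f, EprevDmx = 0.0f;

    for (int k = lb; k < ub; k++) {
        EN       += N[k] * N[k];                 /* energy after noise filling          */
        EprevDmx += prevDmx[k] * prevDmx[k];     /* energy of previous frame downmix    */
    }

    float facDmx2 = correction_factor - EN / (EprevDmx + eps);
    if (facDmx2 <= 0.0f)
        return;                                   /* noise filling already reaches the target energy */
    float facDmx = std::sqrt(facDmx2);

    for (int k = lb; k < ub; k++)
        S[k] = N[k] + facDmx * prevDmx[k];        /* "copy-over" from the previous downmix */
}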
Some of the embodiments may, e.g., employ parametric stereo filling, in the sense that explicit parameters (stereo filling correction factors) are sent from the encoder to the decoder to control the stereo filling (e.g. with the downmix of the previous frame) of the whitened left and right MDCT spectrum. More generally: In some of the embodiments, the encoding unit 120 of Fig. 1a - Fig. 1e may, e.g., be configured to generate the processed audio signal, such that said at least one spectral band of the first channel of the processed audio signal is said spectral band of said mid signal, and such that said at least one spectral band of the second channel of the processed audio signal is said spectral band of said side signal. To obtain the encoded audio signal, the encoding unit 120 may, e.g., be configured to encode said spectral band of said side signal by determining a correction factor for said spectral band of said side signal. The encoding unit 120 may, e.g., be configured to determine said correction factor for said spectral band of said side signal depending on a residual and depending on a spectral band of a previous mid signal, which corresponds to said spectral band of said mid signal, wherein the previous mid signal precedes said mid signal in time. Moreover, the encoding unit 120 may, e.g., be configured to determine the residual depending on said spectral band of said side signal and depending on said spectral band of said mid signal. According to some of the embodiments, the encoding unit 120 may, e.g., be configured to determine said correction factor for said spectral band of said side signal according to the formula correction_factor_fb = ERes_fb / (EprevDmx_fb + eps), wherein correction_factor_fb indicates said correction factor for said spectral band of said side signal, wherein ERes_fb indicates a residual energy depending on an energy of a spectral band of said residual, which corresponds to said spectral band of said mid signal, wherein EprevDmx_fb indicates a previous energy depending on an energy of the spectral band of the previous mid signal, and wherein eps = 0, or wherein 0.1 > eps > 0. In some of the embodiments, said residual may, e.g., be defined according to Res_R = S_R - a_R * Dmx_R, wherein Res_R is said residual, wherein S_R is said side signal, wherein a_R is a (e.g., real) coefficient (e.g., a prediction coefficient), wherein Dmx_R is said mid signal, and wherein the encoding unit (120) is configured to determine said residual energy according to ERes_fb = sum_fb Res_R^2. According to some of the embodiments, said residual is defined according to Res_R = S_R - a_R * Dmx_R - a_I * Dmx_I, wherein Res_R is said residual, wherein S_R is said side signal, wherein a_R is a real part of a complex (prediction) coefficient and a_I is an imaginary part of said complex (prediction) coefficient, wherein Dmx_R is said mid signal, wherein Dmx_I is another mid signal depending on the first channel of the normalized audio signal and depending on the second channel of the normalized audio signal, wherein another residual of another side signal S_I depending on the first channel of the normalized audio signal and depending on the second channel of the normalized audio signal is defined according to Res_I = S_I - a_R * Dmx_I - a_I * Dmx_R, wherein the encoding unit 120 may, e.g., be configured to determine said residual energy according to ERes_fb = sum_fb Res_R^2 + sum_fb Res_I^2, and wherein the encoding unit 120 may, e.g.,
be configured to determine the previous energy depending on the energy of the spectral band of said residual, which corresponds to said spectral band of said mid signal, and depending on an energy of a spectral band of said another residual, which corresponds to said spectral band of said mid signal. In some of the embodiments, the decoding unit 210 of Fig. 2a - Fig. 2e may, e.g., be configured to determine, for each spectral band of said plurality of spectral bands, whether said spectral band of the first channel of the encoded audio signal and said spectral band of the second channel of the encoded audio signal was encoded using dual-mono encoding or using mid-side encoding. Moreover, the decoding unit 210 may, e.g., be configured to obtain said spectral band of the second channel of the encoded audio signal by reconstructing said spectral band of the second channel. If mid-side encoding was used, said spectral band of the first channel of the encoded audio signal is a spectral band of a mid signal, and said spectral band of the second channel of the encoded audio signal is a spectral band of a side signal. Moreover, if mid-side encoding was used, the decoding unit 210 may, e.g., be configured to reconstruct said spectral band of the side signal depending on a correction factor for said spectral band of the side signal and depending on a spectral band of a previous mid signal, which corresponds to said spectral band of said mid signal, wherein the previous mid signal precedes said mid signal in time. According to some of the embodiments, if mid-side encoding was used, the decoding unit 210 may, e.g., be configured to reconstruct said spectral band of the side signal by reconstructing the spectral values of said spectral band of the side signal according to S_i = N_i + facDmx_fb * prevDmx_i, wherein S_i indicates the spectral values of said spectral band of the side signal, wherein prevDmx_i indicates the spectral values of the spectral band of said previous mid signal, wherein N_i indicates the spectral values of a noise filled spectrum, wherein facDmx_fb is defined according to facDmx_fb = sqrt( correction_factor_fb - EN_fb / (EprevDmx_fb + eps) ), wherein correction_factor_fb is said correction factor for said spectral band of the side signal, wherein EN_fb is an energy of the noise-filled spectrum, wherein EprevDmx_fb is an energy of said spectral band of said previous mid signal, and wherein eps = 0, or wherein 0.1 > eps > 0. In some of the embodiments, a residual may, e.g., be derived from a complex stereo prediction algorithm at the encoder, while there is no stereo prediction (real or complex) at the decoder side. According to some of the embodiments, an energy correcting scaling of the spectrum at the encoder side may, e.g., be used to compensate for the fact that there is no inverse prediction processing at the decoder side. Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software or at least partially in hardware or at least partially in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable. Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed. Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier. Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier. In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer. A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory. A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet. A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein. A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein. A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver. In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus. 
The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer. The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer. The above described embodiments are merely illustrative for the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the impending patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.

Bibliography

[1] J. Herre, E. Eberlein and K. Brandenburg, "Combined Stereo Coding," in 93rd AES Convention, San Francisco, 1992.
[2] J. D. Johnston and A. J. Ferreira, "Sum-difference stereo transform coding," in Proc. ICASSP, 1992.
[3] ISO/IEC 11172-3, Information technology - Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s - Part 3: Audio, 1993.
[4] ISO/IEC 13818-7, Information technology - Generic coding of moving pictures and associated audio information - Part 7: Advanced Audio Coding (AAC), 2003.
[5] J.-M. Valin, G. Maxwell, T. B. Terriberry and K. Vos, "High-Quality, Low-Delay Music Coding in the Opus Codec," in Proc. AES 135th Convention, New York, 2013.
[6a] 3GPP TS 26.445, Codec for Enhanced Voice Services (EVS); Detailed algorithmic description, V 12.5.0, December 2015.
[6b] 3GPP TS 26.445, Codec for Enhanced Voice Services (EVS); Detailed algorithmic description, V 13.3.0, September 2016.
[7] H. Purnhagen, P. Carlsson, L. Villemoes, J. Robilliard, M. Neusinger, C. Helmrich, J. Hilpert, N. Rettelbach, S. Disch and B. Edler, "Audio encoder, audio decoder and related methods for processing multi-channel audio signals using complex prediction", US Patent 8,655,670 B2, 18 February 2014.
[8] G. Markovic, F. Guillaume, N. Rettelbach, C. Helmrich and B. Schubert, "Linear prediction based coding scheme using spectral domain noise shaping", European Patent 2676266 B1, 14 February 2011.
[9] S. Disch, F. Nagel, R. Geiger, B. N. Thoshkahna, K. Schmidt, S. Bayer, C. Neukam, B. Edler and C. Helmrich, "Audio Encoder, Audio Decoder and Related Methods Using Two-Channel Processing Within an Intelligent Gap Filling Framework", International Patent Application PCT/EP2014/065106, 15 July 2014.
[10] C. Helmrich, P. Carlsson, S. Disch, B. Edler, J. Hilpert, M. Neusinger, H. Purnhagen, N. Rettelbach, J. Robilliard and L. Villemoes, "Efficient Transform Coding Of Two-channel Audio Signals By Means Of Complex-valued Stereo Prediction," in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, Prague, 2011.
[11] C. R. Helmrich, A. Niedermeier, S. Bayer and B. Edler, "Low-complexity semi-parametric joint-stereo audio transform coding," in Signal Processing Conference (EUSIPCO), 2015 23rd European, 2015.
[12] H. Malvar, "A Modulated Complex Lapped Transform and its Applications to Audio Processing," in Acoustics, Speech, and Signal Processing (ICASSP), 1999 IEEE International Conference on, Phoenix, AZ, 1999.
[13] B. Edler and G. Schuller, "Audio coding using a psychoacoustic pre- and post-filter," Acoustics, Speech, and Signal Processing, 2000. ICASSP '00.
Claims

1. An apparatus for encoding a first channel and a second channel of an audio input signal comprising two or more channels to obtain an encoded audio signal, wherein the apparatus comprises: a normalizer (110) configured to determine a normalization value for the audio input signal depending on the first channel of the audio input signal and depending on the second channel of the audio input signal, wherein the normalizer (110) is configured to determine a first channel and a second channel of a normalized audio signal by modifying, depending on the normalization value, at least one of the first channel and the second channel of the audio input signal, an encoding unit (120) being configured to generate a processed audio signal having a first channel and a second channel, such that one or more spectral bands of the first channel of the processed audio signal are one or more spectral bands of the first channel of the normalized audio signal, such that one or more spectral bands of the second channel of the processed audio signal are one or more spectral bands of the second channel of the normalized audio signal, such that at least one spectral band of the first channel of the processed audio signal is a spectral band of a mid signal depending on a spectral band of the first channel of the normalized audio signal and depending on a spectral band of the second channel of the normalized audio signal, and such that at least one spectral band of the second channel of the processed audio signal is a spectral band of a side signal depending on a spectral band of the first channel of the normalized audio signal and depending on a spectral band of the second channel of the normalized audio signal, wherein the encoding unit (120) is configured to encode the processed audio signal to obtain the encoded audio signal.
2. An apparatus according to claim 1, wherein the encoding unit (120) is configured to choose between a full-mid-side encoding mode and a full-dual-mono encoding mode and a band-wise encoding mode depending on a plurality of spectral bands of a first channel of the normalized audio signal and depending on a plurality of spectral bands of a second channel of the normalized audio signal, wherein the encoding unit (120) is configured, if the full-mid-side encoding mode is chosen, to generate a mid signal from the first channel and from the second channel of the normalized audio signal as a first channel of a mid-side signal, to generate a side signal from the first channel and from the second channel of the normalized audio signal as a second channel of the mid-side signal, and to encode the mid-side signal to obtain the encoded audio signal, wherein the encoding unit (120) is configured, if the full-dual-mono encoding mode is chosen, to encode the normalized audio signal to obtain the encoded audio signal, and wherein the encoding unit (120) is configured, if the band-wise encoding mode is chosen, to generate the processed audio signal, such that one or more spectral bands of the first channel of the processed audio signal are one or more spectral bands of the first channel of the normalized audio signal, such that one or more spectral bands of the second channel of the processed audio signal are one or more spectral bands of the second channel of the normalized audio signal, such that at least one spectral band of the first channel of the processed audio signal is a spectral band of a mid signal depending on a spectral band of the first channel of the normalized audio signal and depending on a spectral band of the second channel of the normalized audio signal, and such that at least one spectral band of the second channel of the processed audio signal is a spectral band of a side signal depending on a spectral band of the first channel of the normalized audio signal and depending on a spectral band of the second channel of the normalized audio signal, wherein the encoding unit (120) is configured to encode the processed audio signal to obtain the encoded audio signal.
3. An apparatus according to claim 2, wherein the encoding unit (120) is configured, if the band-wise encoding mode is chosen, to decide for each spectral band of a plurality of spectral bands of the processed audio signal, whether mid-side encoding is employed or whether dual-mono encoding is employed, wherein, if the mid-side encoding is employed for said spectral band, the encoding unit (120) is configured to generate said spectral band of the first channel of the processed audio signal as a spectral band of a mid signal based on said spectral band of the first channel of the normalized audio signal and based on said spectral band of the second channel of the normalized audio signal, and the encoding unit (120) is configured to generate said spectral band of the second channel of the processed audio signal as a spectral band of a side signal based on said spectral band of the first channel of the normalized audio signal and based on said spectral band of the second channel of the normalized audio signal, and wherein, if the dual-mono encoding is employed for said spectral band, the encoding unit (120) is configured to use said spectral band of the first channel of the normalized audio signal as said spectral band of the first channel of the processed audio signal, and is configured to use said spectral band of the second channel of the normalized audio signal as said spectral band of the second channel of the processed audio signal, or the encoding unit (120) is configured to use said spectral band of the second channel of the normalized audio signal as said spectral band of the first channel of the processed audio signal, and is configured to use said spectral band of the first channel of the normalized audio signal as said spectral band of the second channel of the processed audio signal.

4. An apparatus according to claim 2 or 3, wherein the encoding unit (120) is configured to choose between the full-mid-side encoding mode and the full-dual-mono encoding mode and the band-wise encoding mode by determining a first estimation estimating a first number of bits that are needed for encoding when the full-mid-side encoding mode is employed, by determining a second estimation estimating a second number of bits that are needed for encoding when the full-dual-mono encoding mode is employed, by determining a third estimation estimating a third number of bits that are needed for encoding when the band-wise encoding mode is employed, and by choosing that encoding mode among the full-mid-side encoding mode and the full-dual-mono encoding mode and the band-wise encoding mode that has a smallest number of bits among the first estimation and the second estimation and the third estimation.

5. An apparatus according to claim 4, wherein the encoding unit (120) is configured to estimate the third estimation b_BW, estimating the third number of bits that are needed for encoding when the band-wise encoding mode is employed, according to the formula: b_BW = nBands + sum_{i=0}^{nBands-1} min(b^i_{bw,LR}, b^i_{bw,MS}), wherein nBands is a number of spectral bands of the normalized audio signal, wherein b^i_{bw,MS} is an estimation for a number of bits that are needed for encoding an i-th spectral band of the mid signal and for encoding the i-th spectral band of the side signal, and wherein b^i_{bw,LR} is an estimation for a number of bits that are needed for encoding an i-th spectral band of the first signal and for encoding the i-th spectral band of the second signal.
6. An apparatus according to claim 2 or 3, wherein the encoding unit (120) is configured to choose between the full-mid-side encoding mode and the full-dual-mono encoding mode and the band-wise encoding mode by determining a first estimation estimating a first number of bits that are saved when encoding in the full-mid-side encoding mode, by determining a second estimation estimating a second number of bits that are saved when encoding in the full-dual-mono encoding mode, by determining a third estimation estimating a third number of bits that are saved when encoding in the band-wise encoding mode, and by choosing that encoding mode among the full-mid-side encoding mode and the full-dual-mono encoding mode and the band-wise encoding mode that has a greatest number of bits that are saved among the first estimation and the second estimation and the third estimation.

7. An apparatus according to claim 2 or 3, wherein the encoding unit (120) is configured to choose between the full-mid-side encoding mode and the full-dual-mono encoding mode and the band-wise encoding mode by estimating a first signal-to-noise ratio that occurs when the full-mid-side encoding mode is employed, by estimating a second signal-to-noise ratio that occurs when the full-dual-mono encoding mode is employed, by estimating a third signal-to-noise ratio that occurs when the band-wise encoding mode is employed, and by choosing that encoding mode among the full-mid-side encoding mode and the full-dual-mono encoding mode and the band-wise encoding mode that has a greatest signal-to-noise ratio among the first signal-to-noise ratio and the second signal-to-noise ratio and the third signal-to-noise ratio.

8. An apparatus according to claim 1, wherein the encoding unit (120) is configured to generate the processed audio signal, such that said at least one spectral band of the first channel of the processed audio signal is said spectral band of said mid signal, and such that said at least one spectral band of the second channel of the processed audio signal is said spectral band of said side signal, wherein, to obtain the encoded audio signal, the encoding unit (120) is configured to encode said spectral band of said side signal by determining a correction factor for said spectral band of said side signal, wherein the encoding unit (120) is configured to determine said correction factor for said spectral band of said side signal depending on a residual and depending on a spectral band of a previous mid signal, which corresponds to said spectral band of said mid signal, wherein the previous mid signal precedes said mid signal in time, wherein the encoding unit (120) is configured to determine the residual depending on said spectral band of said side signal, and depending on said spectral band of said mid signal.

9. An apparatus according to claim 8, wherein the encoding unit (120) is configured to determine said correction factor for said spectral band of said side signal according to the formula

correction_factor_fb = E_Res_fb / (E_prevDmx_fb + ε)

wherein correction_factor_fb indicates said correction factor for said spectral band of said side signal, wherein E_Res_fb indicates a residual energy depending on an energy of a spectral band of said residual, which corresponds to said spectral band of said mid signal, wherein E_prevDmx_fb indicates a previous energy depending on an energy of the spectral band of the previous mid signal, and wherein ε = 0, or wherein 0.1 > ε > 0.
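A minimal sketch of the correction factor of claims 8 and 9, using the real-valued residual of claim 10; the least-squares estimate of the coefficient a_R and the helper name correction_factor are illustrative assumptions, not part of the claims.

```python
import numpy as np

def correction_factor(side_fb, mid_fb, prev_mid_fb, eps=1e-9):
    """correction_factor_fb = E_Res_fb / (E_prevDmx_fb + eps) as in claim 9,
    with the real-valued residual Res = S - a_R * Dmx of claim 10.
    The least-squares choice of a_R is an illustrative assumption."""
    a_r = np.dot(side_fb, mid_fb) / (np.dot(mid_fb, mid_fb) + eps)
    res = side_fb - a_r * mid_fb
    e_res = float(np.sum(res * res))          # E_Res_fb
    e_prev = float(np.sum(prev_mid_fb ** 2))  # E_prevDmx_fb
    return e_res / (e_prev + eps)
```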
10. An apparatus according to claim 8 or 9, wherein said residual is defined according to

Res_R = S_R − a_R · Dmx_R

wherein Res_R is said residual, wherein S_R is said side signal, wherein a_R is a coefficient, wherein Dmx_R is said mid signal, wherein the encoding unit (120) is configured to determine said residual energy according to

E_Res_fb = Σ_fb (Res_R)²

11. An apparatus according to claim 8 or 9, wherein said residual is defined according to

Res_R = S_R − a_R · Dmx_R − a_I · Dmx_I

wherein Res_R is said residual, wherein S_R is said side signal, wherein a_R is a real part of a complex coefficient, and wherein a_I is an imaginary part of said complex coefficient, wherein Dmx_R is said mid signal, wherein Dmx_I is another mid signal depending on the first channel of the normalized audio signal and depending on the second channel of the normalized audio signal, wherein another residual of another side signal S_I depending on the first channel of the normalized audio signal and depending on the second channel of the normalized audio signal is defined according to

Res_I = S_I − a_R · Dmx_I − a_I · Dmx_R

wherein the encoding unit (120) is configured to determine said residual energy according to …, and wherein the encoding unit (120) is configured to determine the previous energy depending on the energy of the spectral band of said residual, which corresponds to said spectral band of said mid signal, and depending on an energy of a spectral band of said another residual, which corresponds to said spectral band of said mid signal.

12. An apparatus according to one of the preceding claims, wherein the normalizer (110) is configured to determine the normalization value for the audio input signal depending on an energy of the first channel of the audio input signal and depending on an energy of the second channel of the audio input signal.

13. An apparatus according to one of the preceding claims, wherein the audio input signal is represented in a spectral domain, wherein the normalizer (110) is configured to determine the normalization value for the audio input signal depending on a plurality of spectral bands of the first channel of the audio input signal and depending on a plurality of spectral bands of the second channel of the audio input signal, and wherein the normalizer (110) is configured to determine the normalized audio signal by modifying, depending on the normalization value, the plurality of spectral bands of at least one of the first channel and the second channel of the audio input signal.

14. An apparatus according to claim 13, wherein the normalizer (110) is configured to determine the normalization value based on the formulae:

ILD = NRG_R / (NRG_L + NRG_R)

wherein MDCT_L,k is a k-th coefficient of an MDCT spectrum of the first channel of the audio input signal, and MDCT_R,k is the k-th coefficient of the MDCT spectrum of the second channel of the audio input signal, and wherein the normalizer (110) is configured to determine the normalization value by quantizing ILD.

15. An apparatus according to claim 13 or 14, wherein the apparatus for encoding further comprises a transform unit (102) and a preprocessing unit (105), wherein the transform unit (102) is configured to transform a time-domain audio signal from a time domain to a frequency domain to obtain a transformed audio signal, wherein the preprocessing unit (105) is configured to generate the first channel and the second channel of the audio input signal by applying an encoder-side frequency domain noise shaping operation on the transformed audio signal.
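A minimal sketch of the global-ILD normalization of claims 13 and 14, as reconstructed above. The square-root energy definition, the uniform quantizer, the choice to attenuate the louder channel, and the helper name global_ild_normalize are assumptions of this sketch.

```python
import numpy as np

def global_ild_normalize(mdct_L, mdct_R, ild_bits=5, eps=1e-12):
    """Compute a single global ILD from the two MDCT spectra, quantize it, and
    scale the louder channel so that both channels reach a comparable level
    (claims 13/14). Quantizer and scaling convention are illustrative."""
    nrg_L = np.sqrt(np.sum(mdct_L ** 2))             # assumed level of channel 1
    nrg_R = np.sqrt(np.sum(mdct_R ** 2))             # assumed level of channel 2
    ild = nrg_R / (nrg_L + nrg_R + eps)              # ILD as reconstructed in claim 14
    q = np.round(ild * (2 ** ild_bits - 1)) / (2 ** ild_bits - 1)   # quantized ILD
    ratio = q / max(1.0 - q, eps)                    # ~ NRG_R / NRG_L after quantization
    if ratio > 1.0:
        mdct_R = mdct_R / ratio                      # right channel is louder: attenuate it
    else:
        mdct_L = mdct_L * ratio                      # left channel is louder: attenuate it
    return mdct_L, mdct_R, q
```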
16. An apparatus according to claim 15, wherein the preprocessing unit (105) is configured to generate the first channel and the second channel of the audio input signal by applying an encoder-side temporal noise shaping operation on the transformed audio signal before applying the encoder-side frequency domain noise shaping operation on the transformed audio signal.

17. An apparatus according to one of claims 1 to 12, wherein the normalizer (110) is configured to determine a normalization value for the audio input signal depending on the first channel of the audio input signal being represented in a time domain and depending on the second channel of the audio input signal being represented in the time domain, wherein the normalizer (110) is configured to determine the first channel and the second channel of the normalized audio signal by modifying, depending on the normalization value, at least one of the first channel and the second channel of the audio input signal being represented in the time domain, wherein the apparatus further comprises a transform unit (115) being configured to transform the normalized audio signal from the time domain to a spectral domain so that the normalized audio signal is represented in the spectral domain, and wherein the transform unit is configured to feed the normalized audio signal being represented in the spectral domain into the encoding unit (120).

18. An apparatus according to claim 17, wherein the apparatus further comprises a preprocessing unit (106) being configured to receive a time-domain audio signal comprising a first channel and a second channel, wherein the preprocessing unit (106) is configured to apply a filter on the first channel of the time-domain audio signal that produces a first perceptually whitened spectrum to obtain the first channel of the audio input signal being represented in the time domain, and wherein the preprocessing unit (106) is configured to apply the filter on the second channel of the time-domain audio signal that produces a second perceptually whitened spectrum to obtain the second channel of the audio input signal being represented in the time domain.

19. An apparatus according to claim 17 or 18, wherein the transform unit (115) is configured to transform the normalized audio signal from the time domain to the spectral domain to obtain a transformed audio signal, wherein the apparatus furthermore comprises a spectral-domain preprocessor (118) being configured to conduct encoder-side temporal noise shaping on the transformed audio signal to obtain the normalized audio signal being represented in the spectral domain.

20. An apparatus according to one of the preceding claims, wherein the encoding unit (120) is configured to obtain the encoded audio signal by applying encoder-side Stereo Intelligent Gap Filling on the normalized audio signal or on the processed audio signal.

21. An apparatus according to one of the preceding claims, wherein the audio input signal is an audio stereo signal comprising exactly two channels.
22. A system for encoding four channels of an audio input signal comprising four or more channels to obtain an encoded audio signal, wherein the system comprises: a first apparatus (170) according to one of claims 1 to 20, for encoding a first channel and a second channel of the four or more channels of the audio input signal to obtain a first channel and a second channel of the encoded audio signal, and a second apparatus (180) according to one of claims 1 to 20, for encoding a third channel and a fourth channel of the four or more channels of the audio input signal to obtain a third channel and a fourth channel of the encoded audio signal.

23. An apparatus for decoding an encoded audio signal comprising a first channel and a second channel to obtain a first channel and a second channel of a decoded audio signal comprising two or more channels, wherein the apparatus comprises a decoding unit (210) configured to determine for each spectral band of a plurality of spectral bands, whether said spectral band of the first channel of the encoded audio signal and said spectral band of the second channel of the encoded audio signal was encoded using dual-mono encoding or using mid-side encoding, wherein the decoding unit (210) is configured to use said spectral band of the first channel of the encoded audio signal as a spectral band of a first channel of an intermediate audio signal and is configured to use said spectral band of the second channel of the encoded audio signal as a spectral band of a second channel of the intermediate audio signal, if the dual-mono encoding was used, wherein the decoding unit (210) is configured to generate a spectral band of the first channel of the intermediate audio signal based on said spectral band of the first channel of the encoded audio signal and based on said spectral band of the second channel of the encoded audio signal, and to generate a spectral band of the second channel of the intermediate audio signal based on said spectral band of the first channel of the encoded audio signal and based on said spectral band of the second channel of the encoded audio signal, if the mid-side encoding was used, and wherein the apparatus comprises a de-normalizer (220) configured to modify, depending on a de-normalization value, at least one of the first channel and the second channel of the intermediate audio signal to obtain the first channel and the second channel of the decoded audio signal.
24. An apparatus according to claim 23, wherein the decoding unit (210) is configured to determine whether the encoded audio signal is encoded in a full-mid-side encoding mode or in a full-dual-mono encoding mode or in a band-wise encoding mode, wherein the decoding unit (210) is configured, if it is determined that the encoded audio signal is encoded in the full-mid-side encoding mode, to generate the first channel of the intermediate audio signal from the first channel and from the second channel of the encoded audio signal, and to generate the second channel of the intermediate audio signal from the first channel and from the second channel of the encoded audio signal, wherein the decoding unit (210) is configured, if it is determined that the encoded audio signal is encoded in the full-dual-mono encoding mode, to use the first channel of the encoded audio signal as the first channel of the intermediate audio signal, and to use the second channel of the encoded audio signal as the second channel of the intermediate audio signal, and wherein the decoding unit (210) is configured, if it is determined that the encoded audio signal is encoded in the band-wise encoding mode, to determine for each spectral band of a plurality of spectral bands, whether said spectral band of the first channel of the encoded audio signal and said spectral band of the second channel of the encoded audio signal was encoded using the dual-mono encoding or using the mid-side encoding, to use said spectral band of the first channel of the encoded audio signal as a spectral band of the first channel of the intermediate audio signal and to use said spectral band of the second channel of the encoded audio signal as a spectral band of the second channel of the intermediate audio signal, if the dual-mono encoding was used, and to generate a spectral band of the first channel of the intermediate audio signal based on said spectral band of the first channel of the encoded audio signal and based on said spectral band of the second channel of the encoded audio signal, and to generate a spectral band of the second channel of the intermediate audio signal based on said spectral band of the first channel of the encoded audio signal and based on said spectral band of the second channel of the encoded audio signal, if the mid-side encoding was used.

25. An apparatus according to claim 23, wherein the decoding unit (210) is configured to determine for each spectral band of said plurality of spectral bands, whether said spectral band of the first channel of the encoded audio signal and said spectral band of the second channel of the encoded audio signal was encoded using dual-mono encoding or using mid-side encoding, wherein the decoding unit (210) is configured to obtain said spectral band of the second channel of the encoded audio signal by reconstructing said spectral band of the second channel, wherein, if mid-side encoding was used, said spectral band of the first channel of the encoded audio signal is a spectral band of a mid signal, and said spectral band of the second channel of the encoded audio signal is a spectral band of a side signal, wherein, if mid-side encoding was used, the decoding unit (210) is configured to reconstruct said spectral band of the side signal depending on a correction factor for said spectral band of the side signal and depending on a spectral band of a previous mid signal, which corresponds to said spectral band of said mid signal, wherein the previous mid signal precedes said mid signal in time.
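A minimal sketch of the decoder-side band-wise decision of claims 23 and 24; the 1/sqrt(2) inverse M/S weighting, the helper name decode_bands, and the way the de-normalization value is applied are assumptions of this sketch.

```python
import numpy as np

SQRT_HALF = np.sqrt(0.5)

def decode_bands(ch1_bands, ch2_bands, ms_flags, denorm_gain):
    """Band-wise decoding as in claims 23/24: pass bands through for dual-mono,
    invert the M/S combination otherwise. Applying the de-normalization value
    as a plain gain on the first output channel is an assumption."""
    out1, out2 = [], []
    for c1, c2, is_ms in zip(ch1_bands, ch2_bands, ms_flags):
        if is_ms:
            out1.append(SQRT_HALF * (c1 + c2))   # L = (M + S) / sqrt(2)
            out2.append(SQRT_HALF * (c1 - c2))   # R = (M - S) / sqrt(2)
        else:
            out1.append(np.asarray(c1).copy())
            out2.append(np.asarray(c2).copy())
    out1 = [denorm_gain * band for band in out1]  # undo the global ILD scaling
    return out1, out2
```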
26. An apparatus according to claim 25, wherein, if mid-side encoding was used, the decoding unit (210) is configured to reconstruct said spectral band of the side signal, by reconstructing spectral values of said spectral band of the side signal according to

S_i = N_i + facDmx_fb · prevDmx_i

wherein S_i indicates the spectral values of said spectral band of the side signal, wherein prevDmx_i indicates spectral values of the spectral band of said previous mid signal, wherein N_i indicates spectral values of a noise filled spectrum, wherein facDmx_fb is defined according to

facDmx_fb = …

wherein correction_factor_fb is said correction factor for said spectral band of the side signal, wherein E_N_fb is an energy of the noise-filled spectrum, wherein E_prevDmx_fb is an energy of said spectral band of said previous mid signal, and wherein ε = 0, or wherein 0.1 > ε > 0.

27. An apparatus according to one of claims 23 to 26, wherein the de-normalizer (220) is configured to modify, depending on the de-normalization value, the plurality of spectral bands of at least one of the first channel and the second channel of the intermediate audio signal to obtain the first channel and the second channel of the decoded audio signal.

28. An apparatus according to one of claims 23 to 26, wherein the de-normalizer (220) is configured to modify, depending on the de-normalization value, the plurality of spectral bands of at least one of the first channel and the second channel of the intermediate audio signal to obtain a de-normalized audio signal, wherein the apparatus furthermore comprises a postprocessing unit (230) and a transform unit (235), and wherein the postprocessing unit (230) is configured to conduct at least one of decoder-side temporal noise shaping and decoder-side frequency domain noise shaping on the de-normalized audio signal to obtain a postprocessed audio signal, wherein the transform unit (235) is configured to transform the postprocessed audio signal from a spectral domain to a time domain to obtain the first channel and the second channel of the decoded audio signal.

29. An apparatus according to one of claims 23 to 26, wherein the apparatus further comprises a transform unit (215) configured to transform the intermediate audio signal from a spectral domain to a time domain, wherein the de-normalizer (220) is configured to modify, depending on the de-normalization value, at least one of the first channel and the second channel of the intermediate audio signal being represented in a time domain to obtain the first channel and the second channel of the decoded audio signal.

30. An apparatus according to one of claims 23 to 26, wherein the apparatus further comprises a transform unit (215) configured to transform the intermediate audio signal from a spectral domain to a time domain, wherein the de-normalizer (220) is configured to modify, depending on the de-normalization value, at least one of the first channel and the second channel of the intermediate audio signal being represented in a time domain to obtain a de-normalized audio signal, wherein the apparatus further comprises a postprocessing unit (235) being configured to process the de-normalized audio signal, being a perceptually whitened audio signal, to obtain the first channel and the second channel of the decoded audio signal.
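A minimal sketch of the side-band reconstruction of claims 25 and 26, assuming an additive combination of the noise-filled spectrum and a scaled copy of the previous mid band; since the facDmx_fb expression is not recoverable from the text above, the scale used here (an energy target of correction_factor_fb · E_prevDmx_fb) is an assumption.

```python
import numpy as np

def reconstruct_side_band(noise_band, prev_mid_band, corr_factor, eps=1e-9):
    """Rebuild one spectral band of the side signal from the noise-filled
    spectrum N and the corresponding band of the previous frame's mid signal
    (claims 25/26). The additive form and the energy target
    corr_factor * E_prevDmx_fb are illustrative assumptions."""
    e_prev = float(np.sum(prev_mid_band ** 2))                 # E_prevDmx_fb
    fac_dmx = np.sqrt(corr_factor * e_prev / (e_prev + eps))   # assumed scale (~ sqrt(corr_factor))
    return noise_band + fac_dmx * prev_mid_band
```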
31. An apparatus according to claim 29 or 30, wherein the apparatus furthermore comprises a spectral-domain postprocessor (212) being configured to conduct decoder-side temporal noise shaping on the intermediate audio signal, wherein the transform unit (215) is configured to transform the intermediate audio signal from the spectral domain to the time domain, after decoder-side temporal noise shaping has been conducted on the intermediate audio signal.

32. An apparatus according to one of claims 23 to 31, wherein the decoding unit (210) is configured to apply decoder-side Stereo Intelligent Gap Filling on the encoded audio signal.

33. An apparatus according to one of claims 23 to 32, wherein the decoded audio signal is an audio stereo signal comprising exactly two channels.

34. A system for decoding an encoded audio signal comprising four or more channels to obtain four channels of a decoded audio signal comprising four or more channels, wherein the system comprises: a first apparatus (270) according to one of claims 23 to 32 for decoding a first channel and a second channel of the four or more channels of the encoded audio signal to obtain a first channel and a second channel of the decoded audio signal, and a second apparatus (280) according to one of claims 23 to 32 for decoding a third channel and a fourth channel of the four or more channels of the encoded audio signal to obtain a third channel and a fourth channel of the decoded audio signal.

35. A system for generating an encoded audio signal from an audio input signal and for generating a decoded audio signal from the encoded audio signal, comprising: an apparatus (310) according to one of claims 1 to 21, wherein the apparatus (310) according to one of claims 1 to 21 is configured to generate the encoded audio signal from the audio input signal, and an apparatus (320) according to one of claims 23 to 33, wherein the apparatus (320) according to one of claims 23 to 33 is configured to generate the decoded audio signal from the encoded audio signal.

36. A system for generating an encoded audio signal from an audio input signal and for generating a decoded audio signal from the encoded audio signal, comprising: a system according to claim 22, wherein the system according to claim 22 is configured to generate the encoded audio signal from the audio input signal, and a system according to claim 34, wherein the system according to claim 34 is configured to generate the decoded audio signal from the encoded audio signal.
37. A method for encoding a first channel and a second channel of an audio input signal comprising two or more channels to obtain an encoded audio signal, wherein the method comprises: determining a normalization value for the audio input signal depending on the first channel of the audio input signal and depending on the second channel of the audio input signal, determining a first channel and a second channel of a normalized audio signal by modifying, depending on the normalization value, at least one of the first channel and the second channel of the audio input signal, generating a processed audio signal having a first channel and a second channel, such that one or more spectral bands of the first channel of the processed audio signal are one or more spectral bands of the first channel of the normalized audio signal, such that one or more spectral bands of the second channel of the processed audio signal are one or more spectral bands of the second channel of the normalized audio signal, such that at least one spectral band of the first channel of the processed audio signal is a spectral band of a mid signal depending on a spectral band of the first channel of the normalized audio signal and depending on a spectral band of the second channel of the normalized audio signal, and such that at least one spectral band of the second channel of the processed audio signal is a spectral band of a side signal depending on a spectral band of the first channel of the normalized audio signal and depending on a spectral band of the second channel of the normalized audio signal, and encoding the processed audio signal to obtain the encoded audio signal.

38. A method for decoding an encoded audio signal comprising a first channel and a second channel to obtain a first channel and a second channel of a decoded audio signal comprising two or more channels, wherein the method comprises: determining for each spectral band of a plurality of spectral bands, whether said spectral band of the first channel of the encoded audio signal and said spectral band of the second channel of the encoded audio signal was encoded using dual-mono encoding or using mid-side encoding, using said spectral band of the first channel of the encoded audio signal as a spectral band of a first channel of an intermediate audio signal and using said spectral band of the second channel of the encoded audio signal as a spectral band of a second channel of the intermediate audio signal, if dual-mono encoding was used, generating a spectral band of the first channel of the intermediate audio signal based on said spectral band of the first channel of the encoded audio signal and based on said spectral band of the second channel of the encoded audio signal, and generating a spectral band of the second channel of the intermediate audio signal based on said spectral band of the first channel of the encoded audio signal and based on said spectral band of the second channel of the encoded audio signal, if mid-side encoding was used, and modifying, depending on a de-normalization value, at least one of the first channel and the second channel of the intermediate audio signal to obtain the first channel and the second channel of a decoded audio signal.

39. A computer program for implementing the method of claim 37 or 38 when being executed on a computer or signal processor.
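A short usage example tying the encoder-side sketches together, treating two random vectors as already whitened MDCT spectra; it assumes global_ild_normalize and choose_mode from the earlier blocks are defined in the same module and is purely illustrative.

```python
import numpy as np

# Illustrative wiring of the sketches above (not part of the claims).
rng = np.random.default_rng(0)
left = rng.standard_normal(1024)                       # stand-in for a whitened MDCT spectrum
right = 0.3 * left + 0.1 * rng.standard_normal(1024)   # strongly panned towards the left
left_n, right_n, ild_q = global_ild_normalize(left, right)
band_idx = np.array_split(np.arange(1024), 16)         # 16 equal spectral bands
mode = choose_mode([left_n[b] for b in band_idx], [right_n[b] for b in band_idx])
print("quantized ILD:", ild_q, "chosen mode:", mode)
```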

Documents

Application Documents

# Name Date
1 201837026922-STATEMENT OF UNDERTAKING (FORM 3) [19-07-2018(online)].pdf 2018-07-19
2 201837026922-FORM 1 [19-07-2018(online)].pdf 2018-07-19
3 201837026922-FIGURE OF ABSTRACT [19-07-2018(online)].pdf 2018-07-19
4 201837026922-DRAWINGS [19-07-2018(online)].pdf 2018-07-19
5 201837026922-DECLARATION OF INVENTORSHIP (FORM 5) [19-07-2018(online)].pdf 2018-07-19
6 201837026922-COMPLETE SPECIFICATION [19-07-2018(online)].pdf 2018-07-19
7 201837026922-FORM 18 [09-08-2018(online)].pdf 2018-08-09
8 201837026922-Proof of Right (MANDATORY) [03-10-2018(online)].pdf 2018-10-03
9 201837026922-FORM-26 [25-10-2018(online)].pdf 2018-10-25
10 201837026922-Information under section 8(2) (MANDATORY) [07-12-2018(online)].pdf 2018-12-07
11 201837026922-Information under section 8(2) (MANDATORY) [27-06-2019(online)].pdf 2019-06-27
12 201837026922-Information under section 8(2) [08-05-2020(online)].pdf 2020-05-08
13 201837026922-Information under section 8(2) [19-06-2020(online)].pdf 2020-06-19
14 201837026922-FER.pdf 2020-06-29
15 201837026922-Verified English translation [16-10-2020(online)].pdf 2020-10-16
16 201837026922-Information under section 8(2) [16-10-2020(online)].pdf 2020-10-16
17 201837026922-FORM 4(ii) [11-12-2020(online)].pdf 2020-12-11
18 201837026922-Information under section 8(2) [03-02-2021(online)].pdf 2021-02-03
19 201837026922-PETITION UNDER RULE 137 [27-03-2021(online)].pdf 2021-03-27
20 201837026922-PETITION UNDER RULE 137 [27-03-2021(online)]-1.pdf 2021-03-27
21 201837026922-OTHERS [27-03-2021(online)].pdf 2021-03-27
22 201837026922-FER_SER_REPLY [27-03-2021(online)].pdf 2021-03-27
23 201837026922-DRAWING [27-03-2021(online)].pdf 2021-03-27
24 201837026922-COMPLETE SPECIFICATION [27-03-2021(online)].pdf 2021-03-27
25 201837026922-CLAIMS [27-03-2021(online)].pdf 2021-03-27
26 201837026922-ABSTRACT [27-03-2021(online)].pdf 2021-03-27
27 201837026922-Information under section 8(2) [01-04-2021(online)].pdf 2021-04-01
28 201837026922-Information under section 8(2) [06-05-2021(online)].pdf 2021-05-06
29 201837026922-Information under section 8(2) [10-07-2021(online)].pdf 2021-07-10
30 201837026922-Information under section 8(2) [03-09-2021(online)].pdf 2021-09-03
31 201837026922-FORM 3 [05-10-2021(online)].pdf 2021-10-05
32 201837026922-Information under section 8(2) [12-04-2022(online)].pdf 2022-04-12
33 201837026922-FORM 3 [12-04-2022(online)].pdf 2022-04-12
34 201837026922-Information under section 8(2) [28-07-2022(online)].pdf 2022-07-28
35 201837026922-FORM 3 [10-10-2022(online)].pdf 2022-10-10
36 201837026922-Information under section 8(2) [09-11-2022(online)].pdf 2022-11-09
37 201837026922-Information under section 8(2) [05-01-2023(online)].pdf 2023-01-05
38 201837026922-Information under section 8(2) [30-03-2023(online)].pdf 2023-03-30
39 201837026922-FORM 3 [11-04-2023(online)].pdf 2023-04-11
40 201837026922-US(14)-HearingNotice-(HearingDate-12-10-2023).pdf 2023-09-13
41 201837026922-FORM-26 [03-10-2023(online)].pdf 2023-10-03
42 201837026922-Correspondence to notify the Controller [05-10-2023(online)].pdf 2023-10-05
43 201837026922-REQUEST FOR ADJOURNMENT OF HEARING UNDER RULE 129A [06-10-2023(online)].pdf 2023-10-06
44 201837026922-FORM 3 [11-10-2023(online)].pdf 2023-10-11
45 201837026922-US(14)-ExtendedHearingNotice-(HearingDate-08-11-2023).pdf 2023-10-12
46 201837026922-REQUEST FOR ADJOURNMENT OF HEARING UNDER RULE 129A [03-11-2023(online)].pdf 2023-11-03
47 201837026922-US(14)-ExtendedHearingNotice-(HearingDate-07-12-2023).pdf 2023-11-23
48 201837026922-Correspondence to notify the Controller [04-12-2023(online)].pdf 2023-12-04
49 201837026922-Annexure [21-12-2023(online)].pdf 2023-12-21
50 201837026922-Written submissions and relevant documents [21-12-2023(online)].pdf 2023-12-21
51 201837026922-PatentCertificate25-01-2024.pdf 2024-01-25
52 201837026922-IntimationOfGrant25-01-2024.pdf 2024-01-25

Search Strategy

1 2020-06-0312-58-58E_03-06-2020.pdf

ERegister / Renewals

3rd: 26 Feb 2024 (From 20/01/2019 - To 20/01/2020)
4th: 26 Feb 2024 (From 20/01/2020 - To 20/01/2021)
5th: 26 Feb 2024 (From 20/01/2021 - To 20/01/2022)
6th: 26 Feb 2024 (From 20/01/2022 - To 20/01/2023)
7th: 26 Feb 2024 (From 20/01/2023 - To 20/01/2024)
8th: 26 Feb 2024 (From 20/01/2024 - To 20/01/2025)
9th: 02 Jan 2025 (From 20/01/2025 - To 20/01/2026)