
Audio Encoder And Decoder Using A Frequency Domain Processor With Full Band Gap Filling And A Time Domain Processor

Abstract: An audio encoder for encoding an audio signal comprises: a first encoding processor (600) for encoding a first audio signal portion in a frequency domain, wherein the first encoding processor (600) comprises: a time frequency converter (602) for converting the first audio signal portion into a frequency domain representation having spectral lines up to a maximum frequency of the first audio signal portion; an analyzer (604) for analyzing the frequency domain representation up to the maximum frequency to determine first spectral portions to be encoded with a first spectral resolution and second spectral portions to be encoded with a second spectral resolution, the second spectral resolution being lower than the first spectral resolution; a spectral encoder (606) for encoding the first spectral portions with the first spectral resolution and for encoding the second spectral portions with the second spectral resolution; a second encoding processor (610) for encoding a second, different audio signal portion in the time domain; a controller (620) configured for analyzing the audio signal and for determining which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain; and an encoded signal former (630) for forming an encoded audio signal comprising a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion.


Patent Information

Application #
Filing Date
16 January 2017
Publication Number
18/2017
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2022-05-27
Renewal Date

Applicants

FRAUNHOFER GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
Hansastraße 27c 80686 München

Inventors

1. DISCH Sascha
Wilhelmstrasse 70 90766 Fürth
2. DIETZ Martin
Am Westpark 11 90431
3. MULTRUS Markus
Etzlaubweg 7 90469 Nürnberg
4. FUCHS Guillaume
Joseph Otto Kolb Str. 31 91088 Bubenrath
5. RAVELLI Emmanuel
Branderweg 7 91058 Erlangen
6. NEUSINGER Matthias
Bergstraße 10 91186 Rohr
7. SCHNELL Markus
Labenwolfstr. 15 90409 Nürnberg
8. SCHUBERT Benjamin
Zickstrasse 6 90429 Nürnberg
9. GRILL Bernhard
Peter Henlein Strasse 7 91207 Lauf

Specification

The present invention relates to audio signal encoding and decoding and, in particular, to audio signal processing using parallel frequency domain and time domain encoder/decoder processors.

The perceptual coding of audio signals for the purpose of data reduction for efficient storage or transmission of these signals is a widely used practice. In particular when lowest bit rates are to be achieved, the employed coding leads to a reduction of audio quality that often is primarily caused by a limitation at the encoder side of the audio signal bandwidth to be transmitted. Here, typically the audio signal is low-pass filtered such that no spectral waveform content remains above a certain pre-determined cut-off frequency.

In contemporary codecs, well-known methods exist for decoder-side signal restoration through audio signal Bandwidth Extension (BWE), e.g. Spectral Band Replication (SBR), which operates in the frequency domain, or so-called Time Domain Bandwidth Extension (TD-BWE), which is a post-processor in speech coders that operates in the time domain.

Additionally, several combined time domain/frequency domain coding concepts exist such as concepts known under the term AMR-WB+ or USAC.

All these combined time domain/frequency domain coding concepts have in common that the frequency domain coder relies on bandwidth extension technologies which introduce a band limitation into the input audio signal: the portion above a cross-over frequency or border frequency is encoded with a low resolution coding concept and synthesized on the decoder-side. Hence, such concepts mainly rely on a pre-processor technology on the encoder-side and a corresponding post-processing functionality on the decoder-side.

Typically, the time domain encoder is selected for signals that are usefully encoded in the time domain, such as speech signals, and the frequency domain encoder is selected for non-speech signals, music signals, etc. However, specifically for non-speech signals having prominent harmonics in the high frequency band, the prior art frequency domain encoders have a reduced accuracy and, therefore, a reduced audio quality, due to the fact that such prominent harmonics can only be encoded separately and parametrically or are eliminated altogether in the encoding/decoding process.

Furthermore, concepts exist in which the time domain encoding/decoding branch additionally relies on a bandwidth extension which also parametrically encodes an upper frequency range, while a lower frequency range is typically encoded using an ACELP or any other CELP-related coder, for example a speech coder. This bandwidth extension functionality increases the bitrate efficiency but, on the other hand, introduces further inflexibility due to the fact that both encoding branches, i.e., the frequency domain encoding branch and the time domain encoding branch, are band limited due to the bandwidth extension procedure or spectral band replication procedure operating above a certain crossover frequency substantially lower than the maximum frequency included in the input audio signal.

Relevant topics in the state of the art comprise:

- SBR as a post-processor to waveform decoding [1-3]

- MPEG-D USAC core switching [4]

- MPEG-H 3D IGF [5]

The following papers and patents describe methods that are considered to constitute prior art for the application:

[1] M. Dietz, L. Liljeryd, K. Kjorling and O. Kunz, "Spectral Band Replication, a novel approach in audio coding," in 112th AES Convention, Munich, Germany, 2002.

[2] S. Meltzer, R. Bohm and F. Henn, "SBR enhanced audio codecs for digital broadcasting such as "Digital Radio Mondiale" (DRM)," in 112th AES Convention, Munich, Germany, 2002.

[3] T. Ziegler, A. Ehret, P. Ekstrand and M. Lutzky, "Enhancing mp3 with SBR: Features and Capabilities of the new mp3PRO Algorithm," in 112th AES Convention, Munich, Germany, 2002.

[4] MPEG-D USAC Standard.

[5] PCT/EP2014/065109.

In MPEG-D USAC, a switchable core coder is described. However, in USAC, the band-limited core is restricted to always transmit a low-pass filtered signal. Therefore, certain music signals that contain prominent high frequency content, e.g. full-band sweeps, triangle sounds, etc., cannot be reproduced faithfully.

It is an object of the present invention to provide an improved concept for audio coding.

This object is achieved by an audio encoder of claim 1, an audio decoder of claim 11, a method of audio encoding of claim 20, a method of audio decoding of claim 21 or a computer program of claim 22.

The present invention is based on the finding that a time domain encoding/decoding processor can be combined with a frequency domain encoding/decoding processor having a gap filling functionality, where this gap filling functionality for filling spectral holes is operated over the whole band of the audio signal, or at least above a certain gap filling frequency. Importantly, the frequency domain encoding/decoding processor is in particular able to perform accurate waveform or spectral value encoding/decoding up to the maximum frequency, and not only up to a crossover frequency. Furthermore, the full-band capability of the frequency domain encoder for encoding with the high resolution allows an integration of the gap filling functionality into the frequency domain encoder.

Hence, in accordance with the present invention, by using the full-band spectral encoder/decoder processor, the problems related to the separation of the bandwidth extension on the one hand and the core coding on the other hand can be addressed and overcome by performing the bandwidth extension in the same spectral domain in which the core decoder operates. Therefore, a full rate core coder is provided which encodes and decodes the full audio signal range. This eliminates the need for a downsampler on the encoder side and an upsampler on the decoder side. Instead, the whole processing is performed in the full sampling rate or full-bandwidth domain. In order to obtain a high coding gain, the audio signal is analyzed in order to find a first set of first spectral portions which has to be encoded with a high resolution, where this first set of first spectral portions may include, in an embodiment, tonal portions of the audio signal. On the other hand, non-tonal or noisy components in the audio signal constituting a second set of second spectral portions are parametrically encoded with low spectral resolution. The encoded audio signal then only requires the first set of first spectral portions encoded in a waveform-preserving manner with a high spectral resolution and, additionally, the second set of second spectral portions encoded parametrically with a low resolution using frequency "tiles" sourced from the first set. On the decoder side, the core decoder, which is a full-band decoder, reconstructs the first set of first spectral portions in a waveform-preserving manner, i.e., without any knowledge that there is any additional frequency regeneration. However, the spectrum so generated has a lot of spectral gaps. These gaps are subsequently filled with the inventive Intelligent Gap Filling (IGF) technology by using a frequency regeneration applying parametric data on the one hand and using a source spectral range, i.e., first spectral portions reconstructed by the full rate audio decoder, on the other hand.
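The encoder-side analysis described above, splitting the spectrum into waveform-coded tonal portions and parametrically coded noisy portions, can be sketched as follows. This is only a simplified illustration and not the claimed implementation; the tonality criterion (energy relative to the average line energy) and the threshold value are chosen purely for the example:

```python
def split_spectrum(spectrum, threshold=2.0):
    """Sketch of the encoder-side split: lines whose energy exceeds
    `threshold` times the average line energy are kept for high-resolution
    waveform coding; all other lines are zeroed and only their total
    energy survives as low-resolution parametric information."""
    energies = [x * x for x in spectrum]
    avg = sum(energies) / len(energies)
    kept = [x if x * x > threshold * avg else 0.0 for x in spectrum]
    # Energy of the parametrically coded (zeroed) portion of the band.
    residual_energy = sum(e for e in energies if e <= threshold * avg)
    return kept, residual_energy
```

On the decoder side, the zeroed lines would then be regenerated by gap filling, scaled to the transmitted residual energy.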

In further embodiments, spectral portions, which are reconstructed by noise filling only rather than bandwidth replication or frequency tile filling, constitute a third set of third spectral portions. Due to the fact that the coding concept operates in a single domain for the core coding/decoding on the one hand and the frequency regeneration on the other hand, the IGF is not only restricted to fill up a higher frequency range but can fill up lower frequency ranges, either by noise filling without frequency regeneration or by frequency regeneration using a frequency tile at a different frequency range.
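The noise-filling option mentioned above, i.e., reconstruction without frequency tile filling, can be illustrated by a minimal sketch. The function name, the uniform noise source and the fixed seed are illustrative assumptions, not part of the described method:

```python
import random

def noise_fill(band, target_energy, seed=0):
    """Sketch of noise filling: bins that are zero after core decoding
    are filled with pseudo-random noise scaled so that the filled bins
    together carry the transmitted target energy."""
    rng = random.Random(seed)
    noise = [rng.uniform(-1.0, 1.0) if x == 0.0 else 0.0 for x in band]
    e = sum(v * v for v in noise)
    gain = (target_energy / e) ** 0.5 if e > 0 else 0.0
    # Core-decoded values pass through untouched; zeros get scaled noise.
    return [x if x != 0.0 else v * gain for x, v in zip(band, noise)]
```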

Furthermore, it is emphasized that an information on spectral energies, an information on individual energies or an individual energy information, an information on a survive energy or a survive energy information, an information on a tile energy or a tile energy information, or an information on a missing energy or a missing energy information may comprise not only an energy value, but also an (e.g. absolute) amplitude value, a level value or any other value from which a final energy value can be derived. Hence, the information on an energy may e.g. comprise the energy value itself, and/or a value of a level and/or of an amplitude and/or of an absolute amplitude.
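The relationship between the transmitted value kinds listed above and the final energy value can be made concrete with a small sketch; the function name, the `kind` labels and the dB convention (level relative to an energy of 1) are illustrative assumptions:

```python
def energy_from_info(value, kind):
    """Derive a final energy value from different kinds of transmitted
    information: the energy itself, an (absolute) amplitude, or a level."""
    if kind == "energy":
        return value
    if kind == "amplitude":
        return value * value           # E = |a|^2
    if kind == "level_db":
        return 10.0 ** (value / 10.0)  # level in dB relative to E = 1
    raise ValueError("unknown information kind: " + kind)
```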

A further aspect is based on the finding that the correlation situation is not only important for the source range but is also important for the target range. Furthermore, the present invention acknowledges that different correlation situations can occur in the source range and the target range. When, for example, a speech signal with high frequency noise is considered, the situation can be that the low frequency band comprising the speech signal with a small number of overtones is highly correlated between the left channel and the right channel, when the speaker is placed in the middle. The high frequency portion, however, can be strongly uncorrelated due to the fact that there might be a different high frequency noise on the left side compared to another high frequency noise, or no high frequency noise, on the right side. Thus, if a straightforward gap filling operation were performed that ignores this situation, the high frequency portion would be correlated as well, and this might generate serious spatial segregation artifacts in the reconstructed signal. In order to address this issue, parametric data for a reconstruction band or, generally, for the second set of second spectral portions which have to be reconstructed using a first set of first spectral portions is calculated to identify either a first or a second, different two-channel representation for the second spectral portion or, stated differently, for the reconstruction band. On the encoder side, a two-channel identification is, therefore, calculated for the second spectral portions, i.e., for the portions for which, additionally, energy information for reconstruction bands is calculated. A frequency regenerator on the decoder side then regenerates a second spectral portion depending on a first portion of the first set of first spectral portions, i.e., the source range, and on parametric data for the second portion such as spectral envelope energy information or any other spectral envelope data and, additionally, depending on the two-channel identification for the second portion, i.e., for the reconstruction band under consideration.

The two-channel identification is preferably transmitted as a flag for each reconstruction band and this data is transmitted from an encoder to a decoder and the decoder then decodes the core signal as indicated by preferably calculated flags for the core bands. Then, in an implementation, the core signal is stored in both stereo representations (e.g. left/right and mid/side) and, for the IGF frequency tile filling, the source tile representation is chosen to fit the target tile representation as indicated by the two-channel identification flags for the intelligent gap filling or reconstruction bands, i.e., for the target range.

It is emphasized that this procedure not only works for stereo signals, i.e., for a left channel and a right channel, but also operates for multi-channel signals. In the case of multi-channel signals, several pairs of different channels can be processed in that way, such as a left and a right channel as a first pair, a left surround channel and a right surround channel as the second pair, and a center channel and an LFE channel as the third pair. Other pairings can be determined for higher output channel formats such as 7.1, 11.1 and so on.

A further aspect is based on the finding that the audio quality of the reconstructed signal can be improved through IGF since the whole spectrum is accessible to the core encoder so that, for example, perceptually important tonal portions in a high spectral range can still be encoded by the core coder rather than by parametric substitution. Additionally, a gap filling operation is performed using frequency tiles from a first set of first spectral portions which is, for example, a set of tonal portions typically from a lower frequency range, but also from a higher frequency range if available. For the spectral envelope adjustment on the decoder side, however, the spectral portions from the first set of spectral portions located in the reconstruction band are not further post-processed, e.g. by the spectral envelope adjustment. Only the remaining spectral values in the reconstruction band, which do not originate from the core decoder, are to be envelope adjusted using envelope information. Preferably, the envelope information is a full-band envelope information accounting for the energy of the first set of first spectral portions in the reconstruction band and the second set of second spectral portions in the same reconstruction band, where the spectral values in the second set of second spectral portions are indicated to be zero and are, therefore, not encoded by the core encoder, but are parametrically coded with low resolution energy information.
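The decoder-side envelope adjustment described above, which scales only the regenerated values while leaving core-decoded values untouched, can be sketched as follows. This is a simplified illustration under the assumption of a single reconstruction band with a transmitted full-band energy; names are illustrative:

```python
def adjust_envelope(band, regenerated_mask, target_energy):
    """Scale only the regenerated (gap-filled) values in a reconstruction
    band so that the total band energy matches the transmitted full-band
    envelope energy; core-decoded values pass through unchanged."""
    core_energy = sum(x * x for x, r in zip(band, regenerated_mask) if not r)
    regen_energy = sum(x * x for x, r in zip(band, regenerated_mask) if r)
    # Missing energy = transmitted band energy minus core-decoded energy.
    missing = max(target_energy - core_energy, 0.0)
    gain = (missing / regen_energy) ** 0.5 if regen_energy > 0 else 0.0
    return [x * gain if r else x for x, r in zip(band, regenerated_mask)]
```

After adjustment, the band energy equals the transmitted envelope energy while the waveform-coded first spectral portions keep their decoded amplitudes.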

It has been found that absolute energy values, either normalized with respect to the bandwidth of the corresponding band or not normalized, are useful and very efficient in an application on the decoder side. This especially applies when gain factors have to be calculated based on a residual energy in the reconstruction band, the missing energy in the reconstruction band and frequency tile information in the reconstruction band.

Furthermore, it is preferred that the encoded bitstream covers not only energy information for the reconstruction bands but, additionally, scale factors for scale factor bands extending up to the maximum frequency. This ensures that for each reconstruction band for which a certain tonal portion, i.e., a first spectral portion, is available, this first set of first spectral portions can actually be decoded with the right amplitude. Furthermore, in addition to the scale factor for each reconstruction band, an energy for this reconstruction band is generated in an encoder and transmitted to a decoder. Furthermore, it is preferred that the reconstruction bands coincide with the scale factor bands or, in case of energy grouping, that at least the borders of a reconstruction band coincide with borders of scale factor bands.

A further aspect is based on the finding that certain impairments in audio quality can be remedied by applying a signal adaptive frequency tile filling scheme. To this end, an analysis on the encoder-side is performed in order to find the best matching source region candidate for a certain target region. A matching information identifying, for a target region, a certain source region, together with optionally some additional information, is generated and transmitted as side information to the decoder. The decoder then applies a frequency tile filling operation using the matching information. To this end, the decoder reads the matching information from the transmitted data stream or data file and accesses the source region identified for a certain reconstruction band and, if indicated in the matching information, additionally performs some processing of this source region data to generate raw spectral data for the reconstruction band. Then, this result of the frequency tile filling operation, i.e., the raw spectral data for the reconstruction band, is shaped using spectral envelope information in order to finally obtain a reconstruction band that comprises the first spectral portions, such as tonal portions, as well. These tonal portions, however, are not generated by the adaptive tile filling scheme; instead, these first spectral portions are output by the audio decoder or core decoder directly.

The adaptive spectral tile selection scheme may operate with a low granularity. In this implementation, a source range is subdivided into typically overlapping source regions and the target range or the reconstruction bands are given by non-overlapping frequency target regions. Then, similarities between each source region and each target region are determined on the encoder-side, and the best matching pair of a source region and a target region is identified by the matching information; on the decoder-side, the source region identified in the matching information is used for generating the raw spectral data for the reconstruction band.

For the purpose of obtaining a higher granularity, each source region is allowed to shift in order to obtain a certain lag where the similarities are maximum. This lag can be as fine as a frequency bin and allows an even better matching between a source region and the target region.

Furthermore, in addition to identifying a best matching pair, this correlation lag can also be transmitted within the matching information and, additionally, even a sign can be transmitted. When the sign is determined to be negative on the encoder-side, then a corresponding sign flag is also transmitted within the matching information and, on the decoder-side, the source region spectral values are multiplied by "-1" or, in a complex representation, are "rotated" by 180 degrees.
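The encoder-side matching search described above, i.e., finding the lag that maximizes the (absolute) correlation between a shifted source region and the target region, together with the sign of that correlation, can be sketched as follows. This is a simplified illustration; the circular shift and the search range are assumptions made for the example:

```python
def best_match(source, target, max_lag):
    """Sketch of the matching search: shift the source region by integer
    lags, compute a normalized correlation with the target region, and
    keep the lag with the highest absolute correlation together with the
    sign of the correlation (a negative sign would be transmitted as a
    sign flag in the matching information)."""
    best = (0, 0.0, 1)  # (lag, correlation, sign)
    n = len(target)
    for lag in range(-max_lag, max_lag + 1):
        # Circular shift for simplicity of the sketch.
        seg = [source[(i + lag) % len(source)] for i in range(n)]
        num = sum(s * t for s, t in zip(seg, target))
        den = (sum(s * s for s in seg) * sum(t * t for t in target)) ** 0.5
        c = num / den if den > 0 else 0.0
        if abs(c) > abs(best[1]):
            best = (lag, c, 1 if c >= 0 else -1)
    return best
```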

A further implementation of this invention applies a tile whitening operation. Whitening of a spectrum removes the coarse spectral envelope information and emphasizes the spectral fine structure, which is of foremost interest for evaluating tile similarity. Therefore, a frequency tile on the one hand and/or the source signal on the other hand are whitened before calculating a cross correlation measure. When only the tile is whitened using a predefined procedure, a whitening flag is transmitted, indicating to the decoder that the same predefined whitening process shall be applied to the frequency tile within IGF.
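The whitening step described above can be sketched with a simple envelope estimate; using a sliding average of line magnitudes as the coarse envelope is an assumption made for this illustration, not the predefined procedure of the method:

```python
def whiten(spectrum, half_width=2):
    """Sketch of tile whitening: estimate the coarse spectral envelope
    with a sliding average of line magnitudes and divide it out, so that
    only the spectral fine structure remains for the similarity measure."""
    n = len(spectrum)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        env = sum(abs(spectrum[j]) for j in range(lo, hi)) / (hi - lo)
        out.append(spectrum[i] / env if env > 0 else 0.0)
    return out
```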

Regarding the tile selection, it is preferred to use the lag of the correlation to spectrally shift the regenerated spectrum by an integer number of transform bins. Depending on the underlying transform, the spectral shifting may require additional corrections. In case of odd lags, the tile is additionally modulated through multiplication by an alternating temporal sequence of -1/1 to compensate for the frequency-reversed representation of every other band within the MDCT. Furthermore, the sign of the correlation result is applied when generating the frequency tile.
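The odd-lag correction described above can be sketched as follows; the zero-padding at the vacated bins is a simplification for the example:

```python
def shift_tile(tile, lag):
    """Sketch of the lag-based tile shift: the regenerated spectrum is
    shifted by an integer number of MDCT bins, and for odd lags the
    values are multiplied by an alternating +1/-1 sequence to compensate
    for the frequency-reversed representation of every other MDCT band."""
    if lag >= 0:
        shifted = tile[lag:] + [0.0] * lag
    else:
        shifted = [0.0] * (-lag) + tile[:lag]
    if lag % 2 != 0:
        # Odd lag: alternate the sign of every other bin.
        shifted = [x * (1 if i % 2 == 0 else -1) for i, x in enumerate(shifted)]
    return shifted
```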

Furthermore, it is preferred to use tile pruning and stabilization in order to make sure that artifacts created by fast changing source regions for the same reconstruction region or target region are avoided. To this end, a similarity analysis among the different identified source regions is performed and when a source tile is similar to other source tiles with a similarity above a threshold, then this source tile can be dropped from the set of potential source tiles since it is highly correlated with other source tiles. Furthermore, as a kind of tile selection stabilization, it is preferred to keep the tile order from the previous frame if none of the source tiles in the current frame correlate (better than a given threshold) with the target tiles in the current frame.
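The tile pruning described above, dropping candidate source tiles that are highly correlated with tiles already kept, can be sketched as follows. The similarity measure is passed in as a function, since the text does not fix one; the threshold value is illustrative:

```python
def prune_tiles(candidates, similarity, threshold=0.9):
    """Sketch of tile pruning: a candidate source tile is dropped when
    its similarity to an already kept tile exceeds `threshold`, since it
    is then redundant. `similarity(a, b)` is assumed to return a value
    in [0, 1]."""
    kept = []
    for tile in candidates:
        if all(similarity(tile, k) <= threshold for k in kept):
            kept.append(tile)
    return kept
```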

A further aspect is based on the finding that an improved quality and reduced bitrate, specifically for signals comprising transient portions as they occur very often in audio signals, is obtained by combining the Temporal Noise Shaping (TNS) or Temporal Tile Shaping (TTS) technology with high frequency reconstruction. The TNS/TTS processing on the encoder-side, implemented by a prediction over frequency, reconstructs the time envelope of the audio signal. Depending on the implementation, i.e., when the temporal noise shaping filter is determined within a frequency range not only covering the source frequency range but also the target frequency range to be reconstructed in a frequency regeneration decoder, the temporal envelope is not only applied to the core audio signal up to a gap filling start frequency, but is also applied to the spectral ranges of the reconstructed second spectral portions. Thus, pre-echoes or post-echoes that would occur without temporal tile shaping are reduced or eliminated. This is accomplished by applying an inverse prediction over frequency not only within the core frequency range up to a certain gap filling start frequency but also within a frequency range above the core frequency range. To this end, the frequency regeneration or frequency tile generation is performed on the decoder-side before applying the prediction over frequency. However, the prediction over frequency can be applied either before or subsequent to spectral envelope shaping, depending on whether the energy information calculation has been performed on the spectral residual values subsequent to filtering or on the (full) spectral values before envelope shaping.
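The inverse prediction over frequency described above amounts to running an all-pole synthesis filter across the frequency bins, covering both the core range and the regenerated range. A minimal sketch, with the predictor order and coefficient convention chosen purely for the example:

```python
def inverse_prediction_over_frequency(spectrum, lpc_coeffs):
    """Sketch of the TNS/TTS decoder step: an inverse prediction
    (all-pole filter) is applied across the frequency bins, so that the
    transmitted temporal envelope also shapes the gap-filled bins above
    the gap filling start frequency."""
    out = []
    for k, residual in enumerate(spectrum):
        value = residual
        # Each bin is predicted from the previously filtered bins.
        for i, a in enumerate(lpc_coeffs, start=1):
            if k - i >= 0:
                value += a * out[k - i]
        out.append(value)
    return out
```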

The TTS processing over one or more frequency tiles additionally establishes a continuity of correlation between the source range and the reconstruction range or in two adjacent reconstruction ranges or frequency tiles.

In an implementation, it is preferred to use complex TNS/TTS filtering. Thereby, the (temporal) aliasing artifacts of a critically sampled real representation, like the MDCT, are avoided. A complex TNS filter can be calculated on the encoder-side by applying not only a modified discrete cosine transform but also a modified discrete sine transform, to obtain a complex modified transform. Nevertheless, only the modified discrete cosine transform values, i.e., the real part of the complex transform, are transmitted. On the decoder-side, however, it is possible to estimate the imaginary part of the transform using MDCT spectra of preceding or subsequent frames, so that, on the decoder-side, the complex filter can again be applied in the inverse prediction over frequency and, specifically, in the prediction over the border between the source range and the reconstruction range and also over the border between frequency-adjacent frequency tiles within the reconstruction range.

The inventive audio coding system efficiently codes arbitrary audio signals at a wide range of bitrates. Whereas, for high bitrates, the inventive system converges to transparency, for low bitrates perceptual annoyance is minimized. Therefore, the main share of available bitrate is used to waveform code just the perceptually most relevant structure of the signal in the encoder, and the resulting spectral gaps are filled in the decoder with signal content that roughly approximates the original spectrum. A very limited bit budget is consumed to control the parameter driven so-called spectral Intelligent Gap Filling (IGF) by dedicated side information transmitted from the encoder to the decoder.

In further embodiments, the time domain encoding/decoding processor relies on a lower sampling rate and the corresponding bandwidth extension functionality.

In further embodiments, a cross-processor is provided in order to initialize the time domain encoder/decoder with initialization data derived from the currently processed frequency domain encoder/decoder signal. This ensures that, while the currently processed audio signal portion is processed by the frequency domain encoder, the parallel time domain encoder is initialized, so that when a switch from the frequency domain encoder to the time domain encoder takes place, this time domain encoder can start processing immediately, since all the initialization data relating to earlier signals are already there due to the cross-processor. This cross-processor is preferably applied on the encoder-side and, additionally, on the decoder-side, and preferably uses a frequency-time transform which additionally performs a very efficient downsampling from the higher output or input sampling rate to the lower time domain core coder sampling rate by selecting only a certain low band portion of the frequency domain signal together with a correspondingly reduced transform size. Thus, a sample rate conversion from the high sampling rate to the low sampling rate is performed very efficiently, and the signal obtained by the transform with the reduced transform size can then be used for initializing the time domain encoder/decoder, so that the time domain encoder/decoder is ready to immediately perform time domain encoding when this situation is signaled by a controller and the immediately preceding audio signal portion was encoded in the frequency domain.
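The resampling-by-reduced-transform idea described above can be sketched as follows. A plain inverse DFT stands in for the lapped transform (MDCT) used in practice, and the amplitude-preserving scale factor is an assumption of this illustration:

```python
import cmath

def downsample_via_spectrum(spectrum, reduced_size):
    """Sketch of the cross-processor resampling: instead of filtering
    and decimating in the time domain, only the lowest `reduced_size`
    bins of the full-rate spectrum are fed to an inverse transform of
    reduced size, directly yielding the signal at the lower core-coder
    sampling rate."""
    # Rescale so that signal amplitudes are preserved at the lower rate.
    scale = reduced_size / len(spectrum)
    low_band = [x * scale for x in spectrum[:reduced_size]]
    n = reduced_size
    # Inverse DFT of reduced size = time signal at the reduced rate.
    return [sum(low_band[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]
```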

Hence, preferred embodiments of the present invention allow a seamless switching of a perceptual audio coder comprising spectral gap filling and a time domain encoder with or without bandwidth extension.

Hence, the present invention relies on methods that are not restricted to removing the high frequency content above a cut-off frequency in the frequency domain encoder from the audio signal but rather signal-adaptively removes spectral band-pass regions leaving spectral gaps in the encoder and subsequently reconstructs these spectral gaps in the decoder. Preferably, an integrated solution such as intelligent gap filling is used that efficiently combines full-bandwidth audio coding and spectral gap filling particularly in the MDCT transform domain.

Hence, the present invention provides an improved concept for combining speech coding and a subsequent time domain bandwidth extension with a full-band wave form decoding comprising spectral gap filling into a switchable perceptual encoder/decoder.

Hence, in contrast to already existing methods, the new concept utilizes full-band audio signal wave form coding in the transform domain coder and at the same time allows a seamless switching to a speech coder preferably followed by a time domain bandwidth extension.

Further embodiments of the present invention avoid the explained problems that occur due to a fixed band limitation. The concept enables the switchable combination of a full-band wave form coder in the frequency domain equipped with a spectral gap filling and a lower sampling rate speech coder and a time domain bandwidth extension. Such a coder is capable of wave form coding the aforementioned problematic signals providing full audio bandwidth up to the Nyquist frequency of the audio input signal. Nevertheless, seamless switching between both coding strategies is guaranteed particularly by the embodiments having the cross-processor. For this seamless switching, the cross-processor represents a cross connection at both encoder and decoder between the full-band capable full-rate (input sampling rate) frequency domain encoder and the low-rate ACELP coder having a lower sampling rate to properly initialize the ACELP parameters and buffers particularly within the adaptive codebook, the LPC filter or the resampling stage, when switching from the frequency domain coder such as TCX to the time domain encoder such as ACELP.

The present invention is subsequently discussed with respect to the accompanying drawings in which:

Fig. 1a illustrates an apparatus for encoding an audio signal;

Fig. 1b illustrates a decoder for decoding an encoded audio signal matching with the encoder of Fig. 1a;

Fig. 2a illustrates a preferred implementation of the decoder;

Fig. 2b illustrates a preferred implementation of the encoder;

Fig. 3a illustrates a schematic representation of a spectrum as generated by the spectral domain decoder of Fig. 1b;

Fig. 3b illustrates a table indicating the relation between scale factors for scale factor bands and energies for reconstruction bands and noise filling information for a noise filling band;

Fig. 4a illustrates the functionality of the spectral domain encoder for applying the selection of spectral portions into the first and second sets of spectral portions;

Fig. 4b illustrates an implementation of the functionality of Fig. 4a;

Fig. 5a illustrates a functionality of an MDCT encoder;

Fig. 5b illustrates a functionality of the decoder with an MDCT technology;

Fig. 5c illustrates an implementation of the frequency regenerator;

Fig. 6 illustrates an implementation of an audio encoder;

Fig. 7a illustrates a cross-processor within the audio encoder;

Fig. 7b illustrates an implementation of an inverse or frequency-time transform additionally providing a sampling rate reduction within the cross-processor;

Fig. 8 illustrates a preferred implementation of the controller of Fig. 6;

Fig. 9 illustrates a further embodiment of the time domain encoder having bandwidth extension functionalities;

Fig. 10 illustrates a preferred usage of a preprocessor;

Fig. 11a illustrates a schematic implementation of the audio decoder;

Fig. 11b illustrates a cross-processor within the decoder for providing initialization data for the time domain decoder;

Fig. 12 illustrates a preferred implementation of the time domain decoding processor of Fig. 11a;

Fig. 13 illustrates a further implementation of the time domain bandwidth extension;

Fig. 14a illustrates a preferred implementation of an audio encoder;

Fig. 14b illustrates a preferred implementation of an audio decoder;

Fig. 14c illustrates an inventive implementation of a time domain decoder with sample rate conversion and bandwidth extension.

Fig. 6 illustrates an audio encoder for encoding an audio signal comprising a first encoding processor 600 for encoding a first audio signal portion in a frequency domain. The first encoding processor 600 comprises a time frequency converter 602 for converting the first audio signal portion into a frequency domain representation having spectral lines up to a maximum frequency of the input signal. Furthermore, the first encoding processor 600 comprises an analyzer 604 for analyzing the frequency domain representation up to the maximum frequency to determine first spectral regions to be encoded with a first spectral resolution and second spectral regions to be encoded with a second spectral resolution, the second spectral resolution being lower than the first spectral resolution. In particular, the full-band analyzer 604 determines which frequency lines or spectral values in the time frequency converter spectrum are to be encoded spectral line by spectral line and which other spectral portions are to be encoded parametrically; these latter spectral values are then reconstructed on the decoder side with the gap filling procedure. The actual encoding operation is performed by a spectral encoder 606 for encoding the first spectral regions or portions with the first spectral resolution and for parametrically encoding the second spectral regions or portions with the second spectral resolution.
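The full-band analysis described above can be sketched as follows. This is a minimal illustration only: the peak-to-mean tonality heuristic, the threshold value, and the band edges are hypothetical stand-ins, since the patent does not prescribe a particular analysis criterion for the analyzer 604.

```python
import numpy as np

def analyze_spectrum(spectrum, band_edges, tonality_threshold=4.0):
    """Split a full-band magnitude spectrum into first spectral portions
    (kept at full resolution) and second spectral portions (reduced to a
    single envelope energy per band, for gap filling at the decoder).

    Hypothetical heuristic: a band whose peak-to-mean power ratio
    exceeds the threshold is treated as tonal and fully coded.
    """
    full_res_bands, envelope = [], []
    for lo, hi in band_edges:
        power = spectrum[lo:hi] ** 2
        mean_power = power.mean() + 1e-12
        if power.max() / mean_power > tonality_threshold:
            full_res_bands.append((lo, hi))        # encode spectral lines
            envelope.append(None)
        else:
            envelope.append(float(np.sqrt(mean_power)))  # low-res envelope
    return full_res_bands, envelope
```

A tonal band (one dominant line) is then selected for line-wise coding, while a noise-like band is reduced to its envelope value.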

The audio encoder of Fig. 6 additionally comprises a second encoding processor 610 for encoding a second audio signal portion in the time domain. Additionally, the audio encoder comprises a controller 620 configured for analyzing the audio signal at an audio signal input 601 and for determining which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain. Furthermore, an encoded signal former 630, which can be implemented, for example, as a bit stream multiplexor, is provided and is configured for forming an encoded audio signal comprising a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion. Importantly, the encoded signal contains either a frequency domain representation or a time domain representation of one and the same audio signal portion, but never both.

Hence, the controller 620 makes sure that, for a single audio signal portion, only a time domain representation or a frequency domain representation is in the encoded signal. The controller 620 can accomplish this in several ways. One way would be that, for one and the same audio signal portion, both representations arrive at block 630 and the controller 620 controls the encoded signal former 630 to introduce only one of the two representations into the encoded signal. Alternatively, however, the controller 620 can control an input into the first encoding processor and an input into the second encoding processor so that, based on the analysis of the corresponding signal portion, only one of the two blocks 600 or 610 is activated to actually perform the full encoding operation and the other block is deactivated.
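The routing behavior of the controller can be sketched as below. The callables `fd_encode`, `td_encode` and `is_speech_like` are hypothetical stand-ins for blocks 600, 610 and the controller's signal analysis; the patent does not fix a particular classification criterion.

```python
def route_portions(portions, fd_encode, td_encode, is_speech_like):
    """Controller sketch: exactly one core coder runs per portion, so
    the bitstream never carries two representations of the same portion."""
    encoded = []
    for portion in portions:
        if is_speech_like(portion):
            encoded.append(("TD", td_encode(portion)))   # block 610 active
        else:
            encoded.append(("FD", fd_encode(portion)))   # block 600 active
    return encoded
```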

This deactivation can be a complete deactivation or, as illustrated with respect to, for example, Fig. 7a, only a kind of "initialization" mode where the other encoding processor is active only to receive and process initialization data in order to initialize internal memories, but no actual encoding operation is performed. This activation or deactivation can be done by a certain switch at the input, which is not illustrated in Fig. 6, or, preferably, by control lines 621 and 622. Hence, in this embodiment, the second encoding processor 610 does not output anything when the controller 620 has determined that the current audio signal portion should be encoded by the first encoding processor, but the second encoding processor is nevertheless provided with initialization data so that it is ready for an instant switch in the future. On the other hand, the first encoding processor is configured to not need any data from the past to update any internal memories and, therefore, when the current audio signal portion is to be encoded by the second encoding processor 610, the controller 620 can control the first encoding processor 600 via control line 621 to be completely inactive. This means that the first encoding processor 600 does not need to be in an initialization state or waiting state but can be in a complete deactivation state. This is preferable particularly for mobile devices, where power consumption and, therefore, battery life is an issue.

In a further specific implementation of the second encoding processor operating in the time domain, the second encoding processor comprises a downsampler 900 or sampling rate converter for converting the audio signal portion into a representation with a lower sampling rate, wherein the lower sampling rate is lower than the sampling rate at the input into the first encoding processor. This is illustrated in Fig. 9. In particular, when the input audio signal comprises a low band and a high band, it is preferred that the lower sampling rate representation at the output of block 900 only has the low band of the input audio signal portion, and this low band is then encoded by a time domain low band encoder 910 which is configured for time-domain encoding the lower sampling rate representation provided by block 900. Furthermore, a time domain bandwidth extension encoder 920 is provided for parametrically encoding the high band. To this end, the time domain bandwidth extension encoder 920 receives at least the high band of the input audio signal, or the low band and the high band of the input audio signal.
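The split into a downsampled low band plus a parametric high-band description can be sketched as follows. The moving-average low-pass, the decimation factor, and the single energy parameter are illustrative assumptions, not the filters or the parameter set of the codec; a real implementation would use proper filter designs and an ACELP low band coder.

```python
import numpy as np

def td_branch_encode(portion, factor=2):
    """Sketch of the Fig. 9 branch: a crude low-pass plus decimation
    stands in for the downsampler 900, and the high band is described
    only parametrically (here by its energy), as the time domain
    bandwidth extension encoder 920 would."""
    kernel = np.ones(factor) / factor
    low_band = np.convolve(portion, kernel, mode="same")[::factor]  # lower rate
    # residual above the low band, summarized by one energy parameter
    reconstructed_low = np.repeat(low_band, factor)[: len(portion)]
    high_band = portion - reconstructed_low
    bwe_params = {"high_band_energy": float(np.mean(high_band ** 2))}
    return low_band, bwe_params
```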

In a further embodiment of the present invention, the audio encoder additionally comprises, although not illustrated in Fig. 6 but illustrated in Fig. 10, a preprocessor 1000 configured for preprocessing the first audio signal portion and the second audio signal portion. In an embodiment, this preprocessor comprises a prediction analyzer for determining prediction coefficients. This prediction analyzer can be implemented as an LPC (linear prediction coding) analyzer for determining LPC coefficients, although other analyzers can be implemented as well. Furthermore, the preprocessor, which is also illustrated in Fig. 14a, comprises a prediction coefficient quantizer 1010, wherein this quantizer receives prediction coefficient data from the prediction analyzer 1002, also illustrated in Fig. 14a.

Furthermore, the preprocessor additionally comprises an entropy coder for generating an encoded version of the quantized prediction coefficients. It is important to note that the encoded signal former 630, or its specific implementation, i.e., the bit stream multiplexor 613, makes sure that the encoded version of the quantized prediction coefficients is included in the encoded audio signal 632. Preferably, the LPC coefficients are not quantized directly but are converted into an ISF representation, for example, or any other representation better suited for quantization. This conversion is preferably performed either by the "determine LPC coefficients" block 1002 or within the block 1010 for quantizing the LPC coefficients.
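The prediction analysis itself is left open by the text; the autocorrelation method with the Levinson-Durbin recursion is one standard way to realize such an analyzer and is sketched below. The frame, the order, and the absence of windowing are simplifying assumptions; the conversion to ISF and the quantization stage are not shown.

```python
import numpy as np

def lpc_coefficients(frame, order=4):
    """Autocorrelation / Levinson-Durbin LPC analysis, one conventional
    realization of a prediction analyzer such as block 1002.
    Returns the coefficient vector [1, a1, ..., a_order]."""
    r = np.array([np.dot(frame[: len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                               # reflection coefficient
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1]   # old values on the RHS
        err *= (1.0 - k * k)                         # updated prediction error
    return a
```

For a decaying exponential (the impulse response of a one-pole filter with pole 0.9), the first-order analysis recovers a1 close to -0.9.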

Furthermore, the preprocessor may comprise a resampler 1004 for resampling an audio input signal at an input sampling rate into a lower sampling rate for the time domain encoder. When the time domain encoder is an ACELP encoder having a certain ACELP sampling rate, the downsampling is preferably performed to either 12.8 kHz or 16 kHz. The input sampling rate can be any of a number of sampling rates, such as 32 kHz or an even higher rate. On the other hand, the sampling rate of the time domain encoder is predetermined by certain restrictions, and the resampler 1004 performs this resampling and outputs the lower sampling rate representation of the input signal. Hence, the resampler 1004 can perform a similar functionality and can even be one and the same element as the downsampler 900 illustrated in the context of Fig. 9.
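Such a rate conversion, e.g. from a 32 kHz input to the 16 kHz ACELP rate, can be sketched with simple linear interpolation. This is an assumption for illustration only; a production encoder would use a proper polyphase low-pass design to avoid aliasing.

```python
import numpy as np

def resample_linear(signal, rate_in, rate_out):
    """Linear-interpolation resampler sketch for a stage like the
    resampler 1004: maps rate_in samples/s to rate_out samples/s."""
    n_out = int(len(signal) * rate_out / rate_in)
    positions = np.arange(n_out) * rate_in / rate_out  # fractional indices
    return np.interp(positions, np.arange(len(signal)), signal)
```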

Furthermore, it is preferred to apply a pre-emphasis in the pre-emphasis block 1005 in Fig. 14a. Pre-emphasis processing is well known in the art of time domain encoding and is described in the literature on AMR-WB+ processing. The pre-emphasis is particularly configured for compensating a spectral tilt and, therefore, allows a better calculation of the LPC parameters at a given LPC order.
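The pre-emphasis is a first-order high-pass of the form y[n] = x[n] - alpha * x[n-1]. The sketch below uses 0.68, the factor used in AMR-WB, purely as an illustrative default; the text does not fix a value.

```python
import numpy as np

def pre_emphasis(x, alpha=0.68):
    """First-order pre-emphasis y[n] = x[n] - alpha * x[n-1], flattening
    the spectral tilt before LPC analysis. alpha = 0.68 is the AMR-WB
    factor, used here only as an assumed default."""
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]                     # no previous sample for the first value
    y[1:] = x[1:] - alpha * x[:-1]
    return y
```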

Claims

1. Audio encoder for encoding an audio signal, comprising:

a first encoding processor (600) for encoding a first audio signal portion in a frequency domain, wherein the first encoding processor (600) comprises:

a time frequency converter (602) for converting the first audio signal portion into a frequency domain representation having spectral lines up to a maximum frequency of the first audio signal portion;

an analyzer (604) for analyzing the frequency domain representation up to the maximum frequency to determine first spectral portions to be encoded with a first spectral resolution and second spectral portions to be encoded with a second spectral resolution, the second spectral resolution being lower than the first spectral resolution, wherein the analyzer (604) is configured to determine a first spectral portion (306) from the first spectral portions, the first spectral portion being placed, with respect to frequency, between two second spectral portions (307a, 307b) from the second spectral portions;

a spectral encoder (606) for encoding the first spectral portions with the first spectral resolution and for encoding the second spectral portions with the second spectral resolution, wherein the spectral encoder comprises a parametric coder for calculating spectral envelope information having the second spectral resolution from the second spectral portions;

a second encoding processor (610) for encoding a second different audio signal portion in the time domain;

a controller (620) configured for analyzing the audio signal and for determining which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain; and

an encoded signal former (630) for forming an encoded audio signal comprising a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion.

2. Audio encoder of claim 1, wherein the input signal has a high band and a low band,

wherein the second encoding processor (610) comprises a sampling rate converter (900) for converting the second audio signal portion to a lower sampling rate representation, the lower sampling rate being lower than a sampling rate of the audio signal, wherein the lower sampling rate representation does not include the high band of the input signal;

a time domain low band encoder (910) for time domain encoding the lower sampling rate representation; and

a time domain bandwidth extension encoder (920) for parametrically encoding the high band.

3. Audio encoder of claim 1 or 2, further comprising:

a preprocessor (1000) configured for preprocessing the first audio signal portion and the second audio signal portion,

wherein the preprocessor comprises:

a prediction analyzer (1002) for determining prediction coefficients; and

wherein the second encoding processor comprises:

a prediction coefficient quantizer (1010) for generating a quantized version of the prediction coefficients; and

an entropy coder for generating an encoded version of the quantized prediction coefficients,

wherein the encoded signal former (630) is configured for introducing the encoded version into the encoded audio signal.

4. Audio encoder of claims 1, 2 or 3,

wherein the preprocessor (1000) comprises a resampler (1004) for resampling the audio signal to a sampling rate of the second encoding processor; and

wherein the prediction analyzer is configured to determine the prediction coefficients using a resampled audio signal, or

wherein the preprocessor (1000) further comprises a long term prediction analysis stage (1006) for determining one or more long term prediction parameters for the first audio signal portion.

5. Audio encoder of one of the preceding claims, further comprising a cross-processor (700) for calculating, from the encoded spectral representation of the first audio signal portion, initialization data of the second encoding processor (610), so that the second encoding processor (610) is initialized to encode the second audio signal portion immediately following the first audio signal portion in time in the audio signal.

6. Audio encoder of claim 5, wherein the cross-processor (700) comprises:

a spectral decoder (701) for calculating a decoded version of the first encoded signal portion;

a delay stage (707) for feeding a delayed version of the decoded version into a de-emphasis stage (617) of the second encoding processor for initialization;

a weighted prediction coefficient analysis filtering block (708) for feeding a filter output into a codebook determinator (613) of the second encoding processor (610) for initialization;

an analysis filtering stage (706) for filtering the decoded version or a pre-emphasized (709) version and for feeding a filter residual into an adaptive codebook determinator (612) of the second encoding processor for initialization; or

a pre-emphasis filter (709) for filtering the decoded version and for feeding a delayed or pre-emphasized version to a synthesis filtering stage (616) of the second encoding processor (610) for initialization.

7. Audio encoder of one of the preceding claims,

wherein the analyzer (604) is configured to perform a temporal tile shaping or temporal noise shaping analysis or an operation of setting to zero spectral values in the second spectral portions,

wherein the first encoding processor (600) is configured to perform a shaping (606a) of spectral values of the first spectral portions using prediction coefficients (1010) derived from the first audio signal portion, and wherein the first encoding processor (600) is furthermore configured to perform a quantization and entropy coding operation (606b) of shaped spectral values of the first spectral portions, and

wherein spectral values of the second spectral portions are set to zero.

8. Audio encoder of claim 7, further comprising a cross-processor (700), wherein the cross-processor (700) comprises:

a noise shaper (703) for shaping quantized spectral values of the first spectral portions using LPC coefficients (1010) derived from the first audio signal portion;

a spectral decoder (704, 705) for decoding the spectrally shaped spectral portions of the first spectral portion with a high spectral resolution and for synthesizing second spectral portions using a parametric representation of the second spectral portions and at least a decoded first spectral portion to obtain a decoded spectral representation;

a frequency-time converter (702) for converting the spectral representation into a time domain to obtain a decoded first audio signal portion, wherein a sampling rate associated with the decoded first audio signal portion is different from a sampling rate of the audio signal, and a sampling rate associated with an output signal of the frequency-time converter (702) is different from a sampling rate of the audio signal input into the time-frequency converter (602).

9. Audio encoder of one of the preceding claims,

wherein the second encoding processor comprises at least one block of the following group of blocks:

a prediction analysis filter (611);

an adaptive codebook stage (612);

an innovative codebook stage (614);

an estimator (613) for estimating an innovative codebook entry;

an ACELP/gain coding stage (615);

a prediction synthesis filtering stage (616);

a de-emphasis stage (617); and

a bass post-filter analysis stage (618).

10. Audio encoder of one of the preceding claims,

wherein the time domain encoding processor has an associated second sampling rate,

wherein the frequency domain encoding processor has associated therewith a first sampling rate being higher than the second sampling rate, wherein the audio encoder further comprises a cross-processor (700) for calculating, from the encoded spectral representation of the first audio signal portion, initialization data of the second encoding processor,

wherein the cross-processor comprises a frequency-time converter (702) for generating a time domain signal at the second sampling rate,

wherein the frequency-time converter (702) comprises:

a selector (726) for selecting a low portion of a spectrum input into the frequency-time converter in accordance with a ratio of the first sampling rate and the second sampling rate, the ratio being smaller than 1;

a transform processor (720) having a transform length being smaller than a transform length of the time-frequency converter (602); and

a synthesis windower (712) for windowing using a window having a smaller number of window coefficients compared to a window used by the time frequency converter (602).
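The reduced frequency-time transform of the cross-processor, selecting only the low fraction of the spectrum and applying a shorter inverse transform so that the output emerges directly at the lower sampling rate, can be sketched as below. An rFFT/irFFT pair is used here for clarity in place of the codec's MDCT with its synthesis window; this is an illustrative assumption, not the actual transform chain.

```python
import numpy as np

def inverse_transform_at_lower_rate(spectrum, ratio):
    """Keep only the low `ratio` fraction of the bins (the selector's
    role) and run a shorter inverse transform, yielding the time signal
    directly at the lower sampling rate."""
    n_full = 2 * (len(spectrum) - 1)          # original transform length
    n_short = int(n_full * ratio)             # smaller transform length
    low_bins = spectrum[: n_short // 2 + 1]   # low-frequency bins only
    # rescale so amplitudes survive the shorter transform
    return np.fft.irfft(low_bins, n=n_short) * ratio
```

For a low-frequency cosine this reproduces the signal decimated by the same ratio, which is exactly the point of the reduced transform length.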

11. Audio decoder for decoding an encoded audio signal, comprising:

a first decoding processor (1120) for decoding a first encoded audio signal portion in a frequency domain, the first decoding processor (1120) comprising:

a spectral decoder (1122) for decoding first spectral portions with a high spectral resolution and for synthesizing second spectral portions using a parametric representation of the second spectral portions and at least a decoded first spectral portion to obtain a decoded spectral representation, wherein the spectral decoder (1122) is configured to generate the first decoded representation so that a first spectral portion (306) is placed with respect to frequency between two second spectral portions (307a, 307b); and

a frequency-time converter (1120) for converting the decoded spectral representation into a time domain to obtain a decoded first audio signal portion;

a second decoding processor (1140) for decoding a second encoded audio signal portion in the time domain to obtain a decoded second audio signal portion; and

a combiner (1160) for combining the decoded first audio signal portion and the decoded second audio signal portion to obtain a decoded audio signal.

12. Audio decoder of claim 11, wherein the second decoding processor comprises:

a time domain low band decoder (1200) for decoding a low band time domain signal;

an upsampler (1210) for upsampling the low band time domain signal;

a time domain bandwidth extension decoder (1220) for synthesizing a high band of a time domain output signal; and

a mixer (1230) for mixing a synthesized high band of the time domain signal and an upsampled low band time domain signal.

13. Audio decoder of claim 12,

wherein the upsampler (1210) comprises an analysis filterbank (1471) operating at a first time domain low band decoder sampling rate and a synthesis filterbank (1473) operating at a second output sampling rate being higher than the first time domain low band sampling rate.

14. Audio decoder of claim 12 or 13,

wherein the time domain low band decoder (1200) comprises a residual signal decoder (1149, 1141, 1142) and a synthesis filter (1143) for filtering a residual signal using synthesis filter coefficients (1145),

wherein the time domain bandwidth extension decoder (1220) is configured to upsample the residual signal (1221) and to process (1222) an upsampled residual signal using a non-linear operation to obtain a high band residual signal, and to spectrally shape (1223) the high band residual signal to obtain the synthesized high band.
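The upsample / non-linearity / shaping chain of this bandwidth extension decoder can be sketched as follows. The zero-stuffing upsampler, the absolute-value non-linearity, and the crude high-pass shaping are illustrative choices; the codec derives its spectral shaping from transmitted parameters.

```python
import numpy as np

def synthesize_high_band(residual, factor=2, shape_len=4):
    """Sketch of a time domain bandwidth extension decoder: upsample
    the low-band excitation, apply a non-linearity to spread energy
    upward, then spectrally shape the result."""
    upsampled = np.zeros(len(residual) * factor)
    upsampled[::factor] = residual                    # upsample step (1221)
    spread = np.abs(upsampled)                        # non-linear op (1222)
    smooth = np.convolve(spread, np.ones(shape_len) / shape_len, mode="same")
    return spread - smooth                            # crude shaping (1223)
```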

15. Audio decoder of one of the claims 11 to 14,

wherein the first decoding processor (1120) comprises an adaptive long term prediction post-filter (1420) for post-filtering the decoded first signal portion, wherein the filter (1420) is controlled by one or more long term prediction parameters included in the encoded audio signal.

16. Audio decoder of one of claims 11 to 15, further comprising:

a cross-processor (1170) for calculating, from the decoded spectral representation of the first encoded audio signal portion, initialization data of the second decoding processor (1140), so that the second decoding processor (1140) is initialized to decode the encoded second audio signal portion following in time the first audio signal portion in the encoded audio signal.

17. Audio decoder of claim 16, wherein the cross-processor further comprises:

a frequency-time converter (1171) operating at a lower sampling rate than the frequency-time converter (1124) of the first decoding processor (1120) to obtain a further decoded first signal portion in the time domain,

wherein the signal output by the frequency-time converter (1171) has a second sampling rate being lower than the first sampling rate associated with an output of the frequency-time converter (1124) of the first decoding processor,

wherein the additional frequency-time converter (1171) comprises a selector (726) for selecting a low portion of a spectrum input into the additional frequency-time converter (1171) in accordance with a ratio of the first sampling rate and the second sampling rate, the ratio being smaller than 1;

a transform processor (720) having a transform length being smaller than a transform length (710) of the time-frequency converter (1124); and

a synthesis windower (722) using a window having a smaller number of coefficients compared to a window used by the frequency-time converter (1124).

18. Audio decoder of one of claims 16 and 17,

wherein the cross-processor (1170) comprises:

a delay stage (1172) for delaying the further decoded first signal portion and for feeding a delayed version of the decoded first signal portion into a de-emphasis stage (1144) of the second decoding processor for initialization;

a pre-emphasis filter (1173) and a delay stage (1175) for filtering and delaying the further decoded first signal portion and for feeding a delay stage output into a prediction synthesis filter (1143) of the second decoding processor for initialization;

a prediction analysis filter (1174) for generating a prediction residual signal from the further decoded first signal portion or a pre-emphasized (1173) further decoded first signal portion and for feeding a prediction residual signal into a codebook synthesizer (1141) of the second decoding processor (1200); or

a switch (1480) for feeding the further decoded first signal portion into an analysis stage (1471 ) of a resampler (1210) of the second decoding processor for initialization.

19. Audio decoder of one of claims 11 to 18,

wherein the second decoding processor (1200) comprises at least one block of the group of blocks comprising:

an ACELP decoding stage for decoding gains and an innovative codebook;

an adaptive codebook synthesis stage (1141);

an ACELP post-processor (1142);

a prediction synthesis filter (1143); and

a de-emphasis stage (1144).

20. Method of encoding an audio signal, comprising:

first encoding (600) a first audio signal portion in a frequency domain, wherein the first encoding (600) comprises:

converting (602) the first audio signal portion into a frequency domain representation having spectral lines up to a maximum frequency of the first audio signal portion;

analyzing (604) the frequency domain representation up to the maximum frequency to determine first spectral portions to be encoded with a first spectral resolution and second spectral portions to be encoded with a second spectral resolution, the second spectral resolution being lower than the first spectral resolution, wherein the analyzing (604) determines a first spectral portion (306) from the first spectral portions, the first spectral portion being placed, with respect to frequency, between two second spectral portions (307a, 307b) from the second spectral portions;

encoding (606) the first spectral portions with the first spectral resolution and encoding the second spectral portions with the second spectral resolution, wherein encoding the second spectral portions comprises calculating, from the second spectral portions, spectral envelope information having the second spectral resolution;

second encoding (610) a second different audio signal portion in the time domain;

analyzing (620) the audio signal and determining which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain; and

forming (630) an encoded audio signal comprising a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion.

21. Method of decoding an encoded audio signal, comprising:

first decoding (1120) a first encoded audio signal portion in a frequency domain, the first decoding (1120) comprising:

decoding (1122) first spectral portions with a high spectral resolution and synthesizing second spectral portions using a parametric representation of the second spectral portions and at least a decoded first spectral portion to obtain a decoded spectral representation, wherein decoding (1122) comprises generating the first decoded representation so that a first spectral portion (306) is placed with respect to frequency between two second spectral portions (307a, 307b); and

converting (1120) the decoded spectral representation into a time domain to obtain a decoded first audio signal portion;

second decoding (1140) a second encoded audio signal portion in the time domain to obtain a decoded second audio signal portion; and

combining (1160) the decoded first audio signal portion and the decoded second audio signal portion to obtain a decoded audio signal.

22. Computer program for performing, when running on a computer or a processor, the method of claim 20 or claim 21.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 201737001634-Information under section 8(2) [01-06-2022(online)].pdf 2022-06-01
2 Form 5 [16-01-2017(online)].pdf 2017-01-16
3 201737001634-IntimationOfGrant27-05-2022.pdf 2022-05-27
4 Form 3 [16-01-2017(online)].pdf 2017-01-16
5 Form 20 [16-01-2017(online)].pdf 2017-01-16
6 201737001634-PatentCertificate27-05-2022.pdf 2022-05-27
7 Drawing [16-01-2017(online)].pdf 2017-01-16
8 201737001634-Annexure [02-05-2022(online)].pdf 2022-05-02
9 Description(Complete) [16-01-2017(online)].pdf_99.pdf 2017-01-16
10 201737001634-FORM 3 [02-05-2022(online)].pdf 2022-05-02
11 Description(Complete) [16-01-2017(online)].pdf 2017-01-16
12 201737001634-Information under section 8(2) [02-05-2022(online)].pdf 2022-05-02
13 Form 18 [25-01-2017(online)].pdf 2017-01-25
14 201737001634-Written submissions and relevant documents [02-05-2022(online)].pdf 2022-05-02
15 Other Patent Document [01-04-2017(online)].pdf 2017-04-01
16 201737001634-FORM-26 [13-04-2022(online)].pdf 2022-04-13
17 201737001634-Correspondence to notify the Controller [09-04-2022(online)].pdf 2022-04-09
18 Other Patent Document [04-05-2017(online)].pdf 2017-05-04
19 201737001634-Information under section 8(2) (MANDATORY) [25-11-2017(online)].pdf 2017-11-25
20 201737001634-US(14)-ExtendedHearingNotice-(HearingDate-15-04-2022).pdf 2022-03-09
21 201737001634-Information under section 8(2) (MANDATORY) [19-12-2017(online)].pdf 2017-12-19
22 201737001634-REQUEST FOR ADJOURNMENT OF HEARING UNDER RULE 129A [07-03-2022(online)].pdf 2022-03-07
23 201737001634-FORM-26 [15-02-2018(online)].pdf 2018-02-15
24 201737001634-US(14)-HearingNotice-(HearingDate-15-03-2022).pdf 2022-02-25
25 201737001634-Information under section 8(2) (MANDATORY) [04-05-2018(online)].pdf 2018-05-04
26 201737001634-Information under section 8(2) [28-01-2022(online)].pdf 2022-01-28
27 201737001634-FORM 3 [03-11-2021(online)].pdf 2021-11-03
28 201737001634-Information under section 8(2) (MANDATORY) [06-06-2018(online)].pdf 2018-06-06
29 201737001634-Information under section 8(2) (MANDATORY) [28-08-2018(online)].pdf 2018-08-28
30 201737001634-Information under section 8(2) [03-11-2021(online)].pdf 2021-11-03
31 201737001634-Information under section 8(2) (MANDATORY) [13-11-2018(online)].pdf 2018-11-13
32 201737001634-Information under section 8(2) [29-10-2021(online)].pdf 2021-10-29
33 201737001634-Information under section 8(2) [09-09-2021(online)].pdf 2021-09-09
34 201737001634-Information under section 8(2) (MANDATORY) [03-12-2018(online)].pdf 2018-12-03
35 201737001634-Information under section 8(2) (MANDATORY) [18-02-2019(online)].pdf 2019-02-18
36 201737001634-Information under section 8(2) [10-07-2021(online)].pdf 2021-07-10
37 201737001634-FORM 3 [04-05-2021(online)].pdf 2021-05-04
38 201737001634-Information under section 8(2) (MANDATORY) [09-05-2019(online)].pdf 2019-05-09
39 201737001634-Information under section 8(2) (MANDATORY) [25-05-2019(online)].pdf 2019-05-25
40 201737001634-Information under section 8(2) [21-04-2021(online)].pdf 2021-04-21
41 201737001634-Information under section 8(2) (MANDATORY) [23-07-2019(online)].pdf 2019-07-23
42 201737001634-Information under section 8(2) [05-02-2021(online)].pdf 2021-02-05
43 201737001634-Information under section 8(2) (MANDATORY) [10-09-2019(online)].pdf 2019-09-10
44 201737001634-Information under section 8(2) [05-11-2020(online)].pdf 2020-11-05
45 201737001634-FER.pdf 2019-09-30
46 201737001634-Information under section 8(2) [28-09-2020(online)].pdf 2020-09-28
47 201737001634-Information under section 8(2) [29-06-2020(online)].pdf 2020-06-29
48 201737001634-Information under section 8(2) (MANDATORY) [30-12-2019(online)].pdf 2019-12-30
49 201737001634-FORM 3 [06-06-2020(online)].pdf 2020-06-06
50 201737001634-FORM 4(ii) [23-03-2020(online)].pdf 2020-03-23
51 201737001634-ABSTRACT [30-05-2020(online)].pdf 2020-05-30
52 201737001634-Information under section 8(2) [09-05-2020(online)].pdf 2020-05-09
53 201737001634-CLAIMS [30-05-2020(online)].pdf 2020-05-30
54 201737001634-OTHERS [30-05-2020(online)].pdf 2020-05-30
55 201737001634-DRAWING [30-05-2020(online)].pdf 2020-05-30
56 201737001634-FER_SER_REPLY [30-05-2020(online)].pdf 2020-05-30
51 201737001634-Information under section 8(2) [02-05-2022(online)].pdf 2022-05-02
52 Description(Complete) [16-01-2017(online)].pdf_99.pdf 2017-01-16
52 201737001634-FORM 3 [02-05-2022(online)].pdf 2022-05-02
53 Drawing [16-01-2017(online)].pdf 2017-01-16
53 201737001634-Annexure [02-05-2022(online)].pdf 2022-05-02
54 Form 20 [16-01-2017(online)].pdf 2017-01-16
54 201737001634-PatentCertificate27-05-2022.pdf 2022-05-27
55 201737001634-IntimationOfGrant27-05-2022.pdf 2022-05-27
55 Form 3 [16-01-2017(online)].pdf 2017-01-16
56 201737001634-Information under section 8(2) [01-06-2022(online)].pdf 2022-06-01
56 Form 5 [16-01-2017(online)].pdf 2017-01-16

Search Strategy

1 SEARCHSTRATEGY_06-09-2019.pdf

ERegister / Renewals

3rd: 01 Aug 2022 (24/07/2017 to 24/07/2018)
4th: 01 Aug 2022 (24/07/2018 to 24/07/2019)
5th: 01 Aug 2022 (24/07/2019 to 24/07/2020)
6th: 01 Aug 2022 (24/07/2020 to 24/07/2021)
7th: 01 Aug 2022 (24/07/2021 to 24/07/2022)
8th: 01 Aug 2022 (24/07/2022 to 24/07/2023)
9th: 29 Jun 2023 (24/07/2023 to 24/07/2024)
10th: 11 Jul 2024 (24/07/2024 to 24/07/2025)
11th: 24 Jul 2025 (24/07/2025 to 24/07/2026)