Apparatuses And Methods For Encoding Or Decoding An Audio Multi Channel Signal Using Spectral Domain Resampling

Abstract: An apparatus for encoding a multi-channel signal comprises a time-spectral converter (1000) for converting sequences of blocks of sampling values of at least two channels into sequences of blocks of spectral values, wherein a block of sampling values has an input sampling rate and a block of spectral values has spectral values up to a maximum input frequency (1211) related to the input sampling rate; a multi-channel processor (1010) for applying a joint multi-channel processing to the sequences of blocks or to resampled sequences of blocks to obtain at least one result sequence of blocks of spectral values; a spectral domain resampler (1020) for resampling the blocks of the result sequence or for resampling the sequences of blocks of spectral values to obtain a resampled sequence of blocks of spectral values, wherein a block of the resampled sequence of blocks has spectral values up to a maximum output frequency (1231, 1221) being different from the maximum input frequency (1211); a spectral-time converter (1030) for converting the resampled sequence of blocks or the result sequence of blocks into a time domain representation comprising an output sequence of blocks of sampling values; and a core encoder (1040) for encoding the output sequence of blocks.

Patent Information

Application #: 201737041315
Filing Date: 18 November 2017
Publication Number: 49/2017
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Email:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-11-06
Renewal Date:

Applicants

FRAUNHOFER GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
Hansastraße 27c 80686 München

Inventors

1. FUCHS Guillaume
Joseph Otto Kolb Str. 31 91088 Bubenreuth
2. RAVELLI Emmanuel
Gerhart Hauptmann Str. 1 91058 Erlangen
3. MULTRUS Markus
Etzlaubweg 7 90469 Nürnberg
4. SCHNELL Markus
Labenwolfstr. 15 90409 Nürnberg
5. DÖHLA Stefan
Saidelsteig 61 91058 Erlangen
6. DIETZ Martin
Deutschherrnstr. 37 90429 Nürnberg
7. MARKOVIC Goran
Aachener Str. 19 90425 Nürnberg
8. FOTOPOULOU Eleni
Berckhauserstr. 33 90409 Nürnberg
9. BAYER Stefan
Dortmunder Str. 14 90425 Nürnberg
10. JÄGERS Wolfgang
Kulmbacher Str. 47 91056 Erlangen

Specification

APPARATUSES AND METHODS FOR ENCODING OR DECODING AN AUDIO MULTI-CHANNEL SIGNAL USING SPECTRAL-DOMAIN RESAMPLING

Description

The present application is related to stereo processing or, generally, multi-channel processing, where a multi-channel signal has two channels, such as a left channel and a right channel in the case of a stereo signal, or more than two channels, such as three, four, five or any other number of channels.

Stereo speech, and particularly conversational stereo speech, has received much less scientific attention than storage and broadcasting of stereophonic music. Indeed, in speech communications, monophonic transmission is still mostly used nowadays. However, with the increase of network bandwidth and capacity, it is envisioned that communications based on stereophonic technologies will become more popular and bring a better listening experience.

Efficient coding of stereophonic audio material has long been studied in perceptual audio coding of music for efficient storage or broadcasting. At high bitrates, where waveform preserving is crucial, sum-difference stereo, known as mid/side (M/S) stereo, has been employed for a long time. For low bit-rates, intensity stereo and, more recently, parametric stereo coding have been introduced. The latter technique was adopted in different standards such as HE-AACv2 and MPEG USAC. It generates a downmix of the two-channel signal and associates compact spatial side information.

Joint stereo coding is usually built on a high frequency resolution, i.e. low time resolution, time-frequency transformation of the signal and is then not compatible with the low-delay and time-domain processing performed in most speech coders. Moreover, the engendered bit-rate is usually high. On the other hand, parametric stereo employs an extra filter-bank positioned in the front-end of the encoder as a pre-processor and in the back-end of the decoder as a post-processor. Therefore, parametric stereo can be used with conventional speech coders like ACELP, as it is done in MPEG USAC. Moreover, the parametrization of the auditory scene can be achieved with a minimum amount of side information, which is suitable for low bit-rates. However, parametric stereo, as for example in MPEG USAC, is not specifically designed for low delay and does not deliver consistent quality for different conversational scenarios.

In the conventional parametric representation of the spatial scene, the width of the stereo image is artificially reproduced by a decorrelator applied on the two synthesized channels and controlled by Inter-channel Coherence (IC) parameters computed and transmitted by the encoder. For most stereo speech, this way of widening the stereo image is not appropriate for recreating the natural ambience of speech, which is a rather direct sound since it is produced by a single source located at a specific position in space (with sometimes some reverberation from the room). By contrast, music instruments have much more natural width than speech, which can be better imitated by decorrelating the channels.

Problems also occur when speech is recorded with non-coincident microphones, like in an A-B configuration when microphones are distant from each other, or for binaural recording or rendering. Those scenarios can be envisioned for capturing speech in teleconferences or for creating a virtual auditory scene with distant speakers in the multipoint control unit (MCU).
The time of arrival of the signal is then different from one channel to the other, unlike for recordings done with coincident microphones like X-Y (intensity recording) or MS (Mid-Side recording). The coherence of two such non-time-aligned channels can then be wrongly estimated, which makes the artificial ambience synthesis fail. Prior art references related to stereo processing are US Patent 5,434,948 and US Patent 8,811,621.

Document WO 2006/089570 A1 discloses a near-transparent or transparent multi-channel encoder/decoder scheme. A multi-channel encoder/decoder scheme additionally generates a waveform-type residual signal. This residual signal is transmitted together with one or more multi-channel parameters to a decoder. In contrast to a purely parametric multi-channel decoder, the enhanced decoder generates a multi-channel output signal having an improved output quality because of the additional residual signal. On the encoder-side, a left channel and a right channel are both filtered by an analysis filter-bank. Then, for each subband signal, an alignment value and a gain value are calculated for a subband. Such an alignment is then performed before further processing. On the decoder-side, a de-alignment and a gain processing are performed and the corresponding signals are then synthesized by a synthesis filter-bank in order to generate a decoded left signal and a decoded right signal.

On the other hand, parametric stereo employs an extra filter-bank positioned in the front-end of the encoder as a pre-processor and in the back-end of the decoder as a post-processor. Therefore, parametric stereo can be used with conventional speech coders like ACELP, as it is done in MPEG USAC. Moreover, the parametrization of the auditory scene can be achieved with a minimum amount of side information, which is suitable for low bit-rates. However, parametric stereo, as for example in MPEG USAC, is not specifically designed for low delay, and the overall system shows a very high algorithmic delay.

It is an object of the present invention to provide an improved concept for multi-channel encoding/decoding which is efficient and able to obtain a low delay. This object is achieved by an apparatus for encoding a multi-channel signal in accordance with claim 1, a method of encoding a multi-channel signal in accordance with claim 24, an apparatus for decoding an encoded multi-channel signal in accordance with claim 25, a method of decoding an encoded multi-channel signal in accordance with claim 42, or a computer program in accordance with claim 43.

The present invention is based on the finding that at least a portion, and preferably all parts, of the multi-channel processing, i.e., of a joint multi-channel processing, are performed in a spectral domain. Specifically, it is preferred to perform the downmix operation of the joint multi-channel processing in the spectral domain and, additionally, temporal and phase alignment operations or even procedures for analyzing parameters for the joint stereo/joint multi-channel processing. Additionally, the spectral domain resampling is performed either subsequent to the multi-channel processing or even before the multi-channel processing in order to provide an output signal from a further spectral-time converter that is already at an output sampling rate required by a subsequently connected core encoder.
On the decoder-side, it is preferred to once again perform at least an operation for generating a first channel signal and a second channel signal from a downmix signal in the spectral domain and, preferably, to perform even the whole inverse multi-channel processing in the spectral domain. Furthermore, the time-spectral converter is provided for converting the core decoded signal into a spectral domain representation and, within the frequency domain, the inverse multi-channel processing is performed. A spectral domain resampling is either performed before the multi-channel inverse processing or is performed subsequent to the multi-channel inverse processing in such a way that, in the end, a spectral-time converter converts a spectrally resampled signal into the time domain at an output sampling rate that is intended for the time domain output signal.

Therefore, the present invention makes it possible to completely avoid any computationally intensive time-domain resampling operations. Instead, the multi-channel processing is combined with the resampling. The spectral domain resampling is, in preferred embodiments, either performed by truncating the spectrum in the case of downsampling or by zero padding the spectrum in the case of upsampling. These easy operations, i.e., truncating the spectrum on the one hand or zero padding the spectrum on the other hand, and preferably additional scalings in order to account for certain normalization operations performed in spectral-domain/time-domain conversion algorithms such as a DFT or FFT algorithm, complete the spectral domain resampling operation in a very efficient and low-delay manner.

Furthermore, it has been found that at least a portion or even the whole joint stereo processing/joint multi-channel processing on the encoder-side and the corresponding inverse multi-channel processing on the decoder-side are suitable for being executed in the frequency domain. This is not only valid for the downmix operation as a minimum joint multi-channel processing on the encoder-side or an upmix processing as a minimum inverse multi-channel processing on the decoder-side. Instead, even a stereo scene analysis and time/phase alignments on the encoder-side, or phase and time de-alignments on the decoder-side, can be performed in the spectral domain as well. The same applies to the preferably performed Side channel encoding on the encoder-side, or to the Side channel synthesis and its usage for the generation of the two decoded output channels on the decoder-side.

Therefore, an advantage of the present invention is to provide a new stereo coding scheme much more suitable for conversational stereo speech than the existing stereo coding schemes. Embodiments of the present invention provide a new framework for achieving a low-delay stereo codec and for integrating a common stereo tool performed in the frequency domain for both a speech core coder and an MDCT-based core coder within a switched audio codec.

Embodiments of the present invention relate to a hybrid approach mixing elements from conventional M/S stereo and parametric stereo. Embodiments use some aspects and tools from joint stereo coding and others from parametric stereo. More particularly, embodiments adopt the extra time-frequency analysis and synthesis done at the front-end of the encoder and at the back-end of the decoder. The time-frequency decomposition and inverse transform is achieved by employing either a filter-bank or a block transform with complex values.
From the two-channel or multi-channel input, the stereo or multi-channel processing combines and modifies the input channels into output channels referred to as Mid and Side signals (MS). Embodiments of the present invention provide a solution for reducing the algorithmic delay introduced by a stereo module and particularly by the framing and windowing of its filter-bank. It provides a multi-rate inverse transform for feeding a switched coder like 3GPP EVS, or a coder switching between a speech coder like ACELP and a generic audio coder like TCX, by producing the same stereo-processed signal at different sampling rates. Moreover, it provides a windowing adapted to the different constraints of the low-delay and low-complexity system as well as to the stereo processing. Furthermore, embodiments provide a method for combining and resampling different decoded synthesis results in the spectral domain, where the inverse stereo processing is applied as well.

Preferred embodiments of the present invention comprise a multi-functional spectral domain resampler not only generating a single spectral-domain resampled block of spectral values but, additionally, a further resampled sequence of blocks of spectral values corresponding to a different, higher or lower, sampling rate. Furthermore, the multi-channel encoder is configured to additionally provide an output signal at the output of the spectral-time converter that has the same sampling rate as the original first and second channel signals input into the time-spectral converter on the encoder-side. Thus, the multi-channel encoder provides, in embodiments, at least one output signal at the original input sampling rate, which is preferably used for an MDCT-based encoding. Additionally, at least one output signal is provided at an intermediate sampling rate that is specifically useful for ACELP coding, and a further output signal is provided at a further output sampling rate that is also useful for ACELP encoding but that is different from the other output sampling rate. These procedures can be performed either for the Mid signal or for the Side signal or for both signals derived from the first and the second channel signal of a multi-channel signal, where the first signal can also be a left signal and the second signal can be a right signal in the case of a stereo signal only having two channels (in addition to, for example, a low-frequency enhancement channel).

In further embodiments, the core encoder of the multi-channel encoder is configured to operate in accordance with a framing control, and the time-spectral converter and the spectrum-time converter of the stereo pre-processor and resampler are also configured to operate in accordance with a further framing control which is synchronized to the framing control of the core encoder. The synchronization is performed in such a way that a start frame border or an end frame border of each frame of a sequence of frames of the core encoder is in a predetermined relation to a start instant or an end instant of an overlapping portion of a window used by the time-spectral converter or the spectral-time converter for each block of the sequence of blocks of sampling values or for each block of the resampled sequence of blocks of spectral values. Thus, it is assured that the subsequent framing operations operate in synchrony with each other.
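The multi-rate inverse transform described above can be pictured with a short numerical sketch: the stereo-processed Mid spectrum is computed once at the input rate and then inverse-transformed at several sizes after spectral-domain resampling, so that the same block is obtained at the input rate and at further rates for the other core-coder branches. This is only an illustration of the principle; the helper name, the block size of 960 samples, and the 48/16/12.8 kHz rates below are assumptions and are not taken from the specification.

```python
import numpy as np

def spectral_resample(X, Ny):
    """Illustrative helper: truncate (downsample) or zero pad (upsample) a DFT
    spectrum to Ny bins and apply the Ny/Nx scaling that is needed when the
    inverse DFT is 1/N-normalized (cf. the Fig. 3a convention)."""
    Nx = len(X)
    k = min(Nx, Ny) // 2
    Y = np.zeros(Ny, dtype=complex)
    Y[:k], Y[-k:] = X[:k], X[-k:]
    return Y * (Ny / Nx)

fs_in = 48000
mid_block = np.random.randn(960)        # assumed 20 ms windowed Mid block at 48 kHz
M = np.fft.fft(mid_block)               # one forward DFT at the input rate

# several inverse DFTs of different sizes feed the different core-coder branches;
# 12.8 kHz and 16 kHz are used here merely as typical ACELP-style internal rates
outputs = {}
for fs_out in (48000, 16000, 12800):
    Ny = int(len(M) * fs_out / fs_in)   # 960, 320 and 256 bins
    outputs[fs_out] = np.fft.ifft(spectral_resample(M, Ny)).real
```

In an actual encoder the forward transform would of course operate on properly windowed, overlapping blocks; the sketch only shows how one spectrum can serve several output sampling rates.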
In further embodiments, a look-ahead operation with a look-ahead portion is performed by the core encoder. In this embodiment, it is preferred that the look-ahead portion is also used by an analysis window of the time-spectral converter, where an overlap portion of the analysis window is used that has a length in time being lower than or equal to the length in time of the look-ahead portion. Thus, by making the look-ahead portion of the core encoder and the overlap portion of the analysis window equal to each other, or by making the overlap portion even smaller than the look-ahead portion of the core encoder, the time-spectral analysis of the stereo pre-processor can be implemented without any additional algorithmic delay. In order to make sure that this windowed look-ahead portion does not influence the core encoder look-ahead functionality too much, it is preferred to redress this portion using an inverse of the analysis window function. In order to make sure that this is done with good stability, a square root of sine window shape is used instead of a sine window shape as the analysis window, and a sine to the power of 1.5 synthesis window is used for the purpose of synthesis windowing before performing the overlap operation at the output of the spectral-time converter. Thus, it is made sure that the redressing function assumes values that are reduced with respect to their magnitudes compared to a redressing function being the inverse of a sine function.

On the decoder-side, however, it is preferred to use the same analysis and synthesis window shapes, since there is no redressing required, of course. On the other hand, it is preferred to use a time gap on the decoder-side, where the time gap exists between an end of a leading overlapping portion of an analysis window of the time-spectral converter on the decoder-side and a time instant at the end of a frame output by the core decoder on the multi-channel decoder-side. Thus, the core decoder output samples within this time gap are not required immediately for the purpose of analysis windowing by the stereo post-processor, but are only required for the processing/windowing of the next frame. Such a time gap can be, for example, implemented by using a non-overlapping portion, typically in the middle of an analysis window, which results in a shortening of the overlapping portion. However, other alternatives for implementing such a time gap can be used as well, but implementing the time gap by the non-overlapping portion in the middle is the preferred way. Thus, this time gap can be used for other core decoder operations or for smoothing operations, preferably between switching events when the core decoder switches from a frequency-domain to a time-domain frame, or for any other smoothing operations that may be useful when parameter changes or coding characteristic changes have occurred.
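The effect of this window design can be checked numerically. The sketch below is an illustration only (the overlap length of 160 samples is an arbitrary assumption): it builds the rising slope of a sine window, derives the square-root-of-sine analysis window and the sine-to-the-power-of-1.5 synthesis window, verifies that their product still satisfies the usual sin² + cos² = 1 overlap-add condition, and compares the redressing gain 1/analysis with the 1/sine gain that a plain sine analysis window would require.

```python
import numpy as np

L = 160                                     # assumed overlap length in samples
n = np.arange(L)
rise = np.sin(np.pi * (n + 0.5) / (2 * L))  # rising slope of a sine window
fall = rise[::-1]                           # falling slope of the preceding window

ana_r, syn_r = np.sqrt(rise), rise ** 1.5   # sqrt(sine) analysis, sine^1.5 synthesis
ana_f, syn_f = np.sqrt(fall), fall ** 1.5

# analysis * synthesis equals sine^2, so overlap-add still reconstructs exactly,
# just as for a conventional sine/sine window pair
print(np.allclose(ana_r * syn_r + ana_f * syn_f, 1.0))       # True

# redressing the windowed look-ahead portion applies 1/analysis; the sqrt(sine)
# analysis window keeps this gain far smaller near the window border than 1/sine
print(round((1.0 / ana_r).max(), 1), round((1.0 / rise).max(), 1))
```

The overlap-add property is thus unchanged, while the values that must be amplified to redress the look-ahead portion are much less attenuated, which is the stability argument made above.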
Subsequently, preferred embodiments of the present invention are discussed in detail with respect to the accompanying drawings, in which: Fig. 1 is a block diagram of an embodiment of the multi-channel encoder; Fig. 2 illustrates embodiments of the spectral domain resampling; Figs. 3a to 3c illustrate different alternatives for performing time/frequency or frequency/time conversions with different normalizations and corresponding scalings in the spectral domain; Fig. 3d illustrates different frequency resolutions and other frequency-related aspects for certain embodiments; Fig. 4a illustrates a block diagram of an embodiment of an encoder; Fig. 4b illustrates a block diagram of a corresponding embodiment of a decoder; Fig. 5 illustrates a preferred embodiment of a multi-channel encoder; Fig. 6 illustrates a block diagram of an embodiment of a multi-channel decoder; Fig. 7a illustrates a further embodiment of a multi-channel decoder comprising a combiner; Fig. 7b illustrates a further embodiment of a multi-channel decoder additionally comprising the combiner (addition); Fig. 8a illustrates a table showing different characteristics of windows for several sampling rates; Fig. 8b illustrates different proposals/embodiments for a DFT filter-bank as an implementation of the time-spectral converter and a spectrum-time converter; Fig. 8c illustrates a sequence of two analysis windows of a DFT with a time resolution of 10 ms; Fig. 9a illustrates an encoder schematic windowing in accordance with a first proposal/embodiment; Fig. 9b illustrates a decoder schematic windowing in accordance with the first proposal/embodiment; Fig. 9c illustrates the windows at the encoder and the decoder in accordance with the first proposal/embodiment; Fig. 9d illustrates a preferred flowchart illustrating the redressing embodiment; Fig. 9e illustrates a flowchart further illustrating the redressing embodiment; Fig. 9f illustrates a flowchart for explaining the time gap decoder-side embodiment; further figures illustrate an encoder schematic windowing in accordance with the fourth proposal/embodiment; a decoder schematic windowing in accordance with the fourth proposal/embodiment; windows at the encoder and the decoder in accordance with the fourth proposal/embodiment; an encoder schematic windowing in accordance with the fifth proposal/embodiment; a decoder schematic windowing in accordance with the fifth proposal/embodiment; the encoder and the decoder in accordance with the fifth proposal/embodiment; a block diagram of a preferred implementation of the multi-channel processing using a downmix in the signal processor; a preferred embodiment of the inverse multi-channel processing with an upmix operation within the signal processor; a flowchart of procedures performed in the apparatus for encoding for the purpose of aligning the channels; a preferred embodiment of procedures performed in the frequency domain; and a preferred embodiment of procedures performed in the apparatus for encoding using an analysis window with zero padding portions and overlap ranges; Fig. 14d illustrates a flowchart for further procedures performed within an embodiment of the apparatus for encoding; Fig. 15a illustrates procedures performed by an embodiment of the apparatus for decoding and encoding multi-channel signals; Fig. 15b illustrates a preferred implementation of the apparatus for decoding with respect to some aspects; and Fig. 15c illustrates a procedure performed in the context of broadband de-alignment in the framework of the decoding of an encoded multi-channel signal.

Fig. 1 illustrates an apparatus for encoding a multi-channel signal comprising at least two channels 1001, 1002. The first channel 1001 can be a left channel, and the second channel 1002 can be a right channel in the case of a two-channel stereo scenario. However, in the case of a multi-channel scenario, the first channel 1001 and the second channel 1002 can be any of the channels of the multi-channel signal such as, for example, the left channel on the one hand and the left surround channel on the other hand, or the right channel on the one hand and the right surround channel on the other hand.
These channel pairings, however, are only examples, and other channel pairings can be applied as the case requires. The multi-channel encoder of Fig. 1 comprises a time-spectral converter 1000 for converting sequences of blocks of sampling values of the at least two channels into a frequency-domain representation at the output of the time-spectral converter. Each frequency-domain representation has a sequence of blocks of spectral values for one of the at least two channels. Particularly, a block of sampling values of the first channel 1001 or the second channel 1002 has an associated input sampling rate, and a block of spectral values of the sequences at the output of the time-spectral converter has spectral values up to a maximum input frequency being related to the input sampling rate.

The time-spectral converter is, in the embodiment illustrated in Fig. 1, connected to the multi-channel processor 1010. This multi-channel processor is configured for applying a joint multi-channel processing to the sequences of blocks of spectral values to obtain at least one result sequence of blocks of spectral values comprising information related to the at least two channels. A typical multi-channel processing operation is a downmix operation, but the preferred multi-channel operation comprises additional procedures that will be described later on.

In an alternative embodiment, the time-spectral converter 1000 is connected to the spectral domain resampler 1020, and an output of the spectral domain resampler 1020 is input into the multi-channel processor 1010. This is illustrated by the broken connection lines 1021, 1022. In this alternative embodiment, the multi-channel processor is configured for applying the joint multi-channel processing not to the sequences of blocks of spectral values as output by the time-spectral converter, but to resampled sequences of blocks as available on connection lines 1022.

The spectral domain resampler 1020 is configured for resampling the result sequence generated by the multi-channel processor or for resampling the sequences of blocks output by the time-spectral converter 1000 to obtain a resampled sequence of blocks of spectral values that may represent a Mid signal, as illustrated at line 1025. Preferably, the spectral domain resampler additionally applies resampling to the Side signal generated by the multi-channel processor and, therefore, also outputs a resampled sequence corresponding to the Side signal, as illustrated at 1026. However, the generation and resampling of the Side signal is optional and is not required for a low bit rate implementation. Preferably, the spectral domain resampler 1020 is configured for truncating blocks of spectral values for the purpose of downsampling or for zero padding the blocks of spectral values for the purpose of upsampling.

The multi-channel encoder additionally comprises a spectral-time converter 1030 for converting the resampled sequence of blocks of spectral values into a time-domain representation comprising an output sequence of blocks of sampling values having an associated output sampling rate which is different from the input sampling rate. In alternative embodiments, where the spectral domain resampling is performed before the multi-channel processing, the multi-channel processor provides the result sequence via broken line 1023 directly to the spectral-time converter 1030. In this alternative embodiment, an optional feature is that, additionally, the Side signal is generated by the multi-channel processor already in the resampled representation, and the Side signal is then also processed by the spectral-time converter. In the end, the spectral-time converter preferably provides a time-domain Mid signal 1031 and an optional time-domain Side signal 1032, which can both be core-encoded by the core encoder 1040. Generally, the core encoder is configured for core encoding the output sequence of blocks of sampling values to obtain the encoded multi-channel signal.
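The two alternative positions of the spectral domain resampler relative to the joint multi-channel processing can be illustrated with a short sketch. The block names and internals below are stand-ins chosen for illustration (the joint processing is reduced to a plain 0.5·(L+R) downmix, which is only a minimal assumed example); they are not the apparatus of the specification. Because both the downmix and the resampler are linear, the two orderings coincide for this simple case.

```python
import numpy as np

# Illustrative stand-ins for the blocks of Fig. 1 (names and internals are assumptions)
def time_spectral_converter(block):            # 1000: DFT of one windowed channel block
    return np.fft.fft(block)

def multi_channel_processor(L, R):             # 1010: minimal joint processing = downmix to Mid
    return 0.5 * (L + R)

def spectral_domain_resampler(X, Ny):          # 1020: truncate/zero pad plus Ny/Nx scaling
    k = min(len(X), Ny) // 2
    Y = np.zeros(Ny, dtype=complex)
    Y[:k], Y[-k:] = X[:k], X[-k:]
    return Y * (Ny / len(X))

def spectral_time_converter(Y):                # 1030: inverse DFT back to the time domain
    return np.fft.ifft(Y).real

left, right = np.random.randn(960), np.random.randn(960)
L, R = time_spectral_converter(left), time_spectral_converter(right)
Ny = 256

# default order (solid lines): joint processing first, then spectral-domain resampling
mid_a = spectral_time_converter(spectral_domain_resampler(multi_channel_processor(L, R), Ny))

# alternative order (broken lines 1021, 1022): resample first, then joint processing
mid_b = spectral_time_converter(multi_channel_processor(
    spectral_domain_resampler(L, Ny), spectral_domain_resampler(R, Ny)))

print(np.allclose(mid_a, mid_b))               # True for this simple linear downmix
```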
Fig. 2 illustrates spectral charts that are useful for explaining the spectral domain resampling. The upper chart in Fig. 2 illustrates a spectrum of a channel as available at the output of the time-spectral converter 1000. This spectrum 1210 has spectral values up to the maximum input frequency 1211. In the case of upsampling, a zero padding is performed within the zero padding portion or zero padding region 1220 that extends until the maximum output frequency 1221. The maximum output frequency 1221 is greater than the maximum input frequency 1211, since an upsampling is intended. Contrary thereto, the lowest chart in Fig. 2 illustrates the procedures incurred by downsampling a sequence of blocks. To this end, a block is truncated within a truncated region 1230 so that the maximum output frequency 1231 of the truncated spectrum is lower than the maximum input frequency 1211.

Typically, the sampling rate associated with a corresponding spectrum in Fig. 2 is at least two times the maximum frequency of the spectrum. Thus, for the upper case in Fig. 2, the sampling rate will be at least two times the maximum input frequency 1211. In the second chart of Fig. 2, the sampling rate will be at least two times the maximum output frequency 1221, i.e., the highest frequency of the zero padding region 1220. Contrary thereto, in the lowest chart in Fig. 2, the sampling rate will be at least two times the maximum output frequency 1231, i.e., the highest spectral value remaining subsequent to the truncation within the truncated region 1230.

Figs. 3a to 3c illustrate several alternatives that can be used in the context of certain DFT forward or backward transform algorithms. In Fig. 3a, a situation is considered where a DFT with a size x is performed and where no normalization occurs in the forward transform algorithm 1311. At block 1331, a backward transform with a different size y is illustrated, where a normalization with 1/Ny is performed. Ny is the number of spectral values of the backward transform with size y. Then, it is preferred to perform a scaling by Ny/Nx, as illustrated by block 1321. Contrary thereto, Fig. 3b illustrates an implementation where the normalization is distributed between the forward transform 1312 and the backward transform 1332. Then a scaling is required, as illustrated in block 1322, where the square root of the ratio of the number of spectral values of the backward transform to the number of spectral values of the forward transform is used. Fig. 3c illustrates a further implementation, where the whole normalization is performed in the forward transform with the size x. Then, the backward transform, as illustrated in block 1333, operates without any normalization, so that no scaling is required, as illustrated by the schematic block 1323 in Fig. 3c. Thus, depending on the specific algorithm, certain scaling operations or even no scaling operations are required. It is, however, preferred to operate in accordance with Fig. 3a.
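The three normalization alternatives of Figs. 3a to 3c and their associated scalings can be checked with a few lines of code. numpy's fft/ifft pair follows the Fig. 3a convention (unnormalized forward, 1/N-normalized inverse), so the other two conventions are emulated explicitly below; the transform sizes are arbitrary and purely illustrative, and this is a sanity-check sketch rather than code from the specification.

```python
import numpy as np

Nx, Ny = 320, 256                      # illustrative forward/backward transform sizes
x = np.random.randn(Nx)

def truncate(X, Ny):                   # keep the lowest Ny bins (downsampling)
    k = Ny // 2
    return np.concatenate([X[:k], X[-k:]])

# Fig. 3a: unnormalized forward, 1/Ny inverse      -> scale by Ny/Nx (block 1321)
ya = np.fft.ifft(truncate(np.fft.fft(x), Ny) * (Ny / Nx))

# Fig. 3b: 1/sqrt(N) on both transforms            -> scale by sqrt(Ny/Nx) (block 1322)
Xb = np.fft.fft(x) / np.sqrt(Nx)                        # 1/sqrt(Nx)-normalized forward
yb = np.fft.ifft(truncate(Xb, Ny) * np.sqrt(Ny / Nx))   # numpy ifft applies 1/Ny ...
yb *= np.sqrt(Ny)                                       # ... restore a 1/sqrt(Ny) inverse

# Fig. 3c: full 1/Nx normalization in the forward  -> no extra scaling (block 1323)
Xc = np.fft.fft(x) / Nx
yc = np.fft.ifft(truncate(Xc, Ny)) * Ny                 # * Ny removes numpy's 1/Ny

print(np.allclose(ya, yb), np.allclose(ya, yc))         # True True
```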
In order to keep the overall delay low, the present invention provides a method at the encoder-side for avoiding the need of a time-domain resampler by replacing it with a resampling of the signals in the DFT domain. For example, in EVS this allows saving the 0.9375 ms of delay coming from the time-domain resampler. The resampling in the frequency domain is achieved by zero padding or truncating the spectrum and scaling it correctly.

Consider an input windowed signal x sampled at the rate fx with a spectrum X of size Nx, and a version y of the same signal resampled at the rate fy with a spectrum Y of size Ny. The sampling factor is then equal to fy/fx = Ny/Nx. In the case of downsampling, Nx > Ny, and the downsampling can be simply performed in the frequency domain by directly scaling and truncating the original spectrum X, i.e. by retaining only the Ny lowest-frequency bins of X and scaling them by Ny/Nx. In the case of upsampling, Nx < Ny, and the upsampling can be performed in the frequency domain by scaling the original spectrum X by Ny/Nx and zero padding it up to the size Ny.
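As a sanity check of these relations (a sketch under assumed sizes, not code from the specification): when the ratio Nx/Ny is an integer and the block is band-limited below the new Nyquist frequency, truncating the spectrum and scaling it by Ny/Nx reproduces exactly the samples that a time-domain resampler would deliver.

```python
import numpy as np

fx, fy = 48000, 16000            # assumed rates; an integer ratio keeps the check exact
Nx, Ny = 960, 320                # 20 ms blocks at both rates (illustrative sizes)

# build a band-limited block: only DFT bins below fy/2 are occupied
rng = np.random.default_rng(0)
X = np.zeros(Nx, dtype=complex)
bins = np.arange(1, Ny // 2)
X[bins] = rng.standard_normal(len(bins)) + 1j * rng.standard_normal(len(bins))
X[-bins] = np.conj(X[bins])      # Hermitian symmetry -> real time-domain signal
x = np.fft.ifft(X).real

# downsampling in the DFT domain: truncate to Ny bins and scale by Ny/Nx
Y = np.concatenate([X[:Ny // 2], X[-(Ny // 2):]]) * (Ny / Nx)
y = np.fft.ifft(Y).real

print(np.allclose(y, x[::Nx // Ny]))   # True: identical to time-domain decimation
```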

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 201737041315-IntimationOfGrant06-11-2023.pdf 2023-11-06
2 201737041315-PatentCertificate06-11-2023.pdf 2023-11-06
3 201737041315-Information under section 8(2) [12-10-2023(online)].pdf 2023-10-12
4 201737041315-FORM 3 [11-10-2023(online)].pdf 2023-10-11
5 201737041315-Annexure [05-07-2023(online)].pdf 2023-07-05
6 201737041315-FORM 13 [05-07-2023(online)].pdf 2023-07-05
7 201737041315-RELEVANT DOCUMENTS [05-07-2023(online)].pdf 2023-07-05
8 201737041315-Written submissions and relevant documents [05-07-2023(online)].pdf 2023-07-05
9 201737041315-Information under section 8(2) [06-06-2023(online)].pdf 2023-06-06
10 201737041315-Correspondence to notify the Controller [10-05-2023(online)].pdf 2023-05-10
11 201737041315-FORM-26 [10-05-2023(online)]-1.pdf 2023-05-10
12 201737041315-FORM-26 [10-05-2023(online)].pdf 2023-05-10
13 201737041315-US(14)-HearingNotice-(HearingDate-20-06-2023).pdf 2023-04-20
14 201737041315-FORM 3 [04-04-2023(online)].pdf 2023-04-04
15 201737041315-Information under section 8(2) [04-04-2023(online)].pdf 2023-04-04
16 201737041315-Information under section 8(2) [06-12-2022(online)].pdf 2022-12-06
17 201737041315-FORM 3 [17-10-2022(online)]-1.pdf 2022-10-17
18 201737041315-FORM 3 [17-10-2022(online)].pdf 2022-10-17
19 201737041315-Information under section 8(2) [17-10-2022(online)].pdf 2022-10-17
20 201737041315-Information under section 8(2) [28-09-2022(online)].pdf 2022-09-28
21 201737041315-Information under section 8(2) [31-05-2022(online)].pdf 2022-05-31
22 201737041315-Information under section 8(2) [06-05-2022(online)].pdf 2022-05-06
23 201737041315-FORM 3 [07-04-2022(online)].pdf 2022-04-07
24 201737041315-Information under section 8(2) [23-12-2021(online)].pdf 2021-12-23
25 201737041315-Information under section 8(2) [17-11-2021(online)].pdf 2021-11-17
26 201737041315-Information under section 8(2) [19-10-2021(online)].pdf 2021-10-19
27 201737041315-FORM 3 [18-10-2021(online)].pdf 2021-10-18
28 201737041315-Information under section 8(2) [24-08-2021(online)].pdf 2021-08-24
29 201737041315-Information under section 8(2) [29-07-2021(online)].pdf 2021-07-29
30 201737041315-Information under section 8(2) [09-06-2021(online)].pdf 2021-06-09
31 201737041315-Information under section 8(2) [04-05-2021(online)].pdf 2021-05-04
32 201737041315-FORM 3 [07-04-2021(online)].pdf 2021-04-07
33 201737041315-Information under section 8(2) [04-01-2021(online)].pdf 2021-01-04
34 201737041315-Information under section 8(2) [31-12-2020(online)].pdf 2020-12-31
35 201737041315-Information under section 8(2) [05-11-2020(online)].pdf 2020-11-05
36 201737041315-FORM 3 [31-10-2020(online)].pdf 2020-10-31
37 201737041315-CLAIMS [18-09-2020(online)].pdf 2020-09-18
38 201737041315-COMPLETE SPECIFICATION [18-09-2020(online)].pdf 2020-09-18
39 201737041315-DRAWING [18-09-2020(online)].pdf 2020-09-18
40 201737041315-ENDORSEMENT BY INVENTORS [18-09-2020(online)].pdf 2020-09-18
41 201737041315-FER_SER_REPLY [18-09-2020(online)].pdf 2020-09-18
42 201737041315-FORM-26 [18-09-2020(online)].pdf 2020-09-18
43 201737041315-OTHERS [18-09-2020(online)].pdf 2020-09-18
44 201737041315-PETITION UNDER RULE 137 [18-09-2020(online)].pdf 2020-09-18
45 201737041315-RELEVANT DOCUMENTS [18-09-2020(online)].pdf 2020-09-18
46 201737041315-Certified Copy of Priority Document [02-09-2020(online)].pdf 2020-09-02
47 201737041315-FORM 3 [31-08-2020(online)].pdf 2020-08-31
48 201737041315-Information under section 8(2) [31-08-2020(online)].pdf 2020-08-31
49 201737041315-FORM 3 [18-06-2020(online)].pdf 2020-06-18
50 201737041315-Information under section 8(2) [18-06-2020(online)].pdf 2020-06-18
51 201737041315-FER.pdf 2020-03-18
52 201737041315-Information under section 8(2) (MANDATORY) [23-12-2019(online)].pdf 2019-12-23
53 201737041315-Information under section 8(2) (MANDATORY) [15-11-2019(online)].pdf 2019-11-15
54 201737041315-Information under section 8(2) (MANDATORY) [16-10-2019(online)].pdf 2019-10-16
55 201737041315-Information under section 8(2) (MANDATORY) [08-07-2019(online)].pdf 2019-07-08
56 201737041315-Information under section 8(2) (MANDATORY) [07-06-2019(online)].pdf 2019-06-07
57 201737041315-Information under section 8(2) (MANDATORY) [18-04-2019(online)].pdf 2019-04-18
58 201737041315-Information under section 8(2) (MANDATORY) [11-04-2019(online)].pdf 2019-04-11
59 201737041315-Information under section 8(2) (MANDATORY) [06-12-2018(online)].pdf 2018-12-06
60 201737041315-Information under section 8(2) (MANDATORY) [16-10-2018(online)].pdf 2018-10-16
61 201737041315-Information under section 8(2) (MANDATORY) [13-08-2018(online)].pdf 2018-08-13
62 201737041315-FORM-26 [07-05-2018(online)].pdf 2018-05-07
63 201737041315-Information under section 8(2) (MANDATORY) [11-04-2018(online)].pdf 2018-04-11
64 201737041315-Proof of Right (MANDATORY) [27-02-2018(online)].pdf 2018-02-27
65 201737041315-FORM 18 [23-11-2017(online)].pdf 2017-11-23
66 201737041315-COMPLETE SPECIFICATION [18-11-2017(online)].pdf 2017-11-18
67 201737041315-DECLARATION OF INVENTORSHIP (FORM 5) [18-11-2017(online)].pdf 2017-11-18
68 201737041315-DRAWINGS [18-11-2017(online)].pdf 2017-11-18
69 201737041315-FIGURE OF ABSTRACT [18-11-2017(online)].pdf 2017-11-18
70 201737041315-FORM 1 [18-11-2017(online)].pdf 2017-11-18
71 201737041315-STATEMENT OF UNDERTAKING (FORM 3) [18-11-2017(online)].pdf 2017-11-18

Search Strategy

1 search_07-02-2020.pdf

ERegister / Renewals

3rd: 25 Nov 2023

From 20/01/2019 - To 20/01/2020

4th: 25 Nov 2023

From 20/01/2020 - To 20/01/2021

5th: 25 Nov 2023

From 20/01/2021 - To 20/01/2022

6th: 25 Nov 2023

From 20/01/2022 - To 20/01/2023

7th: 25 Nov 2023

From 20/01/2023 - To 20/01/2024

8th: 25 Nov 2023

From 20/01/2024 - To 20/01/2025

9th: 02 Jan 2025

From 20/01/2025 - To 20/01/2026