
An Apparatus For Determining A Spatial Output Multi Channel Audio Signal

Abstract: An apparatus (100) for determining a spatial output multi-channel audio signal based on an input audio signal and an input parameter. The apparatus (100) comprises a decomposer (110) for decomposing the input audio signal based on the input parameter to obtain a first decomposed signal and a second decomposed signal different from each other. Furthermore, the apparatus (100) comprises a renderer (120) for rendering the first decomposed signal to obtain a first rendered signal having a first semantic property and for rendering the second decomposed signal to obtain a second rendered signal having a second semantic property being different from the first semantic property. The apparatus (100) comprises a processor (130) for processing the first rendered signal and the second rendered signal to obtain the spatial output multi-channel audio signal.


Patent Information

Application #
Filing Date
03 November 2011
Publication Number
35/2016
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application
Patent Number
Legal Status
Grant Date
2020-09-10
Renewal Date

Applicants

FRAUNHOFER-GESELLSCHAFT ZUR FOERDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
HANSASTRASSE 27C, 80686 MUNICH, GERMANY

Inventors

1. SASCHA DISCH
TURNSTR. 7, 90763 FUERTH, GERMANY
2. VILLE PULKKI
YLAEPORTTI 4, A 7, 02210 ESPOO/FINLAND
3. MIKKO-VILLE LAITINEN
ALBERGANESPLANADI 2 A 26, 02600 ESPOO, FINLAND
4. CUMHUR ERKUT
LAUTTASAARENTIE 46 A 2, 00200 HELSINKI, FINLAND

Specification

An Apparatus for Determining a Spatial Output Multi-Channel
Audio Signal
The present invention is in the field of audio processing, especially the processing of spatial audio properties.
Audio processing and/or coding has advanced in many ways. More and more demand is generated for spatial audio applications. In many applications audio signal processing is utilized to decorrelate or render signals. Such applications may, for example, carry out mono-to-stereo up-mix, mono/stereo to multi-channel up-mix, artificial reverberation, stereo widening or user interactive mixing/rendering.
For certain classes of signals, e.g. noise-like signals as for instance applause-like signals, conventional methods and systems suffer from either unsatisfactory perceptual quality or, if an object-oriented approach is used, high computational complexity due to the number of auditory events to be modeled or processed. Other examples of problematic audio material are generally ambience material like, for example, the noise that is emitted by a flock of birds, a sea shore, galloping horses, a division of marching soldiers, etc.
Conventional concepts use, for example, parametric stereo or MPEG Surround coding (MPEG = Moving Pictures Expert Group). Fig. 6 shows a typical application of a decorrelator in a mono-to-stereo up-mixer. Fig. 6 shows a mono input signal provided to a decorrelator 610, which provides a decorrelated input signal at its output. The original input signal is provided to an up-mix matrix 620 together with the decorrelated signal. Dependent on up-mix control parameters 630, a stereo output signal is rendered. The signal decorrelator 610 generates a decorrelated signal D fed to the matrixing stage 620 along with the dry mono signal M. Inside the mixing matrix 620, the stereo channels L (L = left stereo channel) and R (R = right stereo channel) are formed according to a mixing matrix H. The coefficients in the matrix H can be fixed, signal dependent or controlled by a user.
Alternatively, the matrix can be controlled by side information, transmitted along with the down-mix, containing a parametric description on how to up-mix the signals of the down-mix to form the desired multi-channel output. This spatial side information is usually generated by a signal encoder prior to the up-mix process.
This is typically done in parametric spatial audio coding as, for example, in parametric stereo, cf. J. Breebaart, S. van de Par, A. Kohlrausch, E. Schuijers, "High-Quality Parametric Spatial Audio Coding at Low Bitrates" in AES 116th Convention, Berlin, Preprint 6072, May 2004, and in MPEG Surround, cf. J. Herre, K. Kjorling, J. Breebaart et al., "MPEG Surround - the ISO/MPEG Standard for Efficient and Compatible Multi-Channel Audio Coding" in Proceedings of the 122nd AES Convention, Vienna, Austria, May 2007.
A typical structure of a parametric stereo decoder is shown in Fig. 7. In this example, the decorrelation process is performed in a transform domain, which is indicated by the analysis filterbank 710, which transforms an input mono signal to the transform domain as, for example, the frequency domain in terms of a number of frequency bands. In the frequency domain, the decorrelator 720 generates the according decorrelated signal, which is to be up-mixed in the up-mix matrix 730. The up-mix matrix 730 considers up-mix parameters, which are provided by the parameter modification box 740, which is provided with spatial input parameters and coupled to a parameter control stage 750. In the example shown in Fig. 7, the spatial parameters can be modified by a user or additional tools as, for example, post-processing for binaural rendering/presentation. In this case, the up-mix parameters can be merged with the parameters from the binaural filters to form the input parameters for the up-mix matrix 730. The merging of the parameters may be carried out by the parameter modification block 740. The output of the up-mix matrix 730 is then the stereo output signal.
As described above, the output L/R of the mixing matrix H can be computed from the mono input signal M and the decorrelated signal D, for example according to

$$\begin{bmatrix} L \\ R \end{bmatrix} = H \begin{bmatrix} M \\ D \end{bmatrix}.$$
In the mixing matrix, the amount of decorrelated sound fed to the output can be controlled on the basis of transmitted parameters as, for example, ICC (ICC = Inter-channel Correlation) parameters.
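As an illustration of the matrixing described above, the following Python sketch applies a 2x2 matrix H to a mono signal M and a crude delay-based decorrelated copy D to form L and R. The matrix coefficients and the 10 ms delay are illustrative assumptions of this sketch, not values taken from the specification.

```python
import numpy as np

def mix_to_stereo(m, d, h):
    """Form [L; R] = H [M; D] from the dry mono signal m and its
    decorrelated copy d; h is the 2x2 mixing matrix H."""
    lr = h @ np.vstack([m, d])
    return lr[0], lr[1]

fs = 48_000
m = np.random.randn(fs)                  # stand-in mono input M
d = np.roll(m, int(0.010 * fs))          # crude decorrelator D: 10 ms delay
H = np.array([[0.7,  0.5],               # illustrative coefficients only
              [0.7, -0.5]])
left, right = mix_to_stereo(m, d, H)
```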
Another conventional approach is established by the temporal permutation method. A dedicated proposal on decorrelation of applause-like signals can be found, for example, in Gerard Hotho, Steven van de Par, Jeroen Breebaart, "Multichannel Coding of Applause Signals," in EURASIP Journal on Advances in Signal Processing, Vol. 1, Art. 10, 2008. Here, a monophonic audio signal is segmented into overlapping time segments, which are temporally permuted pseudo-randomly within a "super"-block to form the decorrelated output channels. The permutations are mutually independent for a number n of output channels.
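A rough sketch of the temporal permutation idea, under simplifying assumptions (non-overlapping segments and no cross-fading, which the cited method does use):

```python
import numpy as np

def permute_upmix(x, n_channels=2, seg_len=2048, segs_per_block=8, seed=0):
    """Shuffle time segments pseudo-randomly within each "super"-block,
    independently for every output channel."""
    rng = np.random.default_rng(seed)
    n_seg = len(x) // seg_len
    segments = x[:n_seg * seg_len].reshape(n_seg, seg_len)
    out = np.zeros((n_channels, n_seg * seg_len))
    for ch in range(n_channels):
        for start in range(0, n_seg, segs_per_block):
            idx = np.arange(start, min(start + segs_per_block, n_seg))
            perm = rng.permutation(idx)          # mutually independent
            out[ch, start * seg_len:(start + len(idx)) * seg_len] = \
                segments[perm].reshape(-1)
    return out

channels = permute_upmix(np.random.randn(48_000))
```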
Another approach is the alternating channel swap of original and delayed copy in order to obtain a decorrelated signal, cf. German patent application 102007018032.4-55.

In some conventional conceptual object-oriented systems, e.g. in Wagner, Andreas; Walther, Andreas; Melchior, Frank; Strauß, Michael, "Generation of Highly Immersive Atmospheres for Wave Field Synthesis Reproduction" at 116th International AES Convention, Berlin, 2004, it is described how to create an immersive scene out of many objects as, for example, single claps, by application of wave field synthesis.
Yet another approach is the so-called "directional audio coding" (DirAC = Directional Audio Coding), which is a different sound reproduction system, cf. Pulkki, Ville, "Spatial Sound Reproduction with Directional Audio Coding" in J. Audio Eng. Soc., Vol. 55, No. 6, 2007. In the analysis part, the diffuseness and direction of arrival of sound are estimated in a single location dependent on time and frequency. In the synthesis part, microphone signals are first divided into non-diffuse and diffuse parts and are then reproduced using different strategies.
Conventional approaches have a number of disadvantages. For example, guided or unguided up-mix of audio signals having content such as applause may require a strong decorrelation. Consequently, on the one hand, strong decorrelation is needed to restore the ambience sensation of being, for example, in a concert hall. On the other hand, suitable decorrelation filters as, for example, all-pass filters, degrade a reproduction quality of transient events, like a single handclap, by introducing temporal smearing effects such as pre- and post-echoes and filter ringing. Moreover, spatial panning of single clap events has to be done on a rather fine time grid, while ambience decorrelation should be quasi-stationary over time.

State of the art systems according to J. Breebaart, S. van
de Par, A. Kohlrausch, E. Schuijers, "High-Quality
Parametric Spatial Audio Coding at Low Bitrates" in AES
116th Convention, Berlin, Preprint 6072, May 2004 and J.
Herre, K. Kjorling, J. Breebaart et al., "MPEG Surround -
the ISO/MPEG Standard for Efficient and Compatible Multi-
Channel Audio Coding" in Proceedings of the 122nd AES
Convention, Vienna, Austria, May 2007 compromise temporal
resolution vs. ambience stability and transient quality
degradation vs. ambience decorrelation.
A system utilizing the temporal permutation method, for
example, will exhibit perceivable degradation of the output
sound due to a certain repetitive quality in the output
audio signal. This is because of the fact that one and the
same segment of the input signal appears unaltered in every
output channel, though at a different point in time.
Furthermore, to avoid increased applause density, some
original channels have to be dropped in the up-mix and,
thus, some important auditory event might be missed in the
resulting up-mix.
In object-orientated systems, typically such sound events
are spatialized as a large group of point-like sources,
which leads to a computationally complex implementation.
It is the object of the present invention to provide an
improved concept for spatial audio processing.
This object is achieved by an apparatus according to claim
1 and a method according to claim 16.
It is a finding of the present invention that an audio
signal can be decomposed in several components to which a
spatial rendering, for example, in terms of a decorrelation
or in terms of an amplitude-panning approach, can be
adapted. In other words, the present invention is based on
the finding that, for example, in a scenario with multiple

audio sources, foreground and background sources can be
distinguished and rendered or decorrelated differently.
Generally different spatial depths and/or extents of audio
objects can be distinguished.
One of the key points of the present invention is the
decomposition of signals, like the sound originating from
an applauding audience, a flock of birds, a sea shore,
galloping horses, a division of marching soldiers, etc.
into a foreground and a background part, whereby the
foreground part contains single auditory events originated
from, for example, nearby sources and the background part
holds the ambience of the perceptually-fused far-off
events. Prior to final mixing, these two signal parts are
processed separately, for example, in order to synthesize
the correlation, render a scene, etc.
Embodiments are not bound to distinguish only foreground
and background parts of the signal, they may distinguish
multiple different audio parts, which all may be rendered
or decorrelated differently.
In general, audio signals may be decomposed into n
different semantic parts by embodiments, which are
processed separately. The decomposition/separate processing
of different semantic components may be accomplished in the
time and/or in the frequency domain by embodiments.
Embodiments may provide the advantage of superior
perceptual quality of the rendered sound at moderate
computational cost. Embodiments therewith provide a novel
decorrelation/rendering method that offers high perceptual
quality at moderate costs, especially for applause-like
critical audio material or other similar ambience material
like, for example, the noise that is emitted by a flock of
birds, a sea shore, galloping horses, a division of
marching soldiers, etc.

Embodiments of the present invention will be detailed with the help of the accompanying Figs., in which
Fig. 1a shows an embodiment of an apparatus for determining a spatial output multi-channel audio signal;
Fig. 1b shows a block diagram of another embodiment;
Fig. 2 shows an embodiment illustrating a multiplicity
of decomposed signals;
Fig. 3 illustrates an embodiment with a foreground and a
background semantic decomposition;
Fig. 4 illustrates an example of a transient separation
method for obtaining a background signal
component;
Fig. 5 illustrates a synthesis of sound sources having
spatially a large extent;
Fig. 6 illustrates one state of the art application of a
decorrelator in time domain in a mono-to-stereo
up-mixer; and
Fig. 7 shows another state of the art application of a
decorrelator in frequency domain in a mono-to-
stereo up-mixer scenario.
Fig. 1a shows an embodiment of an apparatus 100 for determining a spatial output multi-channel audio signal based on an input audio signal. In some embodiments, the apparatus can be adapted for further basing the spatial output multi-channel audio signal on an input parameter. The input parameter may be generated locally or provided with the input audio signal, for example, as side information.

In the embodiment depicted in Fig. 1a, the apparatus 100 comprises a decomposer 110 for decomposing the input audio signal to obtain a first decomposed signal having a first semantic property and a second decomposed signal having a second semantic property being different from the first semantic property.
The apparatus 100 further comprises a renderer 120 for rendering the first decomposed signal using a first rendering characteristic to obtain a first rendered signal having the first semantic property and for rendering the second decomposed signal using a second rendering characteristic to obtain a second rendered signal having the second semantic property.
A semantic property may correspond to a spatial property, as close or far, focused or wide, and/or a dynamic property, as e.g. whether a signal is tonal, stationary or transient, and/or a dominance property, as e.g. whether the signal is foreground or background, a measure thereof respectively.
Moreover, in the embodiment, the apparatus 100 comprises a processor 130 for processing the first rendered signal and the second rendered signal to obtain the spatial output multi-channel audio signal.
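For orientation, the following is a minimal structural sketch of this decomposer/renderer/processor chain in Python, assuming the three stages are plain callables; the concrete decomposition and rendering methods are only fixed further below and are not part of this sketch.

```python
class Apparatus100:
    """Structural sketch of apparatus 100: decomposer 110, renderer 120
    (split here into one path per decomposed signal) and processor 130."""
    def __init__(self, decomposer, render_first, render_second, processor):
        self.decomposer = decomposer        # 110: input -> (first, second)
        self.render_first = render_first    # 120: first rendering characteristic
        self.render_second = render_second  # 120: second rendering characteristic
        self.processor = processor          # 130: combine rendered signals

    def run(self, x, input_parameter=None):
        first, second = self.decomposer(x, input_parameter)
        r1 = self.render_first(first)       # e.g. amplitude panning
        r2 = self.render_second(second)     # e.g. decorrelation
        return self.processor(r1, r2)       # spatial multi-channel output
```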
In other words, the decomposer 110 is adapted for decomposing the input audio signal, in some embodiments based on the input parameter. The decomposition of the input audio signal is adapted to semantic, e.g. spatial, properties of different parts of the input audio signal.
Moreover, rendering carried out by the renderer 120 according to the first and second rendering characteristics can also be adapted to the spatial properties, which allows, for example in a scenario where the first decomposed signal corresponds to a background audio signal and the second decomposed signal corresponds to a foreground audio signal, that different rendering or decorrelators may be applied, the other way around respectively. In the following, the term "foreground" is understood to refer to an audio object being dominant in an audio environment, such that a potential listener would notice a foreground audio object. A foreground audio object or source may be distinguished or differentiated from a background audio object or source. A background audio object or source may not be noticeable by a potential listener in an audio environment as being less dominant than a foreground audio object or source. In embodiments, foreground audio objects or sources may be, but are not limited to, point-like audio sources, where background audio objects or sources may correspond to spatially wider audio objects or sources.
In other words, in embodiments the first rendering characteristic can be based on or matched to the first semantic property and the second rendering characteristic can be based on or matched to the second semantic property. In one embodiment, the first semantic property and the first rendering characteristic correspond to a foreground audio source or object and the renderer 120 can be adapted to apply amplitude panning to the first decomposed signal. The renderer 120 may then be further adapted for providing as the first rendered signal two amplitude-panned versions of the first decomposed signal. In this embodiment, the second semantic property and the second rendering characteristic correspond to a background audio source or object, a plurality thereof respectively, and the renderer 120 can be adapted to apply a decorrelation to the second decomposed signal and provide as second rendered signal the second decomposed signal and the decorrelated version thereof.
In embodiments, the renderer 120 can be further adapted for rendering the first decomposed signal such that the first rendering characteristic does not have a delay introducing characteristic. In other words, there may be no decorrelation of the first decomposed signal. In another embodiment, the first rendering characteristic may have a delay introducing characteristic having a first delay amount and the second rendering characteristic may have a second delay amount, the second delay amount being greater than the first delay amount. In other words, in this embodiment, both the first decomposed signal and the second decomposed signal may be decorrelated; however, the level of decorrelation may scale with the amount of delay introduced to the respective decorrelated versions of the decomposed signals. The decorrelation may therefore be stronger for the second decomposed signal than for the first decomposed signal.
In embodiments, the first decomposed signal and the second decomposed signal may overlap and/or may be time synchronous. In other words, signal processing may be carried out block-wise, where one block of input audio signal samples may be sub-divided by the decomposer 110 into a number of blocks of decomposed signals. In embodiments, the number of decomposed signals may at least partly overlap in the time domain, i.e. they may represent overlapping time domain samples. In other words, the decomposed signals may correspond to parts of the input audio signal which overlap, i.e. which represent at least partly simultaneous audio signals. In embodiments, the first and second decomposed signals may represent filtered or transformed versions of an original input signal. For example, they may represent signal parts being extracted from a composed spatial signal corresponding, for example, to a close sound source or a more distant sound source. In other embodiments, they may correspond to transient and stationary signal components, etc.
In embodiments, the renderer 120 may be sub-divided into a first renderer and a second renderer, where the first renderer can be adapted for rendering the first decomposed signal and the second renderer can be adapted for rendering the second decomposed signal. In embodiments, the renderer 120 may be implemented in software, for example, as a program stored in a memory to be run on a processor or a digital signal processor which, in turn, is adapted for rendering the decomposed signals sequentially.
The renderer 120 can be adapted for decorrelating the first decomposed signal to obtain a first decorrelated signal and/or for decorrelating the second decomposed signal to obtain a second decorrelated signal. In other words, the renderer 120 may be adapted for decorrelating both decomposed signals, however, using different decorrelation or rendering characteristics. In embodiments, the renderer 120 may be adapted for applying amplitude panning to either one of the first or second decomposed signals instead of or in addition to decorrelation.
The renderer 120 may be adapted for rendering the first and second rendered signals each having as many components as channels in the spatial output multi-channel audio signal, and the processor 130 may be adapted for combining the components of the first and second rendered signals to obtain the spatial output multi-channel audio signal. In other embodiments, the renderer 120 can be adapted for rendering the first and second rendered signals each having fewer components than the spatial output multi-channel audio signal, and wherein the processor 130 can be adapted for up-mixing the components of the first and second rendered signals to obtain the spatial output multi-channel audio signal.
Fig. 1b shows another embodiment of an apparatus 100, comprising similar components as were introduced with the help of Fig. 1a. However, Fig. 1b shows an embodiment having more details. Fig. 1b shows a decomposer 110 receiving the input audio signal and optionally the input parameter. As can be seen from Fig. 1b, the decomposer is adapted for providing a first decomposed signal and a second decomposed signal to a renderer 120, which is indicated by the dashed lines. In the embodiment shown in Fig. 1b, it is assumed that the first decomposed signal corresponds to a point-like audio source as the first semantic property and that the renderer 120 is adapted for applying amplitude panning as the first rendering characteristic to the first decomposed signal. In embodiments, the first and second decomposed signals are exchangeable, i.e. in other embodiments amplitude panning may be applied to the second decomposed signal.
In the embodiment depicted in Fig. 1b, the renderer 120 shows, in the signal path of the first decomposed signal, two scalable amplifiers 121 and 122, which are adapted for amplifying two copies of the first decomposed signal differently. The different amplification factors used may, in embodiments, be determined from the input parameter; in other embodiments, they may be determined from the input audio signal, may be preset or may be locally generated, possibly also referring to a user input. The outputs of the two scalable amplifiers 121 and 122 are provided to the processor 130, for which details will be provided below.
As can be seen from Fig. 1b, the decomposer 110 provides a second decomposed signal to the renderer 120, which carries out a different rendering in the processing path of the second decomposed signal. In other embodiments, the first decomposed signal may be processed in the presently described path as well or instead of the second decomposed signal. The first and second decomposed signals can be exchanged in embodiments.
In the embodiment depicted in Fig. 1b, in the processing path of the second decomposed signal, there is a decorrelator 123 followed by a rotator or parametric stereo or up-mix module 124 as second rendering characteristic. The decorrelator 123 can be adapted for decorrelating the second decomposed signal X[k] and for providing a decorrelated version Q[k] of the second decomposed signal to the parametric stereo or up-mix module 124. In Fig. 1b, the mono signal X[k] is fed into the decorrelator unit "D" 123 as well as the up-mix module 124. The decorrelator unit 123 may create the decorrelated version Q[k] of the input signal, having the same frequency characteristics and the same long term energy. The up-mix module 124 may calculate an up-mix matrix based on the spatial parameters and synthesize the output channels Y1[k] and Y2[k]. The up-mix module can be explained according to

$$\begin{bmatrix} Y_1[k] \\ Y_2[k] \end{bmatrix} = \begin{bmatrix} c_l & 0 \\ 0 & c_r \end{bmatrix} \begin{bmatrix} \cos(\alpha+\beta) & \sin(\alpha+\beta) \\ \cos(-\alpha+\beta) & \sin(-\alpha+\beta) \end{bmatrix} \begin{bmatrix} X[k] \\ Q[k] \end{bmatrix},$$
with the parameters cl, cr, α and β being constants, or time- and frequency-variant values estimated from the input signal X[k] adaptively, or transmitted as side information along with the input signal X[k] in the form of e.g. ILD (ILD = Inter-channel Level Difference) parameters and ICC (ICC = Inter-channel Correlation) parameters. The signal X[k] is the received mono signal, the signal Q[k] is the decorrelated signal, being a decorrelated version of the input signal X[k]. The output signals are denoted by Y1[k] and Y2[k].
The decorrelator 123 may be implemented as an IIR filter (IIR = Infinite Impulse Response), an arbitrary FIR filter (FIR = Finite Impulse Response) or a special FIR filter using a single tap for simply delaying the signal.
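A minimal sketch of this processing path, assuming a single-tap delay as the special FIR decorrelator and constant parameters cl, cr, α and β in the up-mix equation above; the delay length and parameter values are illustrative only.

```python
import numpy as np

def single_tap_decorrelator(x, delay):
    """The 'special FIR filter using a single tap': Q[k] is a delayed X[k]."""
    q = np.zeros_like(x)
    q[delay:] = x[:-delay]
    return q

def upmix(x, q, cl=1.0, cr=1.0, alpha=np.pi / 4, beta=0.0):
    """Up-mix of mono X[k] and decorrelated Q[k] into Y1[k], Y2[k]; the
    parameters may equally be time/frequency variant or transmitted as
    ILD/ICC side information."""
    y1 = cl * (np.cos(alpha + beta) * x + np.sin(alpha + beta) * q)
    y2 = cr * (np.cos(-alpha + beta) * x + np.sin(-alpha + beta) * q)
    return y1, y2

x = np.random.randn(48_000)              # stand-in second decomposed signal
y1, y2 = upmix(x, single_tap_decorrelator(x, delay=480))
```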
The parameters cl, cr, α and β can be determined in different ways. In some embodiments, they are simply determined by input parameters, which can be provided along with the input audio signal, for example, with the down-mix data as side information. In other embodiments, they may be generated locally or derived from properties of the input audio signal.
In the embodiment shown in Fig. 1b, the renderer 120 is
adapted for providing the second rendered signal in terms
of the two output signals Y1[k] and Y2[k] of the up-mix module
124 to the processor 130.

According to the processing path of the first decomposed signal, the two amplitude-panned versions of the first decomposed signal, available from the outputs of the two scalable amplifiers 121 and 122, are also provided to the processor 130. In other embodiments, the scalable amplifiers 121 and 122 may be present in the processor 130, where only the first decomposed signal and a panning factor may be provided by the renderer 120.
As can be seen in Fig. 1b, the processor 130 can be adapted for processing or combining the first rendered signal and the second rendered signal, in this embodiment simply by combining the outputs in order to provide a stereo signal having a left channel L and a right channel R corresponding to the spatial output multi-channel audio signal of Fig. 1a.
In the embodiment in Fig. 1b, in both signaling paths, the left and right channels for a stereo signal are determined. In the path of the first decomposed signal, amplitude panning is carried out by the two scalable amplifiers 121 and 122; therefore, the two components result in two in-phase audio signals, which are scaled differently. This corresponds to an impression of a point-like audio source as a semantic property or rendering characteristic.
In the signal-processing path of the second decomposed signal, the output signals Y1[k] and Y2[k] are provided to the processor 130 corresponding to left and right channels as determined by the up-mix module 124. The parameters cl, cr, α and β determine the spatial wideness of the corresponding audio source. In other words, the parameters cl, cr, α and β can be chosen in a way or range such that for the L and R channels any correlation between a maximum correlation and a minimum correlation can be obtained in the second signal-processing path as second rendering characteristic. Moreover, this may be carried out independently for different frequency bands. In other words, the parameters cl, cr, α and β can be chosen in a way or range such that the L and R channels are in-phase, modeling a point-like audio source as semantic property. The parameters cl, cr, α and β may also be chosen in a way or range such that the L and R channels in the second signal-processing path are decorrelated, modeling a spatially rather distributed audio source as semantic property, e.g. modeling a background or spatially wider sound source.
Fig. 2 illustrates another embodiment, which is more
general. Fig. 2 shows a semantic decomposition block 210,
which corresponds to the decomposer 110. The output of the
semantic decomposition 210 is the input of a rendering
stage 220, which corresponds to the renderer 120. The
rendering stage 220 is composed of a number of individual
renderers 221 to 22n, i.e. the semantic decomposition stage
210 is adapted for decomposing a mono/stereo input signal
into n decomposed signals, having n semantic properties.
The decomposition can be carried out based on decomposition
controlling parameters, which can be provided along with
the mono/stereo input signal, be preset, be generated
locally or be input by a user, etc.
In other words, the decomposer 110 can be adapted for
decomposing the input audio signal semantically based on
the optional input parameter and/or for determining the
input parameter from the input audio signal.

The output of the decorrelation or rendering stage 220 is
then provided to an up-mix block 230, which determines a
multi-channel output on the basis of the decorrelated or
rendered signals and optionally based on up-mix control
parameters.
Generally, embodiments may separate the sound material into
n different semantic components and decorrelate each
component separately with a matched decorrelator, which are
also labeled D1 to Dn in Fig. 2. In other words, in
embodiments the rendering characteristics can be matched to
the semantic properties of the decomposed signals. Each of
the decorrelators or renderers can be adapted to the
semantic properties of the accordingly-decomposed signal
component. Subsequently, the processed components can be
mixed to obtain the output multi-channel signal. The
different components could, for example, correspond to foreground and background modeling objects.
In other words, the renderer 120 can be adapted for
combining the first decomposed signal and the first
decorrelated signal to obtain a stereo or multi-channel up-
mix signal as the first rendered signal and/or for
combining the second decomposed signal and the second
decorrelated signal to obtain a stereo up-mix signal as the
second rendered signal.
Moreover, the renderer 120 can be adapted for rendering the
first decomposed signal according to a background audio
characteristic and/or for rendering the second decomposed
signal according to a foreground audio characteristic or
vice versa.
Since, for example, applause-like signals can be seen as
composed of single, distinct nearby claps and a noise-like
ambience originating from very dense far-off claps, a
suitable decomposition of such signals may be obtained by
distinguishing between isolated foreground clapping events

as one component and noise-like background as the other
component. In other words, in one embodiment, n=2. In such
an embodiment, for example, the renderer 120 may be adapted
for rendering the first decomposed signal by amplitude
panning of the first decomposed signal. In other words, the
correlation or rendering of the foreground clap component
may, in embodiments, be achieved in D1 by amplitude panning
of each single event to its estimated original location.
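As an illustration, the following is a constant-power stereo panner such as D1 could apply per single event; the azimuth-to-angle mapping and the ±30° range are assumptions of this sketch, not values from the specification.

```python
import numpy as np

def pan_event(event, azimuth_deg, az_range=30.0):
    """Constant-power panning of one foreground event to its estimated
    location; azimuth_deg in [-az_range, +az_range] maps full left..right."""
    theta = (azimuth_deg / az_range + 1.0) * np.pi / 4    # 0..pi/2
    return np.cos(theta) * event, np.sin(theta) * event   # (left, right)

clap = np.random.randn(2048)             # stand-in single clap event
left, right = pan_event(clap, azimuth_deg=-10.0)
```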
In embodiments, the renderer 120 may be adapted for
rendering the first and/or second decomposed signal, for
example, by all-pass filtering the first or second
decomposed signal to obtain the first or second
decorrelated signal.
In other words, in embodiments, the background can be decorrelated or rendered by the use of m mutually independent all-pass filters D2_1...m. In embodiments, only the quasi-stationary background may be processed by the all-pass filters; the temporal smearing effects of the state of the art decorrelation methods can be avoided this way. As amplitude panning may be applied to the events of the foreground object, the original foreground applause density can approximately be restored, as opposed to the state of the art's systems as, for example, presented in J. Breebaart, S. van de Par, A. Kohlrausch, E. Schuijers, "High-Quality Parametric Spatial Audio Coding at Low Bitrates" in AES 116th Convention, Berlin, Preprint 6072, May 2004 and J. Herre, K. Kjorling, J. Breebaart et al., "MPEG Surround - the ISO/MPEG Standard for Efficient and Compatible Multi-Channel Audio Coding" in Proceedings of the 122nd AES Convention, Vienna, Austria, May 2007.
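A sketch of such a bank of m mutually independent all-pass decorrelators, assuming Schroeder all-pass sections; the delays and the gain g are illustrative choices, not values from the specification.

```python
import numpy as np
from scipy.signal import lfilter

def schroeder_allpass(x, delay, g=0.7):
    """One all-pass section H(z) = (-g + z^-delay) / (1 - g z^-delay):
    flat magnitude response, scrambled phase."""
    b = np.zeros(delay + 1); b[0], b[delay] = -g, 1.0
    a = np.zeros(delay + 1); a[0], a[delay] = 1.0, -g
    return lfilter(b, a, x)

background = np.random.randn(48_000)     # stand-in quasi-stationary part
delays = [149, 211, 293, 367]            # mutually different, illustrative
decorrelated = [schroeder_allpass(background, d) for d in delays]
```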
In other words, in embodiments, the decomposer 110 can be
adapted for decomposing the input audio signal semantically
based on the input parameter, wherein the input parameter
may be provided along with the input audio signal as, for
example, a side information. In such an embodiment, the

decomposer 110 can be adapted for determining the input parameter from the input audio signal. In other embodiments, the decomposer 110 can be adapted for determining the input parameter as a control parameter independent from the input audio signal, which may be generated locally, preset, or may also be input by a user.
In embodiments, the renderer 120 can be adapted for obtaining a spatial distribution of the first rendered signal or the second rendered signal by applying a broadband amplitude panning. In other words, according to the description of Fig. 1b above, instead of generating a point-like source, the panning location of the source can be temporally varied in order to generate an audio source having a certain spatial distribution. In embodiments, the renderer 120 can be adapted for applying a locally-generated low-pass noise for amplitude panning, i.e. the scaling factors for the amplitude panning for, for example, the scalable amplifiers 121 and 122 in Fig. 1b correspond to a locally-generated noise value, i.e. are time-varying with a certain bandwidth.
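A sketch of such noise-driven broadband amplitude panning, assuming a second-order Butterworth low-pass as the locally generated noise shaper (the cutoff and normalisation are illustrative choices of this sketch):

```python
import numpy as np
from scipy.signal import butter, lfilter

def lowpass_noise(n, fs, cutoff_hz=10.0, seed=0):
    """Locally generated noise, band-limited so the pan value varies slowly."""
    b, a = butter(2, cutoff_hz / (fs / 2))
    return lfilter(b, a, np.random.default_rng(seed).standard_normal(n))

fs = 48_000
x = np.random.randn(fs)                        # stand-in decomposed signal
pan = lowpass_noise(len(x), fs)
pan = 0.5 + 0.5 * pan / (np.abs(pan).max() + 1e-12)   # normalise to [0, 1]
left = np.sqrt(1.0 - pan) * x                  # time-varying gains of the
right = np.sqrt(pan) * x                       # scalable amplifiers 121/122
```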
Embodiments may be adapted for being operated in a guided or an unguided mode. For example, in a guided scenario, referring to the dashed lines, for example in Fig. 2, the decorrelation can be accomplished by applying standard technology decorrelation filters controlled on a coarse time grid to, for example, the background or ambience part only, and obtaining the correlation by redistribution of each single event in, for example, the foreground part via time-variant spatial positioning using broadband amplitude panning on a much finer time grid. In other words, in embodiments, the renderer 120 can be adapted for operating decorrelators for different decomposed signals on different time grids, e.g. based on different time scales, which may be in terms of different sample rates or different delay for the respective decorrelators. In one embodiment,
carrying out foreground and background separation, the

arrow may as well represent amplitude-panned signals, i.e.
in case of stereo up-mix, already the left and the right
channel. As can be seen from Fig. 3, the up-mix 330
corresponding to the processor 130 is then adapted to
process or combine the background and foreground decomposed
signals to derive the stereo output.
Other embodiments may use native processing in order to
derive background and foreground decomposed signals or
input parameters for decomposition. The decomposer 110 may
be adapted for determining the first decomposed signal
and/or the second decomposed signal based on a transient
separation method. In other words, the decomposer 110 can
be adapted for determining the first or second decomposed
signal based on a separation method and the other
decomposed signal based on the difference between the first
determined decomposed signal and the input audio signal. In
other embodiments, the first or second decomposed signal
may be determined based on the transient separation method
and the other decomposed signal may be based on the
difference between the first or second decomposed signal
and the input audio signal.
The decomposer 110 and/or the renderer 120 and/or the
processor 130 may comprise a DirAC monosynth stage and/or a
DirAC synthesis stage and/or a DirAC merging stage. In
embodiments the decomposer 110 can be adapted for
decomposing the input audio signal, the renderer 120 can be
adapted for rendering the first and/or second decomposed
signals, and/or the processor 130 can be adapted for
processing the first and/or second rendered signals in
terms of different frequency bands.
Embodiments may use the following approximation for applause-like signals. While the foreground components can be obtained by transient detection or separation methods, cf. Pulkki, Ville, "Spatial Sound Reproduction with Directional Audio Coding" in J. Audio Eng. Soc., Vol. 55, No. 6, 2007, the background component may be given by the residual signal. Fig. 4 depicts an example of a suitable method to obtain a background component x'(n) of, for example, an applause-like signal x(n) to implement the semantic decomposition 310 in Fig. 3, i.e. an embodiment of the decomposer 110. Fig. 4 shows a time-discrete input signal x(n), which is input to a DFT 410 (DFT = Discrete Fourier Transform). The output of the DFT block 410 is provided to a block for smoothing the spectrum 420 and to a spectral whitening block 430 for spectral whitening on the basis of the output of the DFT 410 and the output of the smoothed spectrum stage 420.
The output of the spectral whitening stage 430 is then provided to a spectral peak-picking stage 440, which separates the spectrum and provides two outputs, i.e. a noise and transient residual signal and a tonal signal. The noise and transient residual signal is provided to an LPC filter 450 (LPC = Linear Predictive Coding), of which the residual noise signal is provided to the mixing stage 460 together with the tonal signal as output of the spectral peak-picking stage 440. The output of the mixing stage 460 is then provided to a spectral shaping stage 470, which shapes the spectrum on the basis of the smoothed spectrum provided by the smoothed spectrum stage 420. The output of the spectral shaping stage 470 is then provided to the synthesis filter 480, i.e. an inverse discrete Fourier transform, in order to obtain x'(n) representing the background component. The foreground component can then be derived as the difference between the input signal and the output signal, i.e. as x(n) - x'(n).
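The following Python sketch is a strongly simplified stand-in for this chain: it replaces the smoothing, whitening, peak-picking and LPC stages with a temporal median per frequency band, keeps the original phase, and only illustrates the residual idea x(n) - x'(n); it is not the structure of Fig. 4.

```python
import numpy as np

def background_component(x, n_fft=1024, hop=512):
    """Estimate a quasi-stationary magnitude per band (temporal median),
    re-impose it with the original phase and overlap-add back to x'(n)."""
    win = np.hanning(n_fft)
    starts = range(0, len(x) - n_fft, hop)
    frames = np.array([x[i:i + n_fft] * win for i in starts])
    spec = np.fft.rfft(frames, axis=1)
    mag_bg = np.median(np.abs(spec), axis=0)          # "smoothed spectrum"
    spec_bg = mag_bg * np.exp(1j * np.angle(spec))    # spectral shaping
    x_bg, norm = np.zeros(len(x)), np.zeros(len(x))
    for k, i in enumerate(starts):
        x_bg[i:i + n_fft] += np.fft.irfft(spec_bg[k], n=n_fft) * win
        norm[i:i + n_fft] += win ** 2                 # WOLA normalisation
    return x_bg / np.maximum(norm, 1e-8)

x = np.random.randn(48_000)              # stand-in applause-like signal
x_prime = background_component(x)
foreground = x - x_prime                 # foreground as x(n) - x'(n)
```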
Embodiments of the present invention may be operated in virtual reality applications as, for example, 3D gaming. In such applications, the synthesis of sound sources with a large spatial extent may be complicated and complex when based on conventional concepts. Such sources might, for example, be a seashore, a bird flock, galloping horses, the division of marching soldiers, or an applauding audience. Typically, such sound events are spatialized as a large group of point-like sources, which leads to computationally-complex implementations, cf. Wagner, Andreas; Walther, Andreas; Melchior, Frank; Strauß, Michael, "Generation of Highly Immersive Atmospheres for Wave Field Synthesis Reproduction" at 116th International AES Convention, Berlin, 2004.
Embodiments may carry out a method which performs the synthesis of the extent of sound sources plausibly but, at the same time, having a lower structural and computational complexity. Embodiments may be based on DirAC (DirAC = Directional Audio Coding), cf. Pulkki, Ville, "Spatial Sound Reproduction with Directional Audio Coding" in J. Audio Eng. Soc., Vol. 55, No. 6, 2007. In other words, in embodiments, the decomposer 110 and/or the renderer 120 and/or the processor 130 may be adapted for processing DirAC signals. In other words, the decomposer 110 may comprise DirAC monosynth stages, the renderer 120 may comprise a DirAC synthesis stage and/or the processor may comprise a DirAC merging stage.
Embodiments may be based on DirAC processing, for example, using only two synthesis structures, for example, one for foreground sound sources and one for background sound sources. The foreground sound may be applied to a single DirAC stream with controlled directional data, resulting in the perception of nearby point-like sources. The background sound may also be reproduced by using a single DirAC stream with differently-controlled directional data, which leads to the perception of spatially-spread sound objects. The two DirAC streams may then be merged and decoded for an arbitrary loudspeaker set-up or for headphones, for example.
Fig. 5 illustrates a synthesis of sound sources having a spatially-large extent. Fig. 5 shows an upper monosynth block 610, which creates a mono-DirAC stream leading to a perception of a nearby point-like sound source, such as the nearest clappers of an audience. The lower monosynth block 620 is used to create a mono-DirAC stream leading to the perception of spatially-spread sound, which is, for example, suitable to generate background sound as the clapping sound from the audience. The outputs of the two DirAC monosynth blocks 610 and 620 are then merged in the DirAC merge stage 630. Fig. 5 shows that only two DirAC synthesis blocks 610 and 620 are used in this embodiment. One of them is used to create the sound events which are in the foreground, such as closest or nearby birds or closest or nearby persons in an applauding audience, and the other generates a background sound, the continuous bird flock sound, etc.
The foreground sound is converted into a mono-DirAC stream with the DirAC monosynth block 610 in a way that the azimuth data is kept constant with frequency, however, changed randomly or controlled by an external process in time. The diffuseness parameter ψ is set to 0, i.e. representing a point-like source. The audio input to the block 610 is assumed to be temporally non-overlapping sounds, such as distinct bird calls or hand claps, which generate the perception of nearby sound sources, such as birds or clapping persons. The spatial extent of the foreground sound events is controlled by adjusting θ and θ_range_foreground, which means that individual sound events will be perceived in θ ± θ_range_foreground directions; however, a single event may be perceived as point-like. In other words, point-like sound sources are generated where the possible positions of the point are limited to the range θ ± θ_range_foreground.
The background block 620 takes as input audio stream a signal which contains all other sound events not present in the foreground audio stream, which is intended to include lots of temporally overlapping sound events, for example hundreds of birds or a great number of far-away clappers. The attached azimuth values are then set random both in time and frequency, within the given constraint azimuth values θ ± θ_range_background. The spatial extent of the background sounds can thus be synthesized with low computational complexity. The diffuseness ψ may also be controlled. If it were added, the DirAC decoder would apply the sound to all directions, which can be used when the sound source surrounds the listener totally. If it does not surround, diffuseness may be kept low or close to zero, or zero in embodiments.
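A sketch of the directional side information the two monosynth blocks 610 and 620 could generate, assuming uniformly distributed azimuth values; the time/frequency grid and the background diffuseness value are illustrative assumptions.

```python
import numpy as np

def dirac_metadata(n_frames, n_bands, theta=0.0, theta_range=20.0,
                   foreground=True, seed=0):
    """Azimuth per time/frequency tile plus a diffuseness value psi.

    Foreground (block 610): azimuth constant over frequency, random over
    time within theta +/- theta_range; psi = 0 (point-like).
    Background (block 620): azimuth random in both time and frequency
    within the constraint range; psi may be raised for enveloping sound."""
    rng = np.random.default_rng(seed)
    if foreground:
        az_t = theta + rng.uniform(-theta_range, theta_range, n_frames)
        azimuth = np.repeat(az_t[:, None], n_bands, axis=1)
        psi = 0.0
    else:
        azimuth = theta + rng.uniform(-theta_range, theta_range,
                                      (n_frames, n_bands))
        psi = 0.1        # illustrative low diffuseness, not a given value
    return azimuth, psi

fg_az, fg_psi = dirac_metadata(100, 32, foreground=True)
bg_az, bg_psi = dirac_metadata(100, 32, theta_range=60.0, foreground=False)
```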
Embodiments of the present invention can provide the advantage that a superior perceptual quality of rendered sounds can be achieved at moderate computational cost. Embodiments may enable a modular implementation of spatial sound rendering as, for example, shown in Fig. 5.
Depending on certain implementation requirements, the inventive methods can be implemented in hardware or in software. The implementation can be performed using a digital storage medium and, particularly, a flash memory, a disc, a DVD or a CD having electronically-readable control signals stored thereon, which co-operate with a programmable computer system such that the inventive methods are performed. Generally, the present invention is, therefore, a computer-program product with a program code stored on a machine-readable carrier, the program code being operative for performing the inventive methods when the computer-program product runs on a computer. In other words, the inventive methods are, therefore, a computer program having a program code for performing at least one of the inventive methods when the computer program runs on a computer.

We Claim:
1. An apparatus (100) for determining a spatial output multi-
channel audio signal based on an input audio signal,
comprising:
a semantic decomposer (110) configured for decomposing the
input audio signal to obtain a first decomposed signal having
a first semantic property, the first decomposed signal being a
foreground signal part, and a second decomposed signal having
a second semantic property being different from the first
semantic property, the second decomposed signal being a
background signal part;
a renderer (120) for rendering the first decomposed signal
using a first rendering characteristic to obtain a first
rendered signal having the first semantic property and for
rendering the second decomposed signal using a second
rendering characteristic to obtain a second rendered signal
having the second semantic property, wherein the first
rendering characteristic and the second rendering
characteristic are different from each other,
wherein the renderer (120) comprises a first DirAC monosynth
stage (610) for rendering the foreground signal part, the
first DirAC monosynth stage (610) being configured for
creating a first mono-DirAC stream leading to a perception of
a nearby point-like source, and a second DirAC monosynth stage
(620) for rendering the background signal part, the second
DirAC monosynth stage (620) being configured for creating a
mono-DirAC stream leading to a perception of spatially-spread
sound; and
a processor (130) for processing the first rendered signal and
the second rendered signal to obtain the spatial output multi-
channel audio signal, wherein the processor (130) comprises a
DirAC merging stage (630) for merging the first mono-DirAC
stream and the second mono-DirAC stream.

2. The apparatus of claim 1, in which the first DirAC monosynth
stage (610) is configured so that azimuth data is kept
constant with frequency and changed randomly or controlled by
an external process in time within a controlled azimuth range,
and a diffuseness parameter is set to zero, and
in which the second DirAC monosynth stage (620) is configured
so that azimuth data are set random in time and frequency
within given constraint azimuth values.
3. A method for determining a spatial output multi-channel audio
signal based on an input audio signal and an input parameter
comprising the steps of:
semantically decomposing the input audio signal to obtain a
first decomposed signal having a first semantic property, the
first decomposed signal being a foreground signal part, and a
second decomposed signal having a second semantic property
being different from the first semantic property, the second
decomposed signal being a background signal part;
rendering the first decomposed signal using a first rendering
characteristic to obtain a first rendered signal having the
first semantic property by processing the first decomposed
signal in a first DirAC monosynth stage (610), the first DirAC
monosynth stage (610) being configured for creating a first
mono-DirAC stream leading to a perception of a nearby point-
like source;
rendering the second decomposed signal using a second
rendering characteristic to obtain a second rendered signal
having the second semantic property by processing the second
decomposed signal in a second DirAC monosynth stage (620), the
second DirAC monosynth stage (620) being configured for
creating a mono-DirAC stream leading to a perception of
spatially-spread sound; and
processing the first rendered signal and the second rendered
signal to obtain the spatial output multi-channel audio signal

by using a DirAC merging stage (630) for merging the first
mono-DirAC stream and the second mono-DirAC stream.
4. The method of claim 3, in which, in the first DirAC monosynth
stage (610), azimuth data is kept constant with frequency and
changed randomly or controlled by an external process in time,
within a controlled azimuth range, and a diffuseness parameter
is set to zero, and
in which, in the second DirAC monosynth stage (620), azimuth
data is set random in time and frequency within given
constraint azimuth values.
5. Computer program having a program code for performing the
method of claim 3, when the program code runs on a computer or
a processor.

ABSTRACT

An apparatus (100) for determining a spatial output multi-channel
audio signal based on an input audio signal and an
input parameter. The apparatus (100) comprises a decomposer
(110) for decomposing the input audio signal based on the
input parameter to obtain a first decomposed signal and a
second decomposed signal different from each other.
Furthermore, the apparatus (100) comprises a renderer (120)
for rendering the first decomposed signal to obtain a first
rendered signal having a first semantic property and for
rendering the second decomposed signal to obtain a second
rendered signal having a second semantic property being
different from the first semantic property. The apparatus
(100) comprises a processor (130) for processing the first
rendered signal and the second rendered signal to obtain
the spatial output multi-channel audio signal.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 4526-KOLNP-2011-(03-11-2011)-SPECIFICATION.pdf 2011-11-03
2 4526-KOLNP-2011-RELEVANT DOCUMENTS [08-09-2023(online)].pdf 2023-09-08
3 4526-KOLNP-2011-(03-11-2011)-FORM-5.pdf 2011-11-03
4 4526-KOLNP-2011-RELEVANT DOCUMENTS [08-09-2022(online)].pdf 2022-09-08
5 4526-KOLNP-2011-IntimationOfGrant10-09-2020.pdf 2020-09-10
6 4526-KOLNP-2011-(03-11-2011)-FORM-3.pdf 2011-11-03
7 4526-KOLNP-2011-PatentCertificate10-09-2020.pdf 2020-09-10
8 4526-KOLNP-2011-(03-11-2011)-FORM-2.pdf 2011-11-03
9 4526-KOLNP-2011-AMMENDED DOCUMENTS [08-09-2020(online)].pdf 2020-09-08
10 4526-KOLNP-2011-(03-11-2011)-FORM-1.pdf 2011-11-03
11 4526-KOLNP-2011-Annexure [08-09-2020(online)].pdf 2020-09-08
12 4526-KOLNP-2011-(03-11-2011)-DRAWINGS.pdf 2011-11-03
13 4526-KOLNP-2011-FORM 13 [08-09-2020(online)].pdf 2020-09-08
14 4526-KOLNP-2011-(03-11-2011)-DESCRIPTION (COMPLETE).pdf 2011-11-03
15 4526-KOLNP-2011-Written submissions and relevant documents [08-09-2020(online)].pdf 2020-09-08
16 4526-KOLNP-2011-(03-11-2011)-CORRESPONDENCE.pdf 2011-11-03
17 4526-KOLNP-2011-(03-11-2011)-CLAIMS.pdf 2011-11-03
18 4526-KOLNP-2011-FORM-26 [22-08-2020(online)].pdf 2020-08-22
19 4526-KOLNP-2011-(03-11-2011)-ABSTRACT.pdf 2011-11-03
20 4526-KOLNP-2011-Correspondence to notify the Controller [20-08-2020(online)].pdf 2020-08-20
21 4526-KOLNP-2011-Information under section 8(2) [08-08-2020(online)].pdf 2020-08-08
22 ABSTRACT-4526-KOLNP-2011.jpg 2011-12-22
23 4526-KOLNP-2011-(26-03-2012)-PA.pdf 2012-03-26
24 4526-KOLNP-2011-US(14)-HearingNotice-(HearingDate-24-08-2020).pdf 2020-07-23
25 4526-KOLNP-2011-(26-03-2012)-CORRESPONDENCE.pdf 2012-03-26
26 4526-KOLNP-2011-Information under section 8(2) (MANDATORY) [14-09-2019(online)].pdf 2019-09-14
27 4526-KOLNP-2011-FORM-18.pdf 2012-05-01
28 4526-KOLNP-2011-Information under section 8(2) (MANDATORY) [09-05-2019(online)].pdf 2019-05-09
29 4526-KOLNP-2011-Information under section 8(2) (MANDATORY) [17-08-2018(online)].pdf 2018-08-17
30 Other Patent Document [12-08-2016(online)].pdf 2016-08-12
31 4526-KOLNP-2011-CLAIMS [21-03-2018(online)].pdf 2018-03-21
32 Other Patent Document [18-02-2017(online)].pdf 2017-02-18
33 4526-KOLNP-2011-Information under section 8(2) (MANDATORY) [12-08-2017(online)].pdf 2017-08-12
34 4526-KOLNP-2011-COMPLETE SPECIFICATION [21-03-2018(online)].pdf 2018-03-21
35 4526-KOLNP-2011-CORRESPONDENCE [21-03-2018(online)].pdf 2018-03-21
36 4526-KOLNP-2011-FER.pdf 2017-09-27
37 4526-KOLNP-2011-FER_SER_REPLY [21-03-2018(online)].pdf 2018-03-21
38 4526-KOLNP-2011-Information under section 8(2) (MANDATORY) [04-01-2018(online)]_19.pdf 2018-01-04
39 4526-KOLNP-2011-Information under section 8(2) (MANDATORY) [04-01-2018(online)].pdf 2018-01-04
40 4526-KOLNP-2011-OTHERS [21-03-2018(online)].pdf 2018-03-21
41 4526-KOLNP-2011-PETITION UNDER RULE 137 [21-03-2018(online)].pdf 2018-03-21
42 4526-KOLNP-2011-PETITION UNDER RULE 137 [21-03-2018(online)]_31.pdf 2018-03-21

Search Strategy

1 SearchStrategy_23-08-2017.pdf

ERegister / Renewals

3rd: 19 Nov 2020

From 11/08/2011 - To 11/08/2012

4th: 19 Nov 2020

From 11/08/2012 - To 11/08/2013

5th: 19 Nov 2020

From 11/08/2013 - To 11/08/2014

6th: 19 Nov 2020

From 11/08/2014 - To 11/08/2015

7th: 19 Nov 2020

From 11/08/2015 - To 11/08/2016

8th: 19 Nov 2020

From 11/08/2016 - To 11/08/2017

9th: 19 Nov 2020

From 11/08/2017 - To 11/08/2018

10th: 19 Nov 2020

From 11/08/2018 - To 11/08/2019

11th: 19 Nov 2020

From 11/08/2019 - To 11/08/2020

12th: 19 Nov 2020

From 11/08/2020 - To 11/08/2021

13th: 30 Jul 2021

From 11/08/2021 - To 11/08/2022

14th: 01 Aug 2022

From 11/08/2022 - To 11/08/2023

15th: 28 Jul 2023

From 11/08/2023 - To 11/08/2024

16th: 26 Jul 2024

From 11/08/2024 - To 11/08/2025

17th: 11 Aug 2025

From 11/08/2025 - To 11/08/2026