
Apparatus And Method For Enhancing An Audio Signal, Sound Enhancing System

Abstract: An apparatus for enhancing an audio signal comprises a signal processor for processing the audio signal in order to reduce or eliminate transient and tonal portions of the processed signal and a decorrelator for generating a first decorrelated signal and a second decorrelated signal from the processed signal. The apparatus further comprises a combiner for weightedly combining the first and the second decorrelated signal and the audio signal or a signal derived from the audio signal by coherence enhancement using time variant weighting factors to obtain a two-channel audio signal. The apparatus further comprises a controller for controlling the time variant weighting factors by analyzing the audio signal so that different portions of the audio signal are multiplied by different weighting factors and the two-channel audio signal has a time variant degree of decorrelation.


Patent Information

Application #:
Filing Date: 19 January 2017
Publication Number: 22/2017
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-11-10
Renewal Date:

Applicants

FRAUNHOFER GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
Hansastraße 27c 80686 München

Inventors

1. UHLE Christian
Hoher Rain 28 92289 Ursensollen
2. GAMPP Patrick
Eskilstunastraße 29 91054 Erlangen
3. HELLMUTH Oliver
Am Ruhstein 29 91054 Budenhof
4. VARGA Stefan
Kirchenweg 8A 90419 Nürnberg
5. SCHARRER Sebastian
Arzbergweg 39 91217 Hersbruck

Specification

The present application is related to audio signal processing and particularly to audio
processing of a mono or dual-mono signal.
An auditory scene can be modeled as a mixture of direct and ambient sounds. Direct (or
directional) sounds are emitted by sound sources, e.g. a musical instrument, a vocalist or
a loudspeaker and arrive on the shortest possible path at the receiver, e.g. the listener's
ear or a microphone. When capturing a direct sound using a set of spaced microphones,
the received signals are coherent. In contrast, ambient (or diffuse) sounds are emitted by
many spaced sound sources or sound reflecting boundaries that contribute to, for
example, room reverberation, applause or a babble noise. When capturing an ambient
sound field using a set of spaced microphones, the received signals are at least partially
incoherent.
Monophonic sound reproduction can be considered appropriate in some reproduction
scenarios (e.g. dance clubs) or for some types of signals (e.g. speech recordings), but the
majority of musical recordings, movie sound and TV sound are stereophonic signals.
Stereophonic signals can create the sensation of ambient (or diffuse) sounds and of the
directions and widths of sound sources. This is achieved by means of stereophonic
information that is encoded by spatial cues. The most important spatial cues are inter-channel
level differences (ICLD), inter-channel time differences (ICTD) and inter-channel
coherence (ICC). Consequently, stereophonic signals and the corresponding sound
reproduction systems have more than one channel. ICLD and ICTD contribute to the
sensation of a direction. ICC evokes the sensation of width of a sound and, in the case of
ambient sounds, that a sound is perceived as coming from all directions.
Although multichannel sound reproduction in various formats exists, the majority of audio
recordings and sound reproduction systems still have two channels. Two-channel
stereophonic sound is the standard for entertainment systems, and the listeners are used
to it. However, stereophonic signals are not restricted to have only two channel signals but
can have more than two channel signals. Similarly, monophonic signals are not restricted
to have only one channel signal, but can have multiple but identical channel signals. For
example, an audio signal comprising two identical channel signals may be called a dual-mono
signal.
There are various reasons why monophonic signals instead of stereophonic signals are
available to the listener. First, old recordings are monophonic because stereophonic
techniques were not used at that time. Secondly, restrictions of the bandwidth of a
transmission or storage medium can lead to a loss of stereophonic information. A
prominent example is radio broadcasting using frequency modulation (FM). Here,
interfering sources, multipath distortions or other impairments of the transmission can lead
to noisy stereophonic information, which, for the transmission of two-channel signals, is
typically encoded as the difference signal between both channels. It is common practice to
partially or completely discard the stereophonic information when the reception conditions
are poor.
The loss of stereophonic information may lead to a reduction of sound quality. In general,
an audio signal comprising a higher number of channels may comprise a higher sound
quality when compared to an audio signal comprising a lower number of channels.
Listeners may prefer to listen to audio signals comprising a high sound quality. For
efficiency reasons, such as limiting the data rates transmitted over or stored in media, sound quality is
often reduced.
Therefore, there exists a need for increasing (enhancing) sound quality of audio signals.
An object of the present invention, therefore, is to provide an apparatus or a method for an
enhancement of audio signals and/or to increase sensation of reproduced audio signals.
This object is achieved by an apparatus for enhancing an audio signal in accordance with
claim 1, a method for enhancing an audio signal in accordance with claim 14 and a sound
enhancing system in accordance with claim 13 or a computer program in accordance with
claim 15.
The present invention is based on the finding that a received audio signal may be
enhanced by artificially generating spatial cues by splitting the received audio signals into
at least two shares and by decorrelating at least one of the shares of the received signal.
A weighted combination of the shares allows for obtaining an audio signal perceived as
stereophonic, which is therefore enhanced. Controlling the applied weights allows for a
variant degree of decorrelation and therefore a variant degree of enhancement such that a
level of enhancement may be low when the decorrelation may lead to annoying effects
that reduce sound quality. Thus, a variant audio signal may be enhanced comprising
portions or time intervals where low or no decorrelation is applied such as for speech
signals and comprising portions or time intervals where more or a high degree of
decorrelation is applied such as for music signals.
An embodiment of the present invention provides an apparatus for enhancing an audio
signal. The apparatus comprises a signal processor for processing the audio signal in
order to reduce or eliminate transient and tonal portions of the processed signal. The
apparatus further comprises a decorrelator for generating a first decorrelated signal and a
second decorrelated signal from the processed signal. The apparatus further comprises a
combiner and a controller. The combiner is configured to weightedly combine the first
decorrelated signal, the second decorrelated signal and the audio signal or a signal
derived from the audio signal by coherence enhancement using time variant weighting
factors to obtain a two-channel audio signal. The controller is configured to control the
time variant weighting factors by analyzing the audio signal so that different portions of the
audio signal are multiplied by different weighting factors and the two-channel audio signal
has a time variant degree of decorrelation.
The audio signal having little or no stereophonic (or multichannel) information, e.g., a
signal having one channel or a signal having multiple but almost identical channel signals,
may be perceived as a multichannel, e.g., a stereophonic signal, after the enhancement
has been applied. A received mono or dual-mono audio signal may be processed
differently in different paths, wherein in one path transient and/or tonal portions of the
audio signal are reduced or eliminated. A signal processed in such a way being
decorrelated and the decorrelated signal being weightedly combined with the second path
comprising the audio signal or a signal derived thereof allows for obtaining two signal
channels that may comprise a high decorrelation factor with respect to each other such
that the two channels are perceived as a stereophonic signal.
By controlling the weighting factors used for weightedly combining the decorrelated signal
and the audio signal (or the signal derived thereof) a time variant degree of decorrelation
may be obtained such that in situations, in which enhancing the audio signal would
possibly lead to unwanted effects, enhancing may be reduced or skipped. For example, a
signal of a radio speaker or other prominent sound source signals should not be
enhanced, as perceiving a speaker from multiple source locations might lead to
annoying effects for a listener.
According to a further embodiment, an apparatus for enhancing an audio signal comprises
a signal processor for processing the audio signal in order to reduce or eliminate transient
and tonal portions of the processed signal. The apparatus further comprises a
decorrelator, a combiner and a controller. The decorrelator is configured to generate a first
decorrelated signal and a second decorrelated signal from the processed signal. The
combiner is configured to weightedly combine the first decorrelated signal and the audio
signal or a signal derived from the audio signal by coherence enhancement using time
variant weighting factors and to obtain a two-channel audio signal. The controller is
configured to control the time variant weighting factors by analyzing the audio signal so
that different portions of the audio signal are multiplied by different weighting factors and
the two-channel audio signal has a time variant degree of decorrelation. This allows for
perceiving a mono signal or a signal similar to a mono signal (such as dual-mono or multi-mono)
as being a stereo-channel audio signal.
For processing the audio signal, the controller and/or the signal processor may be
configured to process a representation of the audio signal in the frequency domain. The
representation may comprise a plurality or a multitude of frequency bands (subbands),
each comprising a part, i.e., a portion, of the spectrum of the audio signal.
For each of the frequency bands, the controller may be configured to
predict a perceived level of decorrelation in the two-channel audio signal. The controller
may further be configured to increase the weighting factors for portions (frequency bands)
of the audio signal allowing a higher degree of decorrelation and to decrease the
weighting factors for portions of the audio signal allowing a lower degree of decorrelation.
For example, a portion comprising a non-prominent sound source signal such as applause
or babble noise may be combined using a weighting factor that allows for a higher
decorrelation than a portion that comprises a prominent sound source signal, wherein the
term prominent sound source signal is used for portions of the signal that are perceived as
direct sounds, for example speech, a musical instrument, a vocalist or a loudspeaker.
The processor may be configured to determine, for each of some or all of the frequency
bands, if the frequency band comprises transient or tonal components and to determine
spectral weightings that allow for a reduction of the transient or tonal portions. The
spectral weights and the scaling factors may each comprise a multitude of possible values
such that annoying effects due to binary decisions may be reduced and/or avoided.
The controller may further be configured to scale the weighting factors such that a
perceived level of decorrelation in the two-channel audio signal remains within a range
around a target value. The range may extend, for example to ± 20%, ± 10% or ± 5% of the
target value. The target value may be, for example, a previously determined value for a
measure of the tonal and/or transient portion such that, for example, for an audio signal
comprising varying transient and tonal portions, varying target values are obtained. This
allows for performing little or even no decorrelation when the audio signal is already decorrelated
or when no decorrelation is aimed at, such as for prominent sound source signals like speech, and
for a high decorrelation if the signal is not yet decorrelated and/or decorrelation is aimed at. The
weighting factors and/or the spectral weights may be determined and/or adjusted to
multiple values or even almost continuously.
The decorrelator may be configured to generate the first decorrelated signal based on a
reverberation or a delay of the audio signal. The controller may be configured to generate
the test decorrelated signal also based on a reverberation or a delay of the audio signal. A
reverberation may be performed by delaying the audio signal and by combining the audio
signal and the delayed version thereof, similar to a finite impulse response filter structure,
wherein the reverberation may also be implemented as an infinite impulse response filter.
A delay time and/or a number of delays and combinations may vary. A delay time used for
delaying or reverberating the audio signal for the test decorrelated signal may be shorter
than the delay time used for delaying or reverberating the audio signal for the first
decorrelated signal, resulting, for example, in fewer filter coefficients of the delay filter. For predicting the perceived
intensity of decorrelation, a lower degree of decorrelation and thus a shorter delay time
may be sufficient such that by reducing the delay time and/or the filter coefficients a
computational effort and/or a computational power may be reduced.
Subsequently, preferred embodiments of the present invention are described with respect
to the accompanying drawings, in which:
Fig. 1 shows a schematic block diagram of an apparatus for enhancing an audio
signal;
Fig. 2 shows a schematic block diagram of a further apparatus for enhancing the
audio signal;
Fig. 3 shows an exemplary table indicating a computing of the scaling factors
(weighting factors) based on the level of the predicted perceived intensity of
decorrelation;
Fig. 4a shows a schematic flowchart of a part of a method that may be executed, for
partially determining weighting factors;
Fig. 4b shows a schematic flowchart of further steps of the method of Fig. 4a,
depicting a case where the measure for the perceived level of decorrelation is
compared to the threshold values;
Fig. 5 shows a schematic block diagram of a decorrelator that may be configured to
operate as the decorrelator in Fig. 1;
Fig. 6a shows a schematic diagram comprising a spectrum of an audio signal
comprising at least one transient (short-time) signal portion;
Fig. 6b shows a schematic spectrum of an audio signal comprising a tonal
component;
Fig. 7a shows a schematic table illustrating a possible transient processing performed
by a transient processing stage;
Fig. 7b shows an exemplary table that illustrates a possible tonal processing as it may
be executed by a tonal processing stage;
Fig. 8 shows a schematic block diagram of a sound enhancing system comprising an
apparatus for enhancing the audio signal;
Fig. 9a shows a schematic block diagram of a processing of the input signal according
to a foreground/background processing;
Fig. 9b illustrates the separation of the input signal into a foreground and a
background signal;
Fig. 10 shows a schematic block diagram of an apparatus configured to apply
spectral weights to an input signal;
Fig. 11 shows a schematic flowchart of a method for enhancing an audio signal;
Fig. 12 illustrates an apparatus for determining a measure for a perceived level of
reverberation/decorrelation in a mix signal comprising a direct signal
component or dry signal component and a reverberation signal component;
Fig. 13a-c show implementations of a loudness model processor; and
Fig. 14 illustrates an implementation of the loudness model processor which has
already been discussed in some aspects with respect to Figs. 2, 13a,
13b, 13c.
Equal or equivalent elements or elements with equal or equivalent functionality are
denoted in the following description by equal or equivalent reference numerals even if
occurring in different figures.
In the following description, a plurality of details is set forth to provide a more thorough
explanation of embodiments of the present invention. However, it will be apparent to those
skilled in the art that embodiments of the present invention may be practiced without these
specific details. In other instances, well known structures and devices are shown in block
diagram form rather than in detail in order to avoid obscuring embodiments of the present
invention. In addition, features of the different embodiments described hereinafter may be
combined with each other, unless specifically noted otherwise.
In the following, reference will be made to processing an audio signal. An apparatus or a
component thereof may be configured to receive, provide and/or process an audio signal.
The respective audio signal may be received, provided or processed in the time domain
and/or the frequency domain. An audio signal representation in the time domain may be
transformed into a frequency representation of the audio signal for example by Fourier
transformations or the like. The frequency representation may be obtained, for example,
by using a Short-Time Fourier transform (STFT), a discrete cosine transform and/or a Fast
Fourier transform (FFT). Alternatively or in addition, the frequency representation may be
obtained by a filterbank which may comprise Quadrature Mirror Filters (QMF). A
frequency domain representation of the audio signal may comprise a plurality of frames
each comprising a plurality of subbands as it is known from Fourier transformations. Each
subband comprises a portion of the audio signal. As the time representation and the
frequency representation of the audio signal may be converted one into the other, the
following description shall not be limited to the audio signal being the time domain
representation or the frequency domain representation.
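For illustration only, the following Python sketch (not part of the described apparatus; the sampling rate, frame length and the use of numpy/scipy are assumptions) shows how such a frame-wise subband representation may be obtained and inverted:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 44100                      # assumed sampling rate in Hz
x = np.random.randn(fs)         # placeholder 1-second mono audio signal

# Each row of X is one frequency band (subband), each column one time frame.
f, t, X = stft(x, fs=fs, nperseg=1024, noverlap=512)

# ... per-band, per-frame processing of X[band, frame] would go here ...

# The (processed) spectrum can be converted back into a time domain signal.
_, x_rec = istft(X, fs=fs, nperseg=1024, noverlap=512)
```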
Fig. 1 shows a schematic block diagram of an apparatus 10 for enhancing an audio signal
102. The audio signal 102 is, for example, a mono signal or a mono-like signal, such as a
dual-mono signal, represented in the frequency domain or the time domain. The
apparatus 10 comprises a signal processor 110, a decorrelator 120, a controller 130 and a
combiner 140. The signal processor 110 is configured for receiving the audio signal 102
and for processing the audio signal 102 to obtain a processed signal 112 in order to
reduce or eliminate transient and tonal portions of the processed signal 112 when
compared to the audio signal 102.
The decorrelator 120 is configured for receiving the processed signal 112 and for
generating a first decorrelated signal 122 and a second decorrelated signal 124 from the
processed signal 112. The decorrelator 120 may be configured for generating the first
decorrelated signal 122 and the second decorrelated signal 124 at least partially by
reverberating the processed signal 112. The first decorrelated signal 122 and the second
decorrelated signal 124 may comprise different time delays for the reverberation such that
the first decorrelated signal 122 comprises a shorter or longer time delay (reverberation
time) than the second decorrelated signal 124. The first or second decorrelated signal 122
or 124 may also be processed without a delay or reverberation filter.
The decorrelator 120 is configured to provide the first decorrelated signal 122 and the
second decorrelated signal 124 to the combiner 140. The controller 130 is configured to
receive the audio signal 102 and to control time variant weighting factors a and b by
analyzing the audio signal 102 so that different portions of the audio signal 102 are
multiplied by different weighting factors a or b. Therefore, the controller 130 comprises a
controlling unit 132 configured to determine the weighting factors a and b. The controller
130 may be configured to operate in the frequency domain. The controlling unit 132 may
be configured to transform the audio signal 102 into the frequency domain by using a
Short-Time Fourier transform (STFT), a Fast Fourier transform (FFT) and/or a regular
Fourier transform (FT). A frequency domain representation of the audio signal 102 may
comprise a plurality of subbands as it is known from Fourier transformations. Each
subband comprises a portion of the audio signal. Alternatively, the audio signal 102 may
be a representation of a signal in the frequency domain. The controlling unit 132 may be
configured to control and/or to determine a pair of weighting factors a and b for each
subband of the digital representation of the audio signal.
The combiner is configured for weightedly combining the first decorrelated signal 122, the
second decorrelated signal 124 and a signal 136 derived from the audio signal 102 using the
weighting factors a and b. The signal 136 derived from the audio signal 102 may be
provided by the controller 130. Therefore, the controller 130 may comprise an optional
deriving unit 134. The deriving unit 134 may be configured, for example, to adapt, modify
or enhance portions of the audio signal 102. Particularly, the deriving unit 134 may be
configured to amplify portions of the audio signal 102 that are attenuated, reduced or
eliminated by the signal processor 110.
The signal processor 110 may be configured to also operate in the frequency domain and
to process the audio signal 102 such that the signal processor 110 reduces or eliminates
transient and tonal portions for each subband of a spectrum of the audio signal 102. This
may lead to less or even no processing for subbands comprising little or non-transient or
little or non-tonal (i.e. noisy) portions. Alternatively, the combiner 140 may receive the
audio signal 102 instead of the derived signal, i.e., the controller 130 can be implemented
without the deriving unit 134. Then, the signal 136 may be equal to the audio signal 102.
The combiner 140 is configured to receive a weighting signal 138 comprising the
weighting factors a and b. The combiner 140 is further configured to obtain an output
audio signal 142 comprising a first channel y1 and a second channel y2, i.e., the audio
signal 142 is a two-channel audio signal.
The signal processor 110, the decorrelator 120, the controller 130 and the combiner 140
may be configured to process the audio signal 102, the signal 136 derived thereof and/or
processed signals 112, 122 and/or 124 frame-wise and subband-wise such that the signal
processor 110, the decorrelator 120, the controller 130 and the combiner 140 may be
configured to execute above described operations to each frequency band by processing
one or more frequency bands (portions of the signal) at a time.
Fig. 2 shows a schematic block diagram of an apparatus 200 for enhancing the audio
signal 102. The apparatus 200 comprises a signal processor 210, the decorrelator 120, a
controller 230 and a combiner 240. The decorrelator 120 is configured to generate the first
decorrelated signal 122, indicated as r1, and the second decorrelated signal 124, indicated
as r2.
The signal processor 210 comprises a transient processing stage 211, a tonal processing
stage 213 and a combining stage 215. The signal processor 210 is configured to process
a representation of the audio signal 102 in the frequency domain. The frequency domain
representation of the audio signal 102 comprises a multitude of subbands (frequency
bands), wherein the transient processing stage 211 and the tonal processing stage 213
are configured to process each of the frequency bands. Alternatively, the spectrum
obtained by frequency conversion of the audio signal 102 may be reduced, i.e., cut, to
exclude certain frequency ranges or frequency bands from further processing, such as
frequency bands below 20 Hz, 50 Hz or 100 Hz and/or above 16 kHz, 18 kHz or 22 kHz.
This may allow for a reduced computational effort and thus for faster and/or a more
precise processing.
The transient processing stage 211 is configured to determine for each of the processed
frequency bands, if the frequency band comprises transient portions. The tonal processing
stage 213 is configured to determine for each of the frequency bands, if the audio signal
102 comprises tonal portions in the frequency band. The transient processing stage 211 is
configured to determine, at least for the frequency bands comprising transient portions,
spectral weighting factors 217, wherein the spectral weighting factors 217 are associated
with the respective frequency band. As it will be described in Figs. 6a and 6b, transient
and tonal characteristics may be identified by spectral processing. A level of transiency
and/or tonality may be measured by the transient processing stage 211 and/or the tonal
processing stage 213 and converted to a spectral weight. The tonal processing stage 213
is configured to determine spectral weighting factors 219 at least for frequency bands
comprising the tonal portions. The spectral weighting factors 217 and 219 may comprise a
multitude of possible values, the magnitude of the spectral weighting factors 217 and/or
219 indicating an amount of transient and/or tonal portions in the frequency band.
The spectral weighting factors 217 and 219 may comprise an absolute or relative value.
For example, the absolute value may comprise a value of energy of transient and/or tonal
sound in the frequency band. Alternatively, the spectral weighting factors 217 and/or 219
may comprise the relative value such as a value between 0 and 1, the value 0 indicating
that the frequency band comprises no or almost no transient or tonal portions and the
value 1 indicating the frequency band comprising a high amount or completely transient
and/or tonal portions. The spectral weighting factors may comprise one of a multitude of
values such as a number of 3, 5, 10 or more values (steps), e.g., (0, 0.3 and 1), (0.1, 0.2,
..., 1) or the like. A size of the scale, i.e., a number of steps between a minimum value and a
maximum value, may be at least zero but preferably at least one and more preferably at least
five. Preferably, the multitude of values of the spectral weights 217 and 219 comprises at
least three values comprising a minimum value, a maximum value and a value that is
between the minimum value and the maximum value. A higher number of values between
the minimum value and the maximum value may allow for a more continuous weighting of
each of the frequency bands. The minimum value and the maximum value may be scaled
to a scale between 0 and 1 or other values. The maximum value may indicate a highest or
lowest level of transiency and/or tonality.
The combining stage 215 is configured to combine the spectral weights for each of the
frequency bands as it is described later on. The signal processor 210 is configured to
apply the combined spectral weights to each of the frequency bands. For example the
spectral weights 217 and/or 219 or a value derived thereof may be multiplied with spectral
values of the audio signal 102 in the processed frequency band.
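As a minimal sketch of this weighting step, assuming a numpy STFT matrix X and per-band, per-frame weights G (both names hypothetical, not taken from the figures), the suppression could look like:

```python
import numpy as np

# Placeholder STFT of the audio signal: 513 frequency bands x 200 time frames.
X = np.random.randn(513, 200) + 1j * np.random.randn(513, 200)
# Combined spectral weights in [0, 1] for every band and frame (small = strong attenuation).
G = np.random.uniform(0.0, 1.0, X.shape)

# Multiplying the weights with the spectral values suppresses transient/tonal bands.
S = G * X
```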
The controller 230 is configured to receive the spectral weighting factors 217 and 219 or
information referring thereto from the signal processor 210. The information derived may
be, for example, an index number of a table, the index number being associated to the
spectral weighting factors. The controller is configured to enhance the audio signal 102 for
coherent signal portions, i.e., for portions not or only partially reduced or eliminated by the
transient processing stage 211 and/or the tonal processing stage 213. In simple terms, the
deriving unit 234 may amplify portions not reduced or eliminated by the signal processor
210.
The deriving unit 234 is configured to provide a signal 236 derived from the audio signal
102, indicated as z. The combiner 240 is configured to receive the signal z (236). The
decorrelator 120 is configured to receive a processed signal 212 indicated as s from the
signal processor 210.
The combiner 240 is configured to combine the decorrelated signals r1 and r2 and the signal z
using the weighting factors (scaling factors) a and b to obtain a first channel signal y1 and a second
channel signal y2. The signal channels y1 and y2 may be combined to the output signal
242 or be output separately.
In other words, the output signal 242 is a combination of a (typically) correlated signal z
(236) and a decorrelated signal s (r1 or r2, respectively). The decorrelated signal is
obtained in two steps: first, suppressing (reducing or eliminating) transient and tonal signal
components, and second, decorrelation. The suppression of transient signal components
and of tonal signal components is done by means of spectral weighting. The signal is
processed frame-wise in the frequency domain. Spectral weights are computed for each
frequency bin (frequency band) and time frame. Thus the audio signal is processed full-band,
i.e. all portions that are to be considered are processed.
The input signal of the processing may be a single-channel signal x (102), the output
signal may be a two-channel signal y = [y1, y2], where indices denote the first and the
second channel, for example, the left and the right channel of a stereo signal. The output
signal y may be computed by linearly combining a two-channel signal r = [r1, r2] with a
single-channel signal z with scaling factors a and b according to
y1 = a · z + b · r1 (1)
y2 = a · z + b · r2 (2)
wherein "·" refers to the multiplication operator in equations (1) and (2).
The equations (1) and (2) shall be interpreted qualitatively, indicating that a share of the
signals z, r1 and r2 may be controlled (varied) by varying weighting factors. By forming,
for example, inverse operations such as dividing by the reciprocal value same or
equivalent results may be obtained by performing different operations. Alternatively or in
addition, a look-up table comprising the scaling factors a and b and/or values for y1 and/or
y2 may be used to obtain the two-channel signal y.
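A minimal sketch of equations (1) and (2), assuming numpy arrays and hypothetical frame buffers z, r1 and r2, could be:

```python
import numpy as np

def combine(z, r1, r2, a, b):
    """Weighted combination of one frame according to equations (1) and (2)."""
    y1 = a * z + b * r1   # equation (1)
    y2 = a * z + b * r2   # equation (2)
    return y1, y2

# Example: one frame of 1024 samples with fixed weighting factors.
z = np.random.randn(1024)
r1, r2 = np.random.randn(1024), np.random.randn(1024)
y1, y2 = combine(z, r1, r2, a=0.8, b=0.4)
```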
The scaling factors a and/or b may be computed to be monotonically decreasing with the
perceived intensity of the correlation. The predicted scalar value for the perceived
intensity may be used for controlling the scaling factors.
The decorrelated signal r comprising r1 and r2 may be computed in two steps. First,
attenuation of transient and tonal signal components is performed, yielding the signal s. Second,
decorrelation of the signal s may be performed.
The attenuation of transient signal components and of tonal signal components is done,
for example, by means of a spectral weighting. The signal is processed frame-wise in the
frequency domain. Spectral weights are computed for each frequency bin and time frame.
An aim of the attenuation is two-fold:
1. Transient or tonal signal components typically belong to so-called foreground
signals and as such their position within the stereo image is often in the center.
2. Decorrelation of signals having strong transient signal components leads to
perceivable artifacts. Decorrelation of signals having strong tonal signal
components also leads to perceivable artifacts when the tonal components (i.e.
sinusoidals) are frequency modulated, at least when the frequency modulation is
slow enough to be perceived as a change of the frequency and not as a change of
timbre due to the enrichment of the signal spectrum with (possibly inharmonic)
overtones.
The correlated signal z may be obtained by applying a processing that enhances transient
and tonal signal components, for example, qualitatively the inverse of the suppression for
computing the signal s. Alternatively, the input signal, for example unprocessed, can be
used as it is. Note that there can be the case where z is also a two-channel signal. In fact,
many storage media (e.g. the Compact Disc) use two channels even if the signal is mono.
A signal having two identical channels is called "dual-mono". There can also be the case
where the input signal z is a stereo signal, and the aim of the processing may be to
increase the stereophonic effect.
The perceived intensity of decorrelation may be predicted similar to a predicted perceived
intensity of late reverberation using computational models of loudness, as it is described
in EP 2 541 542 A1.
Fig. 3 shows an exemplary table indicating a computing of the scaling factors (weighting
factors) a and b based on the level of the predicted perceived intensity of decorrelation.
For example, the perceived intensity of decorrelation may be predicted such that a value
thereof comprises a scalar value that may vary between a value of 0, indicating a low level
of perceived decorrelation or none, and a value of 10, indicating a high level of
decorrelation. The levels may be determined, for example, based on listening tests or
predictive simulation. Alternatively, the value of the level of decorrelation may comprise a
range between a minimum value and a maximum value. The value of the perceived level
of decorrelation may be configured to accept more than the minimum and the maximum
value. Preferably, the perceived level of decorrelation may accept at least three different
values and more preferably at least seven different values.
Weighting factors a and b to be applied based on a determined level of perceived
decorrelation may be stored in a memory and accessible to the controller 130 or 230. With
increasing levels of perceived decorrelation the scaling factor a to be multiplied with the
audio signal or the signal derived thereof by the combiner may also increase. An
increased level of perceived decorrelation may be interpreted as "the signal is already
(partially) decorrelated" such that with increasing levels of decorrelation the audio signal
or the signal derived thereof comprises a higher share in the output signal 142 or 242.
With increased levels of decorrelation, the weighting factor b is configured to be
decreased, i.e., the signals r 1 and r2 generated by the decorrelator based on an output
signal of the signal processor may comprise a lower share when being combined in the
combiner 140 or 240.
Although the weighting factor a is depicted as comprising a scalar value of at least 1
(minimum value) and at most 9 (maximum value), and although the weighting factor b is
depicted as comprising a scalar value in a range comprising a minimum value of 2 and a
maximum value of 8, both weighting factors a and b may comprise a value within a range
comprising a minimum value and a maximum value and preferably at least one value
between the minimum value and the maximum value. Alternatively to the values of the
weighting factors a and b depicted in Fig. 3 and with an increased level of perceived
decorrelation, the weighting factor a may increase linearly. Alternatively or in addition, the
weighting factor b may decrease linearly with an increased level of perceived
decorrelation. In addition, for a level of perceived decorrelation, a sum of the weighting
factors a and b determined for a frame may be constant or almost constant. For example,
the weighting factor a may increase from 0 to 10 and the weighting factor b may decrease
from a value of 10 to a value of 0 with an increasing level of perceived decorrelation. If
both weighting factors decrease or increase linearly, for example with step size 1, the sum
of the weighting factors a and b may comprise a value of 10 for each level of perceived
decorrelation. The weighting factors a and b to be applied may be determined by
simulation or by experiment.
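The following sketch illustrates one possible mapping in this spirit, assuming a perceived-decorrelation scale from 0 to 10, a linear increase of a, and a constant sum a + b; the concrete numbers are illustrative only and not taken from Fig. 3:

```python
def scaling_factors(level, level_max=10.0, total=10.0):
    """Map a predicted level of perceived decorrelation to scaling factors a and b."""
    level = min(max(level, 0.0), level_max)   # clamp to the assumed scale 0..level_max
    a = total * level / level_max             # a increases with perceived decorrelation
    b = total - a                             # b decreases, keeping a + b constant
    return a, b

print(scaling_factors(2.0))   # low perceived decorrelation -> large share of r1/r2
print(scaling_factors(9.0))   # high perceived decorrelation -> large share of z
```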
Fig. 4a shows a schematic flowchart of a part of a method 400 that may be executed, for
example, by the controller 130 and/or 230. The controller is configured to determine a
measure for the perceived level of a decorrelation in a step 410 yielding, for example, a
scalar value as it is depicted in Fig. 3. In a step 420, the controller is configured to
compare the determined measure with a threshold value. If the measure is higher than the
threshold value, the controller is configured to modify or adapt the weighting factors a
and/or b in a step 430. In the step 430, the controller is configured to decrease the
weighting factor b, to increase the weighting factor a or to decrease the weighting factor b
and to increase the weighting factor a with respect to a reference value for a and b. The
threshold may vary, for example, within frequency bands of the audio signal. For example,
the threshold may comprise a low value for frequency bands comprising a prominent
sound source signal indicating that a low level of decorrelation is preferred or aimed.
Alternatively or in addition, the threshold may comprise a high value for frequency bands
comprising a non-prominent sound source signal indicating that a high level of
decorrelation is preferred.
It may be an aim to increase the decorrelation of frequency bands comprising non-prominent
sound source signals and to limit decorrelation for frequency bands comprising prominent
sound source signals. A threshold may be, for example, 20%, 50% or 70% of a range of
values the weighting factors a and/or b may accept. For example, and with reference to
Fig. 3, the threshold value may be lower than 7, lower than 5 or lower than 3 for a
frequency frame comprising a prominent sound source signal. If the perceived level of
decorrelation is too high, then, by executing step 430, the perceived level of decorrelation
may be decreased. The weighting factors a and b may be varied solely or both at a time.
The table depicted in Fig. 3 may, for example, comprise initial values for the
weighting factors a and/or b, the initial values to be adapted by the controller.
Fig. 4b shows a schematic flowchart of further steps of the method 400, depicting a case,
where the measure for the perceived level of decorrelation (determined in step 410) is
compared to the threshold values, wherein the measure is lower than the threshold value
(step 440). The controller is configured to increase b, to decrease a or to increase b and
to decrease a with respect to a reference for a and b to increase the perceived level of
decorrelation and such that the measure comprises a value that is at least the threshold
value.
Alternatively or in addition, the controller may be configured to scale the weighting factors
a and b such that a perceived level of decorrelation in the two-channel audio signal
remains within a range around a target value. The target value may be, for example, the
threshold value, wherein the threshold value may vary based on the type of signal being
comprised by the frequency band for which the weighting factors and/or the spectral
weights are determined. The range around the target value may extend to ± 20%, ± 10%,
or ± 5% of the target value. This may allow stopping the adaptation of the weighting factors when
the perceived decorrelation is approximately the target value (threshold).
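A simplified control step in the spirit of Figs. 4a and 4b could look as follows; the tolerance band, step size and function name are assumptions:

```python
def adapt_weights(a, b, measure, target, tol=0.1, step=0.1):
    """Nudge a and b so the perceived decorrelation stays near the target value."""
    if measure > target * (1.0 + tol):    # too much perceived decorrelation (cf. step 430)
        a += step                         # raise the correlated share ...
        b = max(b - step, 0.0)            # ... and lower the decorrelated share
    elif measure < target * (1.0 - tol):  # too little perceived decorrelation (cf. step 440)
        a = max(a - step, 0.0)
        b += step
    return a, b                           # within the band: keep the factors
```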
Fig. 5 shows a schematic block diagram of a decorrelator 520 that may be configured to
operate as the decorrelator 120. The decorrelator 520 comprises a first decorrelating filter
526 and a second decorrelating filter 528. The first decorrelating filter 526 and the second
decorrelating filter 528 are configured to both receive the processed signal s (512), e.g.,
from the signal processor. The decorrelator 520 is configured to combine the processed
signal 512 and an output signal 523 of the first decorrelating filter 526 to obtain the first
decorrelated signal 522 (r1) and to combine the processed signal 512 and an output signal 525
of the second decorrelating filter 528 to obtain the second decorrelated signal 524 (r2). For combining of signals, the
decorrelator 520 may be configured to convolve signals with impulse responses and/or to
multiply spectral values with real and/or imaginary values. Alternatively or in addition,
other operations may be executed such as divisions, sums, differences or the like.
The decorrelating filters 526 and 528 may be configured to reverberate or delay the
processed signal 512. The decorrelating filters 526 and 528 may comprise a finite impulse
response (FIR) and/or an infinite impulse response (IIR) filter. For example, the
decorrelating filters 526 and 528 may be configured to convolve the processed signal 512
with an impulse response obtained from a noise signal that decays or exponentially
decays over time and/or frequency. This allows for generating a decorrelated signal 523
and/or 525 that comprises a reverberation with respect to the signal 512. A reverberation
time of the reverberation signal may comprise, for example, a value between 50 and 1000
ms, between 80 and 500 ms and/or between 120 and 200 ms. The reverberation time
may be understood as the duration it takes for the power of the reverberation to decay to
a small value after it had been excited by an impulse, e.g. to decay to 60 dB below the
initial power. Preferably, the decorrelating filters 526 and 528 comprise IIR-filters. This
allows for reducing an amount of calculation when at least some of the filter coefficients
are set to zero such that calculations for this (zero-) filter coefficient may be skipped.
Optionally, a decorrelating filter can comprise more than one filter, where the filters are
connected in series and/or in parallel.
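A rough sketch of such decorrelating filters, using exponentially decaying noise impulse responses convolved with the processed signal s, is given below; the reverberation times, the FIR realization and the use of numpy/scipy are assumptions and not the specific filters of the decorrelator 520:

```python
import numpy as np
from scipy.signal import fftconvolve

def decaying_noise_ir(length, fs, rt60, seed):
    """Noise impulse response that decays by 60 dB after rt60 seconds."""
    rng = np.random.default_rng(seed)
    t = np.arange(length) / fs
    return rng.standard_normal(length) * 10.0 ** (-3.0 * t / rt60)

fs = 44100
s = np.random.randn(fs)                              # placeholder processed signal s
h1 = decaying_noise_ir(int(0.15 * fs), fs, 0.15, 1)  # shorter impulse response
h2 = decaying_noise_ir(int(0.20 * fs), fs, 0.20, 2)  # longer impulse response

# Combine the processed signal with the filter outputs to obtain r1 and r2.
r1 = s + fftconvolve(s, h1)[: len(s)]
r2 = s + fftconvolve(s, h2)[: len(s)]
```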
In other words, reverberation comprises a decorrelating effect. The decorrelator may be
configured to not just decorrelate, but also to only slightly change the sonority.
Technically, reverberation may be regarded as a linear time invariant (LTI)-system that
may be characterized considering its impulse response. A length of the impulse response
is often stated as RT60 for reverberation. That is the time after which the impulse
response is decreased by 60 dB. Reverberation may have a length of up to one second or
even up to some seconds. The decorrelator may be implemented comprising a similar
structure as reverberation but comprising different settings for parameters that influence
the length of the impulse response.
Fig. 6a shows a schematic diagram comprising a spectrum of an audio signal 602a
comprising at least one transient (short-time) signal portion. A transient signal portion
leads to a broadband spectrum. The spectrum is depicted as magnitudes S(f) over
frequencies f, wherein the spectrum is subdivided into a multitude of frequency bands b1-3.
The transient signal portion may be determined in one or more of the frequency bands
b1-3.
Fig. 6b shows a schematic spectrum of an audio signal 602b comprising a tonal
component. An example of a spectrum is depicted in seven frequency bands fb1-7. The
frequency band fb4 is arranged in the center of the frequency bands fb1-7 and comprises
a maximum magnitude S(f) when compared to the other frequency bands fb1-3 and fb5-7.
Frequency bands with increasing distance with respect to the center frequency (frequency
band fb4) comprise harmonic repetitions of the tonal signal with decreasing magnitudes.
The signal processor may be configured to determine the tonal component, for example,
by evaluating the magnitude S(f). An increasing magnitude S(f) of a tonal component may
be incorporated by the signal processor by decreased spectral weighting factors. Thus,
the higher a share of transient and/or tonal components within a frequency band, the less
contribution the frequency band may have in the processed signal of the signal processor.
For example, the spectral weight for the frequency band fb4 may comprise a value of zero
or close to zero or another value indicating that the frequency band fb4 is considered with
a low share.
Fig. 7a shows a schematic table illustrating a possible transient processing 211 performed
by a signal processor such as the signal processor 110 and/or 210. The signal processor
is configured to determine an amount, e.g., a share, of transient components in each of
the frequency bands of the representation of the audio signal in the frequency domain to
be considered. An evaluation may comprise a determining of an amount of the transient
components with a scalar value comprising at least a minimum value (for example 1) and
at most a maximum value (for example 15), wherein a higher value may indicate a higher
amount of transient components within the frequency band. The higher the amount of
transient components in the frequency band, the lower the respective spectral weight, for
example the spectral weight 217, may be. For example, the spectral weight may comprise
a value of at least a minimum value such as 0 and of at most a maximum value such as 1.
The spectral weight may comprise a plurality of values between the minimum and the
maximum value, wherein the spectral weight may indicate a consideration factor of the
frequency band for later processing. For example, a spectral
weight of 0 may indicate that the frequency band is to be attenuated completely.
Alternatively, also other scaling ranges may be implemented, i.e., the table depicted in
Fig. 7a may be scaled and/or transformed to tables with other step sizes with respect to
an evaluation of the frequency band being a transient frequency band and/or of a step
size of the spectral weight. The spectral weight may even vary continuously.
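A possible mapping of this kind, assuming a transience measure scaled from 1 to 15 as in Fig. 7a and a spectral weight between 0 and 1, is sketched below; the linear shape is an assumption:

```python
def transient_weight(transience, t_min=1.0, t_max=15.0):
    """Map a per-band transience measure (assumed scale 1..15) to a weight in [0, 1]."""
    transience = min(max(transience, t_min), t_max)
    return 1.0 - (transience - t_min) / (t_max - t_min)

print(transient_weight(1))    # 1.0 -> band kept (almost) unattenuated
print(transient_weight(15))   # 0.0 -> band attenuated completely
```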
Fig. 7b shows an exemplary table that illustrates a possible tonal processing as it may be
executed, for example, by the tonal processing stage 213. The higher an amount of tonal
components within the frequency band, the lower the respective spectral weight 219 may
be. For example, the amount of tonal components in the frequency band may be scaled
between a minimum value of 1 and a maximum value of 8, wherein the minimum value
indicates that no or almost no tonal components are comprised by the frequency band.
The maximum value may indicate that the frequency band comprises a large amount of
tonal components. The respective spectral weight, such as the spectral weight 219 may
also comprise a minimum value and a maximum value. The minimum value, for example,
0.1 , may indicate that the frequency band is attenuated almost completely or completely.
The maximum value may indicate that the frequency band is almost unattenuated or
completely unattenuated. The spectral weight 219 may accept one of a multitude of
values including the minimum value, the maximum value and preferably at least one value
between the minimum value and the maximum value. Alternatively, the spectral weight
may decrease for a decreased share of tonal frequency bands such that the spectral
weight is a consideration factor.
The signal processor may be configured to combine the spectral weight for transient
processing and/or the spectral weight for tonal processing with the spectral values of the
frequency band as it is described for the signal processor 210. For example, for a
processed frequency band an average value of the spectral weights 217 and/or 219 may
be determined by the combining stage 215. The spectral weights of the frequency band
may be combined, for example multiplied, with the spectral values of the audio signal 102.
Alternatively, the combining stage may be configured to compare both spectral weights
217 and 219 and/or to select the lower or higher spectral weight of both and to combine
the selected spectral weight with the spectral values. Alternatively, the spectral weights
may be combined differently, for example as a sum, as a difference, as a quotient or as a
factor.
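A small sketch of such a combining stage, with the selection mode and variable names as assumptions, could be:

```python
def combine_weights(w_transient, w_tonal, mode="min"):
    """Combine the spectral weights of the transient and tonal processing stages."""
    if mode == "min":
        return min(w_transient, w_tonal)       # strongest attenuation wins
    if mode == "mean":
        return 0.5 * (w_transient + w_tonal)   # average of both weights
    return w_transient * w_tonal               # product as a further alternative

# Apply the combined weight to a (complex) spectral value of the frequency band.
weighted_bin = combine_weights(0.7, 0.2) * (0.3 + 0.1j)
```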
A characteristic of an audio signal may vary over time. For example, a radio broadcast
signal may first comprise a speech signal (prominent sound source signal) and afterwards
a music signal (non-prominent sound source signal) or vice versa. Also, variations within a
speech signal and/or a music signal may occur. This may lead to rapid changes of
spectral weights and/or weighting factors. The signal processor and/or the controller may
be configured to additionally adapt the spectral weights and/or the weighting factors to
decrease or to limit variations between two frames, for example by limiting a maximum
step size between two signal frames. One or more frames of the audio signal may be
summed up in a time period, wherein the signal processor and/or the controller may be
configured to compare spectral weights and/or weighting factors of a previous time period,
e.g. one or more previous frames and to determine if a difference of spectral weights
and/or weighting factors determined for an actual time period exceeds a threshold value.
The threshold value may represent, for example, a value that leads to annoying effects for
a listener. The signal processor and/or the controller may be configured to limit the
variations such that such annoying effects are reduced or prevented. Alternatively, instead
of the difference, also other mathematical expressions such as a ratio may be determined
for comparing the spectral weights and/or the weighting factors of the previous and the
actual time period.
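The limiting of the variation could, for example, be sketched as follows; the maximum step size per frame is a hypothetical value:

```python
def limit_step(previous, current, max_step=0.05):
    """Limit the change of a weight between two consecutive frames to max_step."""
    delta = current - previous
    if abs(delta) > max_step:
        return previous + max_step * (1 if delta > 0 else -1)
    return current

w = 0.2
for requested in (0.9, 0.9, 0.9):   # a sudden jump requested over three frames
    w = limit_step(w, requested)
    print(w)                         # approx. 0.25, 0.30, 0.35: gradual approach to 0.9
```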
In other words, each frequency band is assigned a feature comprising an amount of tonal
and/or transient characteristics.
Fig. 8 shows a schematic block diagram of a sound enhancing system 800 comprising an
apparatus 801 for enhancing the audio signal 102. The sound enhancing system 800
comprises a signal input 106 configured to receive the audio signal and to provide the
audio signal to the apparatus 801 . The sound enhancing system 800 comprises two
loudspeakers 808a and 808b. The loudspeaker 808a is configured to receive the signal
y1. The loudspeaker 808b is configured to receive the signal y2 such that by means of the
loudspeakers 808a and 808b the signals y1 and y2 may be converted into sound waves.
The signal input 106 may be a wired or wireless signal input, such as a radio
antenna. The apparatus 801 may be, for example, the apparatus 10 and/or 200.
The correlated signal z is obtained by applying a processing that enhances transient and
tonal components (qualitatively inverse of the suppression for computing the signal s).
The combination performed by the combiner may be linear, expressed by y (y1/y2) =
scaling factor 1 · z + scaling factor 2 · (r1/r2). The scaling factors may be
obtained by predicting the perceived intensity of decorrelation.
Alternatively, the signals y1 and/or y2 may be further processed before being received by
a loudspeaker 808a and/or 808b. For example, the signals y1 and/or y2 may be amplified,
equalized or the like such that a signal or signals derived by processing the signals y1
and/or y2 are provided to the loudspeakers 808a and/or 808b.
Artificial reverberation added to the audio signal may be implemented such that the level
of the reverberation is audible, but not too loud (intensive). Levels that are audible or
annoying may be determined in tests and/or simulations. A level that is too high does not
sound good because the clarity suffers, percussive sounds are slurred in time, etc. A
target level may depend on the input signal. If the input signal comprises a low amount
of transients and comprises a low amount of tones with frequency modulations, then the
reverberation is audible to a lower degree and the level may be increased. The same
applies to decorrelation, as the decorrelator may comprise a similar active principle.
Thus, an optimal intensity of the decorrelator may depend on the input signal. The
computation may be equal, with modified parameters. The decorrelation executed in the
signal processor and in the controller may be performed with two decorrelators that may
be structurally equal but are operated with different sets of parameters. The decorrelation
processors are not limited to two-channel stereo signals but may also be applied to
signals with more than two channels. The decorrelation may be quantified with a
correlation metric that may comprise up to all values for decorrelation of all signal pairs.
A finding of the invented method is to generate spatial cues and to introduce the spatial
cues to the signal such that the processed signal creates the sensation of a stereophonic
signal. The processing may be regarded as being designed according to the following
criteria:
1. Direct sound sources that have high intensity (or loudness level) are localized in
the center. These are prominent direct sound sources, for example a singer or a loud
instrument in a musical recording.
2. Ambient sounds are perceived as being diffuse.
3. Diffuseness is added to direct sound sources having low intensity (i.e., low
loudness levels), possibly to a smaller extent than to ambient sounds.
4. The processing should sound natural and should not introduce artifacts.
The design criteria are consistent with common practice in the production of audio
recordings and with signal characteristics of stereophonic signals:
Prominent direct sounds are typically panned to the center, i.e. they are mixed with
negligible ICLD and ICTD. These signals exhibit a high coherence.
Ambient sounds exhibit a low coherence.
When recording multiple direct sources in a reverberant environment, e.g. opera
singers with an accompanying orchestra, the amount of diffuseness of each direct
sound is related to their distance to the microphones, because the ratio between
the direct signal and the reverberation decreases when the distance to the
microphone is increased. Therefore, sounds that are captured with low intensity
are typically less coherent (or vice versa, more diffuse) than the prominent direct
sounds.
The processing generates the spatial information by means of decorrelation. In
other words, the ICC of the input signals is decreased. Only in extreme cases does the
decorrelation lead to completely uncorrelated signals. Typically, a partial
decorrelation is achieved and desired. The processing does not manipulate the
directional cues (i.e., ICLD and ICTD). The reason for this restriction is that no
information about the original or intended position of direct sound sources is
available.
According to above design criteria, the decorrelation is applied selectively to the
signal components in a mixture signal such that:
1. No or little decorrelation is applied to signal components as discussed in design
criterion 1.
2. Decorrelation is applied to signal components as discussed in design criterion 2.
This decorrelation largely contributes to the perceived width of the mixture signal
that is obtained at the output of the processing.
3. Decorrelation is applied to signal components as discussed in design criterion 3,
but to a lesser extent than to signal components as discussed in design criterion 2.
This processing is illustrated by means of a signal model that represents the input signal x
as an additive mixture of a foreground signal xa and a background signal xb, i.e., x = xa +
xb. The foreground signal comprises all signal components as discussed in design
criterion 1. The background signal comprises all signal components as discussed in
design criterion 2. All signal components as discussed in design criterion 3 are not
exclusively assigned to either one of the separated signal components but are partially
contained in the foreground signal and in the background signal.
The output signal y is computed as y = ya + yb, where yb is computed by decorrelating xb,
and ya = xa or, alternatively, ya is computed by decorrelating xa. In other words, the
background signal is processed by means of decorrelation and the foreground signal is
not processed by means of decorrelation or is processed by means of decorrelation, but
to a lesser extent than the background signal. Fig. 9b illustrates this processing.
This approach does not only meet the design criteria above. An additional advantage is
that the foreground signal can be prone to undesired coloration when applying
decorrelation, whereas the background can be decorrelated without introducing such
audible artifacts. Therefore, the described processing yields better sound quality
compared to a processing that applies decorrelation equally to all signal components in
the mixture.
So far, the input signal is decomposed into two signals denoted as "foreground signal"
and "background signal" that are separately processed and combined to the output signal.
It should be noted that equivalent methods are feasible that follow the same rationale.
The signal decomposition is not necessarily a processing that outputs audio signals, i.e.
signals that resemble the shape of the waveform over time. Instead, the signal
decomposition can result in any other signal representation that can be used as the input
to the decorrelation processing and subsequently transformed into a waveform signal. An
example for such signal representation is a spectrogram that is computed by means of
Short-term Fourier transform. In general, invertible and linear transforms lead to
appropriate signal representations.
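As an illustration of such an invertible, linear representation, the following Python sketch (not part of the original specification; it assumes NumPy and SciPy and uses an arbitrary noise signal as a stand-in for the input) computes an STFT spectrogram and resynthesizes the waveform from it.

    import numpy as np
    from scipy import signal

    fs = 48000
    x = np.random.randn(fs)  # placeholder mono input signal (1 s of noise)

    # Analysis: spectrogram X(n, k) with frequency band index n and time index k
    f, t, X = signal.stft(x, fs=fs, nperseg=1024, noverlap=512)

    # ... decorrelation or spectral weighting would operate on X here ...

    # Synthesis: back to a waveform; with these settings the round trip is
    # numerically lossless up to machine precision
    _, x_rec = signal.istft(X, fs=fs, nperseg=1024, noverlap=512)
    n = min(len(x), len(x_rec))
    print(np.max(np.abs(x[:n] - x_rec[:n])))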
Alternatively, the spatial cues are selectively generated without the preceding signal
decomposition by generating the stereophonic information based on the input signal x.
The derived stereophonic information is weighted with time-variant and frequency-selective
values and combined with the input signal. The time-variant and frequency-selective
weighting factors are computed such that they are large at time-frequency
regions that are dominated by the background signal and are small at time-frequency
regions that are dominated by the foreground signal. This can be formalized by
quantifying the time-variant and frequency-selective ratio of background signal and
foreground signal. The weighting factors can be computed from the background-to-foreground
ratio, e.g. by means of monotonically increasing functions.
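A minimal sketch of this mapping, assuming the background and foreground powers per time-frequency bin are already available (the function name and the particular monotonic mapping ratio/(1 + ratio) are illustrative choices, not taken from the specification):

    import numpy as np

    def weights_from_bf_ratio(bg_power, fg_power, eps=1e-12):
        # Time-variant, frequency-selective background-to-foreground power ratio
        ratio = bg_power / (fg_power + eps)
        # Monotonically increasing mapping to weights in [0, 1): large where the
        # background dominates, small where the foreground dominates
        return ratio / (1.0 + ratio)

    # Toy example: one background-dominated and one foreground-dominated bin
    print(weights_from_bf_ratio(np.array([10.0, 0.1]), np.array([0.1, 10.0])))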
Alternatively, the preceding signal decomposition can result in more than two separated
signals.
Figs. 9a and 9b illustrate the separation of the input signal into a foreground and a
background signal, e.g., by suppressing (reducing or eliminating) tonal and transient portions
in one of the signals.
A simplified processing is derived by using the assumption that the input signal is an
additive mixture of the foreground signal and the background signal. Figure 9b illustrates
this. Here, separation 1 denotes the separation of either the foreground signal or of the
background signal. If the foreground signal is separated, output 1 denotes the foreground
signal and output 2 is the background signal. If the background signal is separated, output
1 denotes the background signal and output 2 is the foreground signal.
The design and implementation of the signal separation method is based on the finding
that foreground signals and background signals have distinct characteristics. However,
deviations from an ideal separation, i.e. leakage of signal components of the prominent
direct sound sources into the background signal or leakage of ambient signal components
into the foreground signal, are acceptable and do not necessarily impair the sound quality
of the final result.
For temporal characteristics, in general it can be observed that the temporal envelopes of
subband signals of foreground signals feature stronger amplitude modulations than the
temporal envelopes of subband signals of background signals. In contrast, background
signals are typically less transient (or percussive, i.e. more sustained) than foreground
signals.
For spectral characteristics, in general it can be observed that the foreground signals can
be more tonal. In contrast, background signals are typically noisier than foreground
signals.
For phase characteristics, in general it can be observed that the phase information of
background signals is more noisy than of foreground signals. The phase information for
many examples of foreground signals is congruent across multiple frequency bands.
Signals featuring characteristics that are similar to prominent sound source signals are
more likely foreground signals than background signals. Prominent sound source signals
are characterized by transitions between tonal and noisy signal components, where the
tonal signal components are time-variant filtered pulse trains whose fundamental
frequency is strongly modulated. Spectral processing may be based on these
characteristics; the decomposition may be implemented by means of spectral subtraction
or spectral weighting.
Spectral subtraction is performed, for example, in the frequency domain, where the
spectra of short frames of successive (possibly overlapping) portions of the input signal
are processed. The basic principle is to subtract an estimate of the magnitude spectrum of
an interfering signal from the magnitude spectra of the input signals which is assumed to
be an additive mixture of a desired signal and an interfering signal. For the separation of
the foreground signal, the desired signal is the foreground and the interfering signal is the
background signal. For the separation of the background signal, the desired signal is the
background and the interfering signal is the foreground signal.
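A hedged sketch of magnitude-domain spectral subtraction in Python (the spectral floor and the reuse of the mixture phase are common practical choices and are assumptions here, not requirements stated in the specification):

    import numpy as np

    def spectral_subtraction(X_mix, interferer_mag_est, floor=0.05):
        # Subtract an estimate of the interfering magnitude spectrum from the
        # magnitude spectrum of the mixture and keep the mixture phase.
        mag = np.abs(X_mix)
        phase = np.angle(X_mix)
        # Floor the result to avoid negative magnitudes
        desired_mag = np.maximum(mag - interferer_mag_est, floor * mag)
        return desired_mag * np.exp(1j * phase)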
Spectral weighting (or Short-term spectral attenuation) follows the same principle and
attenuates the interfering signal by scaling the input signal representation. The input
signal x(t) is transformed using a Short-time Fourier transform (STFT), a filter bank or any
other means for deriving a signal representation with multiple frequency bands X(n,k), with
frequency band index n and time index k. The frequency domain representations of the
input signals are processed such that the subband signals are scaled with time variant
weights G(n,k),
Y(n, k) = G(n, k) X(n, k)
The result of the weighting operation Y(n,k) is the frequency domain representation of the
output signal. The output time signal y(t) is computed using the inverse processing of the
frequency domain transform, e.g. the Inverse STFT. Figure 10 illustrates the spectral
weighting.
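A minimal Python sketch of the spectral weighting Y(n, k) = G(n, k) X(n, k), assuming an STFT as the filter bank (SciPy is used for analysis and synthesis; the weights G must have the same shape as the STFT and are supplied by the transient/tonal analysis described elsewhere):

    import numpy as np
    from scipy import signal

    def spectral_weighting(x, G, fs=48000, nperseg=1024, noverlap=512):
        # Analysis: X(n, k)
        _, _, X = signal.stft(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
        # Weighting: Y(n, k) = G(n, k) X(n, k); G must match the shape of X
        Y = G * X
        # Synthesis: inverse STFT gives the output time signal y(t)
        _, y = signal.istft(Y, fs=fs, nperseg=nperseg, noverlap=noverlap)
        return y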
Decorrelation refers to a processing of one or more identical input signals such that
multiple output signals are obtained that are mutually (partially or completely)
uncorrelated, but which sound similar to the input signal. The correlation between two
signals can be measured by means of the correlation coefficient or normalized correlation
coefficient. The normalized correlation coefficient NCC in frequency bands for two signals
X1(n,k) and X2(n,k) is defined as

NCC(n, k) = Φ12(n, k) / sqrt( Φ11(n, k) Φ22(n, k) )     (4)

where Φ11 and Φ22 are the auto power spectral densities (PSD) of the first and the second
input signal, respectively, and Φ12 is the cross-PSD, given by

Φij(n, k) = E{ Xi(n, k) Xj*(n, k) },  i, j = 1, 2     (5)

where E{·} is the expectation operation and Xj* denotes the complex conjugate of Xj.
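A sketch of how the NCC of equations (4) and (5) might be estimated from STFT frames, with the expectation approximated by averaging over the time index k (the frame averaging and the real-part convention are assumptions made for the sake of the example):

    import numpy as np

    def normalized_correlation(X1, X2, eps=1e-12):
        # X1, X2: complex STFTs of shape (frequency bands, time frames)
        phi_12 = np.mean(X1 * np.conj(X2), axis=1)   # cross-PSD estimate
        phi_11 = np.mean(np.abs(X1) ** 2, axis=1)    # auto-PSD of the first signal
        phi_22 = np.mean(np.abs(X2) ** 2, axis=1)    # auto-PSD of the second signal
        return np.real(phi_12) / np.sqrt(phi_11 * phi_22 + eps)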
Decorrelation can be implemented by using decorrelating filters or by manipulating the
phase of the input signals in the frequency domain. An example for decorrelating filters is
the allpass filter, which by definition does not change the magnitude spectrum of the input
signals but only their phase. This leads to neutrally sounding output signals in the sense
that the output signals sound similar to the input signals. Another example is
reverberation, which can also be modeled as a filter or a linear time-invariant system. In
general, decorrelation can be achieved by adding multiple delayed (and possibly filtered)
copies of the input signal to the input signal. In mathematical terms, artificial reverberation
can be implemented as convolution of the input signal with the impulse response of the
reverberating (or decorrelating) system. When the delay time is small, e.g. smaller than
50 ms, the delayed copies of the signal are not perceived as separate signals (echoes).
The exact value of the delay time that leads to the sensation of echoes is the echo threshold
and depends on spectral and temporal signal characteristics. It is, for example, smaller
for impulse-like sounds than for sounds whose envelope rises slowly. For the problem at
hand it is desired to use delay times that are smaller than the echo threshold.
In the general case, the decorrelation processes an input signal having N channels and
outputs a signal having M channels such that the channel signals of the output are
mutually uncorrelated (partially or completely).
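A very simple decorrelator along these lines, assuming the mixing of a few short delayed and attenuated copies of the input (the delay values and gains are purely illustrative; the delays stay well below the roughly 50 ms echo threshold mentioned above):

    import numpy as np

    def decorrelate(x, fs, delays_ms, gains):
        # Add short delayed, attenuated copies of the input to itself, so the
        # output sounds similar to the input but is only partially correlated.
        y = x.astype(float).copy()
        for d_ms, g in zip(delays_ms, gains):
            d = int(round(d_ms * 1e-3 * fs))
            y[d:] += g * x[: len(x) - d]
        return y

    fs = 48000
    x = np.random.randn(fs)
    # Two differently decorrelated outputs from the same input
    r1 = decorrelate(x, fs, delays_ms=[7, 13, 23], gains=[0.5, 0.35, 0.25])
    r2 = decorrelate(x, fs, delays_ms=[11, 17, 29], gains=[0.5, 0.35, 0.25])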
In many application scenarios for the described method it is not appropriate to
constantly process the input signal; instead, the processing is activated and its impact is
controlled based on an analysis of the input signal. An example is FM broadcasting, where
the described method is applied only when impairments of the transmission lead to a
complete or partial loss of stereophonic information. Another example is listening to a
collection of musical recordings, where a subset of the recordings is monophonic and
another subset consists of stereo recordings. Both scenarios are characterized by a
time-varying amount of stereophonic information in the audio signals. This requires a control
of the activation and of the impact of the stereophonic enhancement, i.e. a control of the algorithm.
The control is implemented by means of an analysis of the audio signals that estimates
the spatial cues (ICLD, ICTD and ICC, or a subset thereof) of the audio signals. The
estimation can be performed in a frequency selective manner. The output of the
estimation is mapped to a scalar value that controls the activation or the impact of the
processing. The signal analysis processes the input signal or, alternatively, the separated
background signal.
A straightforward way of controlling the impact of the processing is to decrease its impact
by adding a (possibly scaled) copy of the input signal to the (possibly scaled) output signal
of the stereophonic enhancement. Smooth transitions of the control are obtained by
low-pass filtering the control signal over time.
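A small sketch of this control in Python, assuming a per-sample control value in [0, 1] (1 = full enhancement, 0 = bypass); the one-pole smoothing coefficient is an illustrative assumption:

    import numpy as np

    def smooth_control(c, alpha=0.995):
        # One-pole low-pass over time for smooth transitions of the control signal
        out = np.empty(len(c), dtype=float)
        state = float(c[0])
        for k, v in enumerate(c):
            state = alpha * state + (1.0 - alpha) * float(v)
            out[k] = state
        return out

    def apply_control(x, y_enhanced, c):
        # Decrease the impact of the enhancement by adding a scaled copy of the
        # input signal to the scaled output of the stereophonic enhancement
        return c * y_enhanced + (1.0 - c) * x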
Fig. 9a shows a schematic block diagram of a processing 900 of the input signal 102
according to a foreground/background processing. The input signal 102 is separated such
that a foreground signal 914 may be processed. In a step 916, decorrelation is applied
to the foreground signal 914. Step 916 is optional. Alternatively, the foreground signal 914
may remain unprocessed, i.e. undecorrelated. In a step 922 of a processing path 920, a
background signal 924 is extracted, i.e., filtered. In a step 926 the background signal 924
is decorrelated. In a step 904 a decorrelated foreground signal 918 (alternatively the
foreground signal 914) and a decorrelated background signal 928 are mixed such that an
output signal 906 is obtained. In other words, Fig. 9a shows a block diagram of the
stereophonic enhancement. A foreground signal and a background signal are computed.
The background signal is processed by decorrelation. Optionally, the foreground signal
can be processed by decorrelation, but to a lesser extent than the background signal. The
processed signals are combined into the output signal.
Fig. 9b illustrates a schematic block diagram of a processing 900' comprising a separation
step 912' of the input signal 102. The separation step 912' may be performed as it was
described above. A foreground signal (output signal 1) 914' is obtained by the separation
step 912'. A background signal (output signal 2) 928' is obtained by combining the foreground
signal 914', the weighting factors a and/or b and the input signal 102 in a combining
step 926'.
Fig. 10 shows a schematic block diagram of an apparatus 1000 configured to apply
spectral weights to an input signal 1002 which may be, for example, the input signal 102.
The input signal 1002 in the time domain is divided into subbands X(1,k)...X(n,k) in the
frequency domain. A filterbank 1004 is configured to divide the input signal 1002 into N
subbands. The apparatus 1000 comprises N computation instances configured to
determine the transient spectral weight and/or the tonal spectral weight G(1,k)...G(n,k) for
each of the N subbands at time instance (frame) k. The spectral weights G(1,k)...G(n,k)
are combined with the subband signals X(1,k)...X(n,k), such that weighted subband signals
Y(1,k)...Y(n,k) are obtained. The apparatus 1000 comprises an inverse processing unit
1008 configured to combine the weighted subband signals to obtain a filtered output
signal 1012 indicated as Y(t) in the time domain. The apparatus 1000 may be a part of the
signal processor 110 or 210. In other words, Fig. 10 illustrates the decomposition of an
input signal into a foreground signal and a background signal.
Fig. 11 shows a schematic flowchart of a method 1100 for enhancing an audio signal. The
method 1100 comprises a first step 1110 in which the audio signal is processed in order to
reduce or eliminate transient and tonal portions of the processed signal. The method 1100
comprises a second step 1120 in which a first decorrelated signal and a second
decorrelated signal are generated from the processed signal. In a step 1130 of method
1100 the first decorrelated signal, the second decorrelated signal and the audio signal or a
signal derived from the audio signal by coherence enhancement are weightedly combined
by using time variant weighting factors to obtain a two-channel audio signal. In a step
1140 of method 1100 the time variant weighting factors are controlled by analyzing the
audio signal so that different portions of the audio signal are multiplied by different
weighting factors and the two-channel audio signal has a time variant degree of
decorrelation.
In the following, details will be set forth for illustrating the possibility of determining the
perceived level of decorrelation based on a loudness measure. As will be shown, a
loudness measure may allow for predicting a perceived level of reverberation. As was
stated above, reverberation also refers to decorrelation such that the perceived level of
reverberation may also be regarded as a perceived level of decorrelation, wherein for a
decorrelation, reverberation may be shorter than one second, for example shorter than
500 ms, shorter than 250 ms or shorter than 200 ms.
Fig. 12 illustrates an apparatus for determining a measure for a perceived level of
reverberation in a mix signal comprising a direct signal component or dry signal
component 1201 and a reverberation signal component 1202. The dry signal component
1201 and the reverberation signal component 1202 are input into a loudness model
processor 1204. The loudness model processor is configured for receiving the direct
signal component 1201 and the reverberation signal component 1202 and is furthermore
comprising a perceptual filter stage 1204a and a subsequently connected loudness
calculator 1204b as illustrated in Fig. 13a. The loudness model processor generates, at its
output, a first loudness measure 1206 and a second loudness measure 1208. Both
loudness measures are input into a combiner 1210 for combining the first loudness
measure 1206 and the second loudness measure 1208 to finally obtain a measure 1212
for the perceived level of reverberation. Depending on the implementation, the measure
for the perceived level 1212 can be input into a predictor 1214 for predicting the perceived
level of reverberation based on an average value of at least two measures for the
perceived loudness for different signal frames. However, the predictor 1214 in Fig. 12 is
optional and actually transforms the measure for the perceived level into a certain value
range or unit range such as the Sone-unit range which is useful for giving quantitative
values related to loudness. However, the measure for the perceived level 1212 can also be
used without being processed by the predictor 1214, for example in the controller, which
does not necessarily have to rely on a value output by the predictor 1214 but can also
directly process the measure for the perceived level 1212, either in a direct form or,
preferably, in a smoothed form, where smoothing over time is preferred in order not to
have strongly changing level corrections of the reverberated signal or of a gain factor g.
Particularly, the perceptual filter stage is configured for filtering the direct signal
component, the reverberation signal component or the mix signal component, wherein the
perceptual filter stage is configured for modeling an auditory perception mechanism of an
entity such as a human being to obtain a filtered direct signal, a filtered reverberation
signal or a filtered mix signal. Depending on the implementation, the perceptual filter stage
may comprise two filters operating in parallel or can comprise a storage and a single filter
since one and the same filter can actually be used for filtering each of the three signals,
i.e., the reverberation signal, the mix signal and the direct signal. In this context, however,
it is to be noted that, although Fig. 13a illustrates n filters modeling the auditory perception
mechanism, actually two filters will be enough or a single filter filtering two signals out of
the group comprising the reverberation signal component, the mix signal component and
the direct signal component.
The loudness calculator 1204b or loudness estimator is configured for estimating the first
loudness-related measure using the filtered direct signal and for estimating the second
loudness measure using the filtered reverberation signal or the filtered mix signal, where
the mix signal is derived from a superposition of the direct signal component and the
reverberation signal component.
Fig. 13c illustrates four preferred modes of calculating the measure for the perceived level
of reverberation. An implementation relies on the partial loudness where both, the direct
signal component x and the reverberation signal component r are used in the loudness
model processor, but where, in order to determine the first measure EST1 , the
reverberation signal is used as the stimulus and the direct signal is used as the noise. For
determining the second loudness measure EST2, the situation is changed, and the direct
signal component is used as a stimulus and the reverberation signal component is used
as the noise. Then, the measure for the perceived level of reverberation generated by the
combiner is a difference between the first loudness measure EST1 and the second
loudness measure EST2.
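Only the explicitly stated combination (the difference of the two loudness measures) and the optional frame averaging of the predictor are sketched below, as an assumption-laden illustration; the loudness model that produces EST1 and EST2 is not reproduced here:

    import numpy as np

    def combine_loudness_measures(est1, est2):
        # Combiner 1210: measure for the perceived level as the difference
        # between the first and the second loudness measure
        return est1 - est2

    def predict_level(per_frame_measures):
        # Predictor 1214 (optional): average the measures of at least two frames
        return float(np.mean(per_frame_measures))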
However, other computationally efficient embodiments additionally exist which are
indicated at lines 2, 3, and 4 in Fig. 13c. These more computationally efficient measures
rely on calculating the total loudness of three signals comprising the mix signal m, the
direct signal x and the reverberation signal n . Depending on the required calculation
performed by the combiner indicated in the last column of Fig. 13c, the first loudness
measure EST1 is the total loudness of the mix signal or the reverberation signal and the
second loudness measure EST2 is the total loudness of the direct signal component x or
the mix signal component m, where the actual combinations are as illustrated in Fig. 13c.
Fig. 14 illustrates an implementation of the loudness model processor which has already
been discussed in some aspects with respect to the Figs. 12, 13a, 13b, 13c. Particularly,
the perceptual filter stage 1204a comprises a time-frequency converter 1401 for each
branch, where, in the Fig. 14 embodiment, x[k] indicates the stimulus and n[k] indicates the
noise. The time/frequency converted signal is forwarded into an ear transfer function block
1402 (Please note that the ear transfer function can alternatively be computed prior to the
time-frequency converter with similar results, but higher computational load) and the
output of this block 1402 is input into a compute excitation pattern block 1404 followed by
a temporal integration block 1406. Then, in block 1408, the specific loudness in this
embodiment is calculated, where block 1408 corresponds to the loudness calculator block
1204b in Fig. 13a. Subsequently, an integration over frequency in block 1410 is
performed, where block 1410 corresponds to the adder already described as 1204c and
1204d in Fig. 13b. It is to be noted that block 1410 generates the first measure for a first
set of stimulus and noise and the second measure for a second set of stimulus and noise.
Particularly, when Fig. 13b is considered, the stimulus for calculating the first measure is
the reverberation signal and the noise is the direct signal while, for calculating the second
measure, the situation is changed and the stimulus is the direct signal component and the
noise is the reverberation signal component. Hence, for generating two different loudness
measures, the procedure illustrated in Fig. 14 has been performed twice. However,
changes in the calculation only occur in block 1408 which operates differently, so that the
steps illustrated by blocks 1401 to 1406 only have to be performed once, and the result of
the temporal integration block 1406 can be stored in order to compute the first estimated
loudness and the second estimated loudness for the implementation depicted in Fig. 13c.
It is to be noted that, for the other implementation, block 1408 may be replaced by an individual
block "compute total loudness" for each branch, where, in this implementation, it does not
matter whether one signal is considered to be a stimulus or a noise.
Although some aspects have been described in the context of an apparatus, it is clear that
these aspects also represent a description of the corresponding method, where a block or
device corresponds to a method step or a feature of a method step. Analogously, aspects
described in the context of a method step also represent a description of a corresponding
block or item or feature of a corresponding apparatus.
Depending on certain implementation requirements, embodiments of the invention can be
implemented in hardware or in software. The implementation can be performed using a
digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an
EPROM, an EEPROM or a FLASH memory, having electronically readable control signals
stored thereon, which cooperate (or are capable of cooperating) with a programmable
computer system such that the respective method is performed.
Some embodiments according to the invention comprise a data carrier having
electronically readable control signals, which are capable of cooperating with a
programmable computer system, such that one of the methods described herein is
performed.
Generally, embodiments of the present invention can be implemented as a computer
program product with a program code, the program code being operative for performing
one of the methods when the computer program product runs on a computer. The
program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods
described herein, stored on a machine readable carrier.
In other words, an embodiment of the inventive method is, therefore, a computer program
having a program code for performing one of the methods described herein, when the
computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital
storage medium, or a computer-readable medium) comprising, recorded thereon, the
computer program for performing one of the methods described herein.
A further embodiment of the inventive method is, therefore, a data stream or a sequence
of signals representing the computer program for performing one of the methods
described herein. The data stream or the sequence of signals may for example be
configured to be transferred via a data communication connection, for example via the
Internet.
A further embodiment comprises a processing means, for example a computer, or a
programmable logic device, configured to or adapted to perform one of the methods
described herein.
A further embodiment comprises a computer having installed thereon the computer
program for performing one of the methods described herein.
The above described embodiments are merely illustrative for the principles of the present
invention. It is understood that modifications and variations of the arrangements and the
details described herein will be apparent to others skilled in the art. It is the intent,
therefore, to be limited only by the scope of the impending patent claims and not by the
specific details presented by way of description and explanation of the embodiments
herein.

Claims
1. Apparatus (100; 200) for enhancing an audio signal (102), comprising:
a signal processor (110; 210) for processing the audio signal (102) in order to
reduce or eliminate transient and tonal portions of the processed signal (112; 212);
a decorrelator (120; 520) for generating a first decorrelated signal and a second
decorrelated signal (124; r2) from the processed signal (112; 212);
a combiner (140; 240) for weightedly combining the first decorrelated signal (122;
522, r1), the second decorrelated signal (124; r2) and the audio signal or a signal
derived from the audio signal (102) by coherence enhancement using time variant
weighting factors (a, b) and to obtain a two-channel audio signal (142; 242); and
a controller (130; 230) for controlling the time variant weighting factors (a, b) by
analyzing the audio signal (122) so that different portions (fb1-fb7) of the audio
signal are multiplied by different weighting factors (a, b) and the two-channel audio
signal (142; 242) has a time variant degree of decorrelation.
2. Apparatus according to claim 1, wherein the controller (130; 230) is configured to
increase the weighting factors (a, b) for portions (fb1-fb7) of the audio signal (102)
allowing a higher degree of decorrelation and to decrease the weighting factors (a,
b) for portions (fb1-fb7) of the audio signal (102) allowing a lower degree of
decorrelation.
3. Apparatus according to claim 1 or 2, wherein the controller (130; 230) is configured
to scale the weighting factors (a, b) such that a perceived level of decorrelation in
the two-channel audio signal (142; 242) remains within a range around a target
value, the range extending to ±20 % of the target value.
4. Apparatus according to claim 3, wherein the controller (130; 230) is configured to
determine the target value by reverberating the audio signal (102) to obtain a
reverberated audio signal and by comparing the reverberated audio signal (102)
with the audio signal to obtain a result of the comparison, wherein the controller is
configured to determine the perceived level of decorrelation (232) based on the
result of the comparison.
5. Apparatus according to one of the previous claims, wherein the controller (130; 230) is
configured to determine a prominent sound source signal portion in the audio signal
(102) and to decrease the weighting factors (a, b) for the prominent sound source
signal portion compared to a portion of the audio signal (102) not comprising a
prominent sound source signal; and
wherein the controller (130; 230) is configured to determine a non-prominent sound
source signal portion in the audio signal (102) and to increase the weighting factors
(a, b) for the non-prominent sound source signal portion compared to a portion of
the audio signal (102) not comprising a non-prominent sound source signal.
6. Apparatus according to one of the previous claims, wherein the controller (130; 230) is
configured to:
generate a test decorrelated signal from a portion of the audio signal (102);
derive a measure for a perceived level of decorrelation from the portion of the audio
signal and the test decorrelated signal; and
to derive the weighting factors (a, b) from the measure for the perceived level of
decorrelation.
7. Apparatus according to claim 6, wherein the decorrelator (120, 520) is configured to
generate the first decorrelated signal (122; r1) based on a reverberation of the audio
signal (102) with a first reverberation time, wherein the controller (130; 230) is
configured to generate the test decorrelated signal based on a reverberation of the
audio signal (102) with a second reverberation time, wherein the second
reverberation time is shorter than the first reverberation time.
8. Apparatus according to one of the previous claims, wherein
the controller (130; 230) is configured to control the weighting factors (a, b) such that
the weighting factors (a, b) each comprise one value of a first multitude of possible
values, the first multitude comprising at least three values comprising a minimum
value, a maximum value and a value between the minimum value and the maximum
value; and wherein
the signal processor (110; 210) is configured to determine spectral weights (217,
219) for a second multitude of frequency bands each representing a portion of the
audio signal (102) in the frequency domain, wherein the spectral weights (217, 219)
each comprise one value of a third multitude of possible values, the third multitude
comprising at least three values comprising a minimum value, a maximum value and
a value between the minimum value and the maximum value.
9. Apparatus according to one of the previous claims, wherein the signal processor (110;
210) is configured to:
process the audio signal (102) such that the audio signal (102) is transferred into the
frequency domain and such that a second multitude of frequency bands (fb1-fb7)
represents the second multitude of portions of the audio signal (102) in the
frequency domain;
to determine for each frequency band (fb1-fb7) a first spectral weight (217)
representing a processing value for transient processing (211) of the audio signal
(102);
to determine for each frequency band (fb1-fb7) a second spectral weight (219)
representing a processing value for tonal processing (213) of the audio signal (102);
and
to apply for each frequency band (fb1-fb7) at least one of the first spectral weight
(217) and the second spectral weight (219) to spectral values of the audio signal
(102) in the frequency band (fb1-fb7);
wherein the first spectral weights (217) and the second spectral weights (219) each
comprise one value of a third multitude of possible values, the third multitude
comprising at least three values comprising a minimum value, a maximum value and
a value between the minimum value and the maximum value.
10. Apparatus according to claim 9, wherein for each of the second multitude of
frequency bands (fb1-fb7) the signal processor (110; 210) is configured to compare
the first spectral weight (217) and the second spectral weight (219) determined for
the frequency band (fb1-fb7), to determine if one of the two values comprises a
smaller value and to apply the spectral weight (217, 219) comprising the smaller
value to the spectral values of the audio signal (102) in the frequency band (fb1-fb7).
11. Apparatus according to one of the previous claims, wherein the decorrelator (520)
comprises a first decorrelating filter (526) configured to filter the processed audio
signal (512, s) to obtain the first decorrelated signal (522, r1) and a second
decorrelation filter (528) configured to filter the processed audio signal (512, s) to
obtain a second decorrelated signal (524, r2), wherein the combiner (140; 240) is
configured for weightedly combining the first decorrelated signal (522, r1), the
second decorrelated signal (524, r2) and the audio signal (102) or the signal (136;
236) derived from the audio signal (102) to obtain the two-channel audio signal (142;
242).
12. Apparatus according to one of the previous claims, wherein for a second plurality of
frequency bands (fb1-fb7), each of the frequency bands (fb1-fb7) comprising a
portion of the audio signal (102) represented in the frequency domain and with a first
time period,
the controller (130; 230) is configured to control the weighting factors (a, b) such
that the weighting factors (a, b) each comprise one value of a first multitude of
possible values, the first multitude comprising at least three values comprising a
minimum value, a maximum value and a value between the minimum value and
the maximum value, and to adapt the weighting factors (a, b) determined for an
actual time period if a ratio or a difference based on a value of the weighting
factors (a, b) determined for the actual time period and a value of the weighting
factors (a, b) determined for a previous time period is larger than or equal to a
threshold value such that a value of the ratio or the difference is reduced; and
the signal processor (110; 210) is configured to determine the spectral weights
(217, 219) each comprising one value of a third multitude of possible values, the
third multitude comprising at least three values comprising a minimum value, a
maximum value and a value between the minimum value and the maximum
value.
13. Sound enhancing system (800) comprising
an apparatus (801) for enhancing an audio signal according to one of the previous
claims;
a signal input (106) configured to receive the audio signal (102);
at least two loudspeakers (808a, 808b) configured to receive the two-channel audio
signal (y1/y2) or a signal derived from the two-channel audio signal (y1/y2) and to
generate acoustic signals from the two-channel audio signal (y1/y2) or the signal
derived from the two-channel audio signal (y1/y2).
14. Method (1100) for enhancing an audio signal (102), comprising:
processing (1110) the audio signal (102) in order to reduce or eliminate transient
and tonal portions of the processed signal (112; 212);
generating (1120) a first decorrelated signal (122, r1) and a second decorrelated
signal (124, r2) from the processed signal (112, 212);
weightedly combining (1130) the first decorrelated signal (122, r1), the second
decorrelated signal (124, r2) and the audio signal (102) or a signal (136; 236)
derived from the audio signal (102) by coherence enhancement using time variant
weighting factors (a, b) and to obtain a two-channel audio signal (142; 242); and
controlling (1140) the time variant weighting factors (a, b) by analyzing the audio
signal (102) so that different portions of the audio signal are multiplied by different
weighting factors (a, b) and the two-channel audio signal (142; 242)
has a time variant degree of decorrelation.
15. Non-transitory storage medium having stored thereon a computer program having a
program code for performing, when running on a computer, a method for enhancing
an audio signal according to claim 14.

Documents

Application Documents

# Name Date
1 Form 5 [19-01-2017(online)].pdf 2017-01-19
2 Form 3 [19-01-2017(online)].pdf 2017-01-19
3 Form 18 [19-01-2017(online)].pdf_330.pdf 2017-01-19
4 Form 18 [19-01-2017(online)].pdf 2017-01-19
5 Drawing [19-01-2017(online)].pdf 2017-01-19
6 Description(Complete) [19-01-2017(online)].pdf_329.pdf 2017-01-19
7 Description(Complete) [19-01-2017(online)].pdf 2017-01-19
8 201717002189.pdf 2017-01-23
9 Form 26 [06-03-2017(online)].pdf 2017-03-06
10 201717002189-Power of Attorney-080317.pdf 2017-03-14
11 201717002189-Correspondence-080317.pdf 2017-03-14
12 Other Patent Document [27-04-2017(online)].pdf 2017-04-27
13 201717002189-OTHERS-010517.pdf 2017-05-02
14 201717002189-Correspondence-010517.pdf 2017-05-02
15 Form 3 [23-06-2017(online)].pdf 2017-06-23
16 201717002189-FORM 3 [14-12-2017(online)].pdf 2017-12-14
17 201717002189-FORM 3 [14-06-2018(online)].pdf 2018-06-14
18 201717002189-FORM 3 [14-12-2018(online)].pdf 2018-12-14
19 201717002189-FORM 3 [27-06-2019(online)].pdf 2019-06-27
20 201717002189-FORM 3 [18-12-2019(online)].pdf 2019-12-18
21 201717002189-FER.pdf 2020-01-31
22 201717002189-FORM 3 [13-05-2020(online)].pdf 2020-05-13
23 201717002189-Information under section 8(2) [17-07-2020(online)].pdf 2020-07-17
24 201717002189-FORM 4(ii) [30-07-2020(online)].pdf 2020-07-30
25 201717002189-OTHERS [30-10-2020(online)].pdf 2020-10-30
26 201717002189-FER_SER_REPLY [30-10-2020(online)].pdf 2020-10-30
27 201717002189-DRAWING [30-10-2020(online)].pdf 2020-10-30
28 201717002189-CLAIMS [30-10-2020(online)].pdf 2020-10-30
29 201717002189-ABSTRACT [30-10-2020(online)].pdf 2020-10-30
30 201717002189-FORM 3 [09-11-2020(online)].pdf 2020-11-09
31 201717002189-FORM 3 [07-05-2021(online)].pdf 2021-05-07
32 201717002189-Information under section 8(2) [10-05-2021(online)].pdf 2021-05-10
33 201717002189-FORM 3 [06-11-2023(online)].pdf 2023-11-06
34 201717002189-PatentCertificate10-11-2023.pdf 2023-11-10
35 201717002189-IntimationOfGrant10-11-2023.pdf 2023-11-10

Search Strategy

1 TotalPatentOne_30-01-2020.pdf

ERegister / Renewals

3rd: 17 Nov 2023 (From 27/07/2017 To 27/07/2018)
4th: 17 Nov 2023 (From 27/07/2018 To 27/07/2019)
5th: 17 Nov 2023 (From 27/07/2019 To 27/07/2020)
6th: 17 Nov 2023 (From 27/07/2020 To 27/07/2021)
7th: 17 Nov 2023 (From 27/07/2021 To 27/07/2022)
8th: 17 Nov 2023 (From 27/07/2022 To 27/07/2023)
9th: 17 Nov 2023 (From 27/07/2023 To 27/07/2024)
10th: 17 Nov 2023 (From 27/07/2024 To 27/07/2025)
11th: 01 Jul 2025 (From 27/07/2025 To 27/07/2026)