Specification
Sound Acquisition via the Extraction of Geometrical Information from Direction of
Arrival Estimates
Description
The present invention relates to audio processing and, in particular, to an apparatus and
method for sound acquisition via the extraction of geometrical information from direction
of arrival estimates.
Traditional spatial sound recording aims at capturing a sound field with multiple
microphones such that at the reproduction side, a listener perceives the sound image as it
was at the recording location. Standard approaches for spatial sound recording usually use
spaced, omnidirectional microphones, for example, in AB stereophony, or coincident
directional microphones, for example, in intensity stereophony, or more sophisticated
microphones, such as a B-format microphone, e.g. in Ambisonics, see, for example,
[1] R. K. Furness, "Ambisonics - An overview," in AES 8th International Conference,
April 1990, pp. 181-189.
For the sound reproduction, these non-parametric approaches derive the desired audio
playback signals (e.g., the signals to be sent to the loudspeakers) directly from the recorded
microphone signals.
Alternatively, methods based on a parametric representation of sound fields can be applied,
which are referred to as parametric spatial audio coders. These methods often employ
microphone arrays to determine one or more audio downmix signals together with spatial
side information describing the spatial sound. Examples are Directional Audio Coding
(DirAC) or the so-called spatial audio microphones (SAM) approach. More details on
DirAC can be found in
[2] Pulkki, V., "Directional audio coding in spatial sound reproduction and stereo
upmixing," in Proceedings of the AES 28th International Conference, pp. 251-258, Pitea,
Sweden, June 30 - July 2, 2006,
[3] V. Pulkki, "Spatial sound reproduction with directional audio coding," J . Audio Eng.
Soc, vol. 55, no. 6, pp. 503-516, June 2007.
For more details on the spatial audio microphones approach, reference is made to
[4] C. Faller, "Microphone Front-Ends for Spatial Audio Coders", in Proceedings of the
AES 125th International Convention, San Francisco, Oct. 2008.
In DirAC, for instance, the spatial cue information comprises the direction-of-arrival
(DOA) of sound and the diffuseness of the sound field computed in a time-frequency
domain. For the sound reproduction, the audio playback signals can be derived based on
the parametric description. In some applications, spatial sound acquisition aims at
capturing an entire sound scene. In other applications spatial sound acquisition only aims at
capturing certain desired components. Close talking microphones are often used for
recording individual sound sources with high signal-to-noise ratio (SNR) and low
reverberation, while more distant configurations such as XY stereophony represent a way
for capturing the spatial image of an entire sound scene. More flexibility in terms of
directivity can be achieved with beamforming, where a microphone array can be used to
realize steerable pick-up patterns. Even more flexibility is provided by the above-mentioned
methods, such as directional audio coding (DirAC) (see [2], [3]) in which it is
possible to realize spatial filters with arbitrary pick-up patterns, as described in
[5] M. Kallinger, H. Ochsenfeld, G. Del Galdo, F. Kuech, D. Mahne, R. Schultz-Amling,
and O. Thiergart, "A spatial filtering approach for directional audio coding," in Audio
Engineering Society Convention 126, Munich, Germany, May 2009,
as well as other signal processing manipulations of the sound scene, see, for example,
[6] R. Schultz-Amling, F. Kuech, O. Thiergart, and M. Kallinger, "Acoustical zooming
based on a parametric sound field representation," in Audio Engineering Society
Convention 128, London UK, May 2010,
[7] J. Herre, C. Falch, D. Mahne, G. Del Galdo, M. Kallinger, and O. Thiergart,
"Interactive teleconferencing combining spatial audio object coding and DirAC
technology," in Audio Engineering Society Convention 8, London UK, May 2010.
All the above-mentioned concepts have in common that the microphones are arranged in a
fixed known geometry. The spacing between microphones is as small as possible for
coincident microphones, whereas it is normally a few centimeters for the other methods.
In the following, we refer to any apparatus for the recording of spatial sound capable of
retrieving direction of arrival of sound (e.g. a combination of directional microphones or a
microphone array, etc.) as a spatial microphone.
Moreover, all the above-mentioned methods have in common that they are limited to a
representation of the sound field with respect to only one point, namely the measurement
location. Thus, the required microphones must be placed at very specific, carefully selected
positions, e.g. close to the sources or such that the spatial image can be captured optimally.
In many applications however, this is not feasible and therefore it would be beneficial to
place several microphones further away from the sound sources and still be able to capture
the sound as desired.
There exist several field reconstruction methods for estimating the sound field in a point in
space other than where it was measured. One method is acoustic holography, as described
in
[8] E. G. Williams, Fourier Acoustics: Sound Radiation and Nearfield Acoustical
Holography, Academic Press, 1999.
Acoustic holography allows the computation of the sound field at any point within an arbitrary
volume, given that the sound pressure and particle velocity are known on its entire surface.
Therefore, when the volume is large, an impractically large number of sensors is required.
Moreover, the method assumes that no sound sources are present inside the volume,
making the algorithm unfeasible for our needs. The related wave field extrapolation (see
also [8]) aims at extrapolating the known sound field on the surface of a volume to outer
regions. The extrapolation accuracy however degrades rapidly for larger extrapolation
distances as well as for extrapolations towards directions orthogonal to the direction of
propagation of the sound, see
[9] A. Kuntz and R. Rabenstein, "Limitations in the extrapolation of wave fields from
circular measurements," in 15th European Signal Processing Conference (EUSIPCO
2007), 2007.
[10] A. Walther and C. Faller, "Linear simulation of spaced microphone arrays using
B-format recordings," in Audio Engineering Society Convention 128, London UK, May
2010,
describes a plane wave model, wherein the field extrapolation is possible only in points far
from the actual sound sources, e.g., close to the measurement point.
A major drawback of traditional approaches is that the spatial image recorded is always
relative to the spatial microphone used. In many applications, it is not possible or feasible
to place a spatial microphone in the desired position, e.g., close to the sound sources. In
this case, it would be more beneficial to place multiple spatial microphones further away
from the sound scene and still be able to capture the sound as desired.
[11] US61/287,596: An Apparatus and a Method for Converting a First Parametric
Spatial Audio Signal into a Second Parametric Spatial Audio Signal,
proposes a method for virtually moving the real recording position to another position
when reproduced over loudspeakers or headphones. However, this approach is limited to a
simple sound scene in which all sound objects are assumed to have equal distance to the
real spatial microphone used for the recording. Furthermore, the method can only take
advantage of one spatial microphone.
It is an object of the present invention to provide improved concepts for sound acquisition
via the extraction of geometrical information. The object of the present invention is solved
by an apparatus according to claim 1, by a method according to claim 24 and by a
computer program according to claim 25.
According to an embodiment, an apparatus for generating an audio output signal to
simulate a recording of a virtual microphone at a configurable virtual position in an
environment is provided. The apparatus comprises a sound events position estimator and
an information computation module. The sound events position estimator is adapted to
estimate a sound source position indicating a position of a sound source in the
environment, wherein the sound events position estimator is adapted to estimate the sound
source position based on a first direction information provided by a first real spatial
microphone being located at a first real microphone position in the environment, and based
on a second direction information provided by a second real spatial microphone being
located at a second real microphone position in the environment.
The information computation module is adapted to generate the audio output signal based
on a first recorded audio input signal being recorded by the first real spatial microphone,
based on the first real microphone position, based on the virtual position of the virtual
microphone, and based on the sound source position.
In an embodiment, the information computation module comprises a propagation
compensator, wherein the propagation compensator is adapted to generate a first modified
audio signal by modifying the first recorded audio input signal, based on a first amplitude
decay between the sound source and the first real spatial microphone and based on a
second amplitude decay between the sound source and the virtual microphone, by
adjusting an amplitude value, a magnitude value or a phase value of the first recorded
audio input signal, to obtain the audio output signal. In an embodiment, the first amplitude
decay may be an amplitude decay of a sound wave emitted by a sound source and the
second amplitude decay may be an amplitude decay of the sound wave emitted by the
sound source.
According to another embodiment, the information computation module comprises a
propagation compensator being adapted to generate a first modified audio signal by
modifying the first recorded audio input signal by compensating a first delay between an
arrival of a sound wave emitted by the sound source at the first real spatial microphone and
an arrival of the sound wave at the virtual microphone by adjusting an amplitude value, a
magnitude value or a phase value of the first recorded audio input signal, to obtain the
audio output signal.
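As a purely illustrative sketch of such a delay compensation in a time-frequency domain, the following Python fragment applies a per-bin phase adjustment; the function name, the array shapes and the fixed speed of sound c = 343 m/s are assumptions made here for illustration and are not prescribed by the embodiment.

```python
import numpy as np

def compensate_delay(P_ref, freqs, d1, s, c=343.0):
    """Apply a per-bin phase shift compensating the difference between the
    propagation delay source -> first real microphone (distance d1) and
    source -> virtual microphone (distance s).

    P_ref : complex STFT of the first recorded signal, shape (K, N)
    freqs : center frequencies in Hz of the K bins, shape (K,)
    d1, s : distances in meters, scalars or arrays broadcastable to (K, N)
    """
    freqs = np.asarray(freqs, float)
    delay = (s - d1) / c                                  # seconds
    phase = np.exp(-2j * np.pi * freqs[:, None] * delay)  # per-bin rotation
    return P_ref * phase
```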
According to an embodiment, it is assumed to use two or more spatial microphones, which
are referred to as real spatial microphones in the following. For each real spatial
microphone, the DOA of the sound can be estimated in the time-frequency domain. From
the information gathered by the real spatial microphones, together with the knowledge of
their relative position, it is possible to compute the output signal of an arbitrary spatial
microphone virtually placed at will in the environment. This spatial microphone is referred
to as virtual spatial microphone in the following.
Note that the Direction of Arrival (DOA) may be expressed as an azimuthal angle in 2D
space, or by an azimuth and elevation angle pair in 3D. Equivalently, a unit norm vector
pointed at the DOA may be used.
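The equivalence of the two representations can be illustrated with a small sketch; the axis convention chosen below (x forward, y left, z up) is an assumption made for illustration only.

```python
import numpy as np

def doa_to_unit_vector(azimuth, elevation=0.0):
    """Convert a DOA given as an azimuth angle (2D) or an azimuth/elevation
    pair (3D, both in radians) into a unit-norm direction vector."""
    return np.array([np.cos(elevation) * np.cos(azimuth),
                     np.cos(elevation) * np.sin(azimuth),
                     np.sin(elevation)])

# Example: azimuth of 90 degrees, zero elevation -> unit vector along y.
print(doa_to_unit_vector(np.pi / 2))   # approx. [0., 1., 0.]
```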
In embodiments, means are provided to capture sound in a spatially selective way, e.g.,
sound originating from a specific target location can be picked up, just as if a close-up
"spot microphone" had been installed at this location. Instead of really installing this spot
microphone, however, its output signal can be simulated by using two or more spatial
microphones placed in other, distant positions.
The term "spatial microphone" refers to any apparatus for the acquisition of spatial sound
capable of retrieving direction of arrival of sound (e.g. combination of directional
microphones, microphone arrays, etc.).
The term "non-spatial microphone" refers to any apparatus that is not adapted for
retrieving direction of arrival of sound, such as a single omnidirectional or directive
microphone.
It should be noted that the term "real spatial microphone" refers to a spatial microphone as
defined above which physically exists.
Regarding the virtual spatial microphone, it should be noted that the virtual spatial
microphone can represent any desired microphone type or microphone combination, e.g. it
can, for example, represent a single omnidirectional microphone, a directional microphone,
a pair of directional microphones as used in common stereo microphones, but also a
microphone array.
The present invention is based on the finding that when two or more real spatial
microphones are used, it is possible to estimate the position in 2D or 3D space of sound
events, thus, position localization can be achieved. Using the determined positions of the
sound events, the sound signal that would have been recorded by a virtual spatial
microphone placed and oriented arbitrarily in space can be computed, as well as the
corresponding spatial side information, such as the Direction of Arrival from the point-of-view
of the virtual spatial microphone.
For this purpose, each sound event may be assumed to represent a point-like sound source,
e.g. an isotropic point-like sound source. In the following, "real sound source" refers to an
actual sound source physically existing in the recording environment, such as talkers or
musical instruments. On the contrary, with "sound source" or "sound event" we refer
in the following to an effective sound source, which is active at a certain time instant or in
a certain time-frequency bin, wherein the sound sources may, for example, represent real
sound sources or mirror image sources. According to an embodiment, it is implicitly
assumed that the sound scene can be modeled as a multitude of such sound events or point
like sound sources. Furthermore, each source may be assumed to be active only within a
specific time and frequency slot in a predefined time-frequency representation. The
distance between the real spatial microphones may be such that the resulting temporal
difference in propagation times is shorter than the temporal resolution of the time-frequency
representation. The latter assumption guarantees that a certain sound event is
picked up by all spatial microphones within the same time slot. This implies that the DOAs
estimated at different spatial microphones for the same time-frequency slot indeed
correspond to the same sound event. This assumption is not difficult to meet with real
spatial microphones placed at a few meters from each other even in large rooms (such as
living rooms or conference rooms) with a temporal resolution of even a few ms.
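A quick numerical check of this assumption, under an assumed array spacing of 2 m, is sketched below; the resulting worst-case propagation-time difference is of the same order as the temporal resolution quoted above, and a typical STFT window of a few tens of milliseconds covers it comfortably.

```python
# Worst-case propagation-time difference between two arrays placed a few
# meters apart (2 m assumed here), to be compared against the temporal
# resolution of the chosen time-frequency representation.
c = 343.0                     # speed of sound in m/s (assumed)
spacing = 2.0                 # assumed distance between the two arrays in m
print(f"max delay difference: {1e3 * spacing / c:.1f} ms")   # about 5.8 ms
```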
Microphone arrays may be employed to localize sound sources. The localized sound
sources may have different physical interpretations depending on their nature. When the
microphone arrays receive direct sound, they may be able to localize the position of a true
sound source (e.g. talkers). When the microphone arrays receive reflections, they may
localize the position of a mirror image source. Mirror image sources are also sound
sources.
A parametric method capable of estimating the sound signal of a virtual microphone placed
at an arbitrary location is provided. In contrast to the methods previously described, the
proposed method does not aim directly at reconstructing the sound field, but rather aims at
providing sound that is perceptually similar to the one which would be picked up by a
microphone physically placed at this location. This may be achieved by employing a
parametric model of the sound field based on point-like sound sources, e.g. isotropic
point-like sound sources (IPLS). The required geometrical information, namely the instantaneous
position of all IPLS, may be obtained by conducting triangulation of the directions of
arrival estimated with two or more distributed microphone arrays. This may be achieved
by obtaining knowledge of the relative position and orientation of the arrays.
Notwithstanding, no a priori knowledge on the number and position of the actual sound
sources (e.g. talkers) is necessary. Given the parametric nature of the proposed concepts,
e.g. the proposed apparatus or method, the virtual microphone can possess an arbitrary
directivity pattern as well as arbitrary physical or non-physical behaviors, e.g., with
respect to the pressure decay with distance. The presented approach has been verified by
studying the parameter estimation accuracy based on measurements in a reverberant
environment.
While conventional recording techniques for spatial audio are limited in so far as the
spatial image obtained is always relative to the position in which the microphones have
been physically placed, embodiments of the present invention take into account that in
many applications, it is desired to place the microphones outside the sound scene and yet
be able to capture the sound from an arbitrary perspective. According to embodiments,
concepts are provided which virtually place a virtual microphone at an arbitrary point in
space, by computing a signal perceptually similar to the one which would have been
picked up, if the microphone had been physically placed in the sound scene. Embodiments
may apply concepts, which may employ a parametric model of the sound field based on
point-like sound sources, e.g. point-like isotropic sound sources. The required geometrical
information may be gathered by two or more distributed microphone arrays.
According to an embodiment, the sound events position estimator may be adapted to
estimate the sound source position based on a first direction of arrival of the sound wave
emitted by the sound source at the first real microphone position as the first direction
information and based on a second direction of arrival of the sound wave at the second real
microphone position as the second direction information.
In another embodiment, the information computation module may comprise a spatial side
information computation module for computing spatial side information. The information
computation module may be adapted to estimate the direction of arrival or an active sound
intensity at the virtual microphone as spatial side information, based on a position vector of
the virtual microphone and based on a position vector of the sound event.
According to a further embodiment, the propagation compensator may be adapted to
generate the first modified audio signal in a time-frequency domain, by compensating the
first delay or amplitude decay between the arrival of the sound wave emitted by the sound
source at the first real spatial microphone and the arrival of the sound wave at the virtual
microphone by adjusting said magnitude value of the first recorded audio input signal
being represented in a time-frequency domain.
In an embodiment, the propagation compensator may be adapted to conduct propagation
compensation by generating a modified magnitude value of the first modified audio signal
by applying the formula:

P_v(k, n) = (d_1(k, n) / s(k, n)) · P_ref(k, n),

wherein d_1(k, n) is the distance between the position of the first real spatial microphone
and the position of the sound event, wherein s(k, n) is the distance between the virtual
position of the virtual microphone and the sound source position of the sound event,
wherein P_ref(k, n) is a magnitude value of the first recorded audio input signal being
represented in a time-frequency domain, and wherein P_v(k, n) is the modified magnitude
value.
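A direct, purely illustrative transcription of this formula into Python (with hypothetical function and variable names) could read:

```python
import numpy as np

def compensate_magnitude(P_ref, d1, s):
    """Scale the recorded magnitude by d1/s, following the formula above:
    the 1/r amplitude decay from the sound event to the first real spatial
    microphone is removed and the decay to the virtual position applied."""
    return (np.asarray(d1) / np.asarray(s)) * P_ref
```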
In a further embodiment, the information computation module may moreover comprise a
combiner, wherein the propagation compensator may be furthermore adapted to modify a
second recorded audio input signal, being recorded by the second real spatial microphone,
by compensating a second delay or amplitude decay between an arrival of the sound wave
emitted by the sound source at the second real spatial microphone and an arrival of the
sound wave at the virtual microphone, by adjusting an amplitude value, a magnitude value
or a phase value of the second recorded audio input signal to obtain a second modified
audio signal, and wherein the combiner may be adapted to generate a combination signal
by combining the first modified audio signal and the second modified audio signal, to
obtain the audio output signal.
According to another embodiment, the propagation compensator may furthermore be
adapted to modify one or more further recorded audio input signals, being recorded by the
one or more further real spatial microphones, by compensating delays between an arrival
of the sound wave at the virtual microphone and an arrival of the sound wave emitted by
the sound source at each one of the further real spatial microphones. Each of the delays or
amplitude decays may be compensated by adjusting an amplitude value, a magnitude value
or a phase value of each one of the further recorded audio input signals to obtain a plurality
of third modified audio signals. The combiner may be adapted to generate a combination
signal by combining the first modified audio signal and the second modified audio signal
and the plurality of third modified audio signals, to obtain the audio output signal.
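A minimal sketch of such a combiner is given below; the plain weighted average used as the combination rule is an assumption made for illustration, as the embodiments leave the concrete combination strategy open.

```python
import numpy as np

def combine_signals(modified_signals, weights=None):
    """Combine the modified (delay/decay-compensated) signals of all real
    spatial microphones into one output signal. A (weighted) average is
    only one possible combination rule."""
    stack = np.stack([np.asarray(m) for m in modified_signals])
    if weights is None:
        weights = np.full(len(modified_signals), 1.0 / len(modified_signals))
    return np.tensordot(weights, stack, axes=1)   # weighted sum over mics
```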
In a further embodiment, the information computation module may comprise a spectral
weighting unit for generating a weighted audio signal by modifying the first modified
audio signal depending on a direction of arrival of the sound wave at the virtual position of
the virtual microphone and depending on a virtual orientation of the virtual microphone to
obtain the audio output signal, wherein the first modified audio signal may be modified in
a time-frequency domain.
Moreover, the information computation module may comprise a spectral weighting unit for
generating a weighted audio signal by modifying the combination signal depending on a
direction of arrival of the sound wave at the virtual position of the virtual microphone and
a virtual orientation of the virtual microphone to obtain the audio output signal, wherein
the combination signal may be modified in a time-frequency domain.
According to another embodiment, the spectral weighting unit may be adapted to apply the
weighting factor
α + (1 − α) cos(φ_v(k, n)),

or the weighting factor

0.5 + 0.5 cos(φ_v(k, n))

on the weighted audio signal,
wherein φ_v(k, n) indicates the angle of the direction of arrival of the sound wave emitted
by the sound source at the virtual position of the virtual microphone.
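A purely illustrative transcription of this spectral weighting (with hypothetical names) could read:

```python
import numpy as np

def apply_spectral_weight(P, phi_v, alpha=0.5):
    """Apply the weighting factor alpha + (1 - alpha) * cos(phi_v(k, n))
    per time-frequency bin. alpha = 0.5 yields the second factor above,
    0.5 + 0.5 * cos(phi_v), i.e. a cardioid-like virtual pick-up pattern."""
    return (alpha + (1.0 - alpha) * np.cos(phi_v)) * P
```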
In an embodiment, the propagation compensator is furthermore adapted to generate a third
modified audio signal by modifying a third recorded audio input signal recorded by an
omnidirectional microphone by compensating a third delay or amplitude decay between an
arrival of the sound wave emitted by the sound source at the omnidirectional microphone
and an arrival of the sound wave at the virtual microphone by adjusting an amplitude
value, a magnitude value or a phase value of the third recorded audio input signal, to obtain
the audio output signal.
In a further embodiment, the sound events position estimator may be adapted to estimate a
sound source position in a three-dimensional environment.
Moreover, according to another embodiment, the information computation module may
further comprise a diffuseness computation unit being adapted to estimate a diffuse sound
energy at the virtual microphone or a direct sound energy at the virtual microphone.
The diffuseness computation unit may, according to a further embodiment, be adapted to
estimate the diffuse sound energy at the virtual microphone by applying the
formula:
E_diff^(VM) = (1/N) · Σ_{i=1}^{N} E_diff^(SMi),

wherein N is the number of a plurality of real spatial microphones comprising the first and
the second real spatial microphone, and wherein E_diff^(SMi) is the diffuse sound energy at the
i-th real spatial microphone.
In a further embodiment, the diffuseness computation unit may be adapted to estimate the
direct sound energy by applying the formula:
E_dir^(VM) = (distance SMi − IPLS / distance VM − IPLS)^2 · E_dir^(SMi),

wherein "distance SMi − IPLS" is the distance between a position of the i-th real
microphone and the sound source position, wherein "distance VM − IPLS" is the distance
between the virtual position and the sound source position, and wherein E_dir^(SMi) is the
direct energy at the i-th real spatial microphone.
Moreover, according to another embodiment, the diffuseness computation unit may
furthermore be adapted to estimate the diffuseness at the virtual microphone by estimating
the diffuse sound energy at the virtual microphone and the direct sound energy at the
virtual microphone and by applying the formula:
ψ^(VM) = E_diff^(VM) / (E_dir^(VM) + E_diff^(VM)),

wherein ψ^(VM) indicates the diffuseness at the virtual microphone being estimated, wherein
E_diff^(VM) indicates the diffuse sound energy being estimated and wherein E_dir^(VM) indicates the
direct sound energy being estimated.
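The three estimation steps can be summarized in a short illustrative sketch; averaging the per-array direct-energy estimates is an assumption made here, since the formula above is stated per real spatial microphone i.

```python
import numpy as np

def diffuseness_at_vm(E_diff_sm, E_dir_sm, dist_sm_ipls, dist_vm_ipls):
    """Combine the three formulas above: average the diffuse energies of the
    N real spatial microphones, translate their direct energies to the
    virtual position via the squared distance ratio, and form the
    diffuseness as diffuse / (direct + diffuse)."""
    E_diff_vm = np.mean(E_diff_sm)
    E_dir_vm = np.mean((np.asarray(dist_sm_ipls) / dist_vm_ipls) ** 2
                       * np.asarray(E_dir_sm))
    return E_diff_vm / (E_dir_vm + E_diff_vm)
```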
Preferred embodiments of the present invention will be described in the following, in
which:
Fig. 1 illustrates an apparatus for generating an audio output signal according to an
embodiment,
Fig. 2 illustrates the inputs and outputs of an apparatus and a method for
generating an audio output signal according to an embodiment,
Fig. 3 illustrates the basic structure of an apparatus according to an embodiment
which comprises a sound events position estimator and an information
computation module,
Fig. 4 shows an exemplary scenario in which the real spatial microphones are
depicted as Uniform Linear Arrays of 3 microphones each,
Fig. 5 depicts two spatial microphones in 3D for estimating the direction of arrival
in 3D space,
Fig. 6 illustrates a geometry where an isotropic point-like sound source of the
current time-frequency bin (k, n) is located at a position p_IPLS(k, n),
Fig. 7 depicts the information computation module according to an embodiment,
Fig. 8 depicts the information computation module according to another
embodiment,
Fig. 9 shows two real spatial microphones, a localized sound event and a position
of a virtual spatial microphone, together with the corresponding delays and
amplitude decays,
Fig. 10 illustrates how to obtain the direction of arrival relative to a virtual
microphone according to an embodiment,
Fig. 11 depicts a possible way to derive the DOA of the sound from the point of
view of the virtual microphone according to an embodiment,
Fig. 12 illustrates an information computation block additionally comprising a
diffuseness computation unit according to an embodiment,
Fig. 13 depicts a diffuseness computation unit according to an embodiment,
Fig. 14 illustrates a scenario, where the sound events position estimation is not
possible, and
Figs. 15a-15c illustrate scenarios where two microphone arrays receive direct sound,
sound reflected by a wall and diffuse sound.
Fig. 1 illustrates an apparatus for generating an audio output signal to simulate a recording
of a virtual microphone at a configurable virtual position posVmic in an environment. The
apparatus comprises a sound events position estimator 110 and an information computation
module 120. The sound events position estimator 110 receives a first direction information
di1 from a first real spatial microphone and a second direction information di2 from a
second real spatial microphone. The sound events position estimator 110 is adapted to
estimate a sound source position ssp indicating a position of a sound source in the
environment, the sound source emitting a sound wave, wherein the sound events position
estimator 110 is adapted to estimate the sound source position ssp based on a first direction
information di1 provided by a first real spatial microphone being located at a first real
microphone position pos1mic in the environment, and based on a second direction
information di2 provided by a second real spatial microphone being located at a second
real microphone position in the environment. The information computation module 120 is
adapted to generate the audio output signal based on a first recorded audio input signal isl
being recorded by the first real spatial microphone, based on the first real microphone
position pos1mic and based on the virtual position posVmic of the virtual microphone. The
information computation module 120 comprises a propagation compensator being adapted
to generate a first modified audio signal by modifying the first recorded audio input signal
is1 by compensating a first delay or amplitude decay between an arrival of the sound wave
emitted by the sound source at the first real spatial microphone and an arrival of the sound
wave at the virtual microphone by adjusting an amplitude value, a magnitude value or a
phase value of the first recorded audio input signal isl, to obtain the audio output signal.
Fig. 2 illustrates the inputs and outputs of an apparatus and a method according to an
embodiment. Information from two or more real spatial microphones 111, 112, ..., 11N is
fed to the apparatus/is processed by the method. This information comprises audio signals
picked up by the real spatial microphones as well as direction information from the real
spatial microphones, e.g. direction of arrival (DOA) estimates. The audio signals and the
direction information, such as the direction of arrival estimates, may be expressed in a
time-frequency domain. If, for example, a 2D geometry reconstruction is desired and a
traditional STFT (short time Fourier transformation) domain is chosen for the
representation of the signals, the DOA may be expressed as azimuth angles dependent on k
and n, namely the frequency and time indices.
In embodiments, the sound event localization in space, as well as describing the position of
the virtual microphone may be conducted based on the positions and orientations of the
real and virtual spatial microphones in a common coordinate system. This information may
be represented by the inputs 121 ... 12N and input 104 in Fig. 2. The input 104 may
additionally specify the characteristic of the virtual spatial microphone, e.g., its position
and pick-up pattern, as will be discussed in the following. If the virtual spatial microphone
comprises multiple virtual sensors, their positions and the corresponding different pick-up
patterns may be considered.
The output of the apparatus or a corresponding method may be, when desired, one or more
sound signals 105, which may have been picked up by a spatial microphone defined and
placed as specified by 104. Moreover, the apparatus (or rather the method) may provide as
output corresponding spatial side information 106 which may be estimated by employing
the virtual spatial microphone.
Fig. 3 illustrates an apparatus according to an embodiment, which comprises two main
processing units, a sound events position estimator 201 and an information computation
module 202. The sound events position estimator 201 may carry out geometrical
reconstruction on the basis of the DOAs comprised in inputs 111 ... 11N and based on the
knowledge of the position and orientation of the real spatial microphones, where the DOAs
have been computed. The output of the sound events position estimator 205 comprises the
position estimates (either in 2D or 3D) of the sound sources where the sound events occur
for each time and frequency bin. The second processing block 202 is an information
computation module. According to the embodiment of Fig. 3, the second processing block
202 computes a virtual microphone signal and spatial side information. It is therefore also
referred to as virtual microphone signal and side information computation block 202. The
virtual microphone signal and side information computation block 202 uses the sound
events' positions 205 to process the audio signals comprised in 111 ... 11N to output the
virtual microphone audio signal 105. Block 202, if required, may also compute the spatial
side information 106 corresponding to the virtual spatial microphone. Embodiments below
illustrate possibilities, how blocks 201 and 202 may operate.
In the following, position estimation of a sound events position estimator according to an
embodiment is described in more detail.
Depending on the dimensionality of the problem (2D or 3D) and the number of spatial
microphones, several solutions for the position estimation are possible.
If two spatial microphones exist in 2D (the simplest possible case), a simple triangulation
is possible. Fig. 4 shows an exemplary scenario in which the real spatial microphones are
depicted as Uniform Linear Arrays (ULAs) of 3 microphones each. The DOAs, expressed
as the azimuth angles a1(k, n) and a2(k, n), are computed for the time-frequency bin (k, n).
This is achieved by employing a proper DOA estimator, such as ESPRIT,
[13] R. Roy, A. Paulraj, and T. Kailath, "Direction-of-arrival estimation by subspace
rotation methods - ESPRIT," in IEEE International Conference on Acoustics, Speech, and
Signal Processing (ICASSP), Stanford, CA, USA, April 1986,
or (root) MUSIC, see
[14] R. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE
Transactions on Antennas and Propagation, vol. 34, no. 3, pp. 276-280, 1986
to the pressure signals transformed into the time-frequency domain.
In Fig. 4, two real spatial microphones, here, two real spatial microphone arrays 410, 420
are illustrated. The two estimated DOAs a1(k, n) and a2(k, n) are represented by two lines,
a first line 430 representing DOA a1(k, n) and a second line 440 representing DOA a2(k,
n). The triangulation is possible via simple geometrical considerations knowing the
position and orientation of each array.
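A minimal sketch of this 2D triangulation (with hypothetical names, and with the azimuths already expressed in the global coordinate system) is given below; it returns no result when the two lines are parallel, which leads over to the failure case discussed next.

```python
import numpy as np

def triangulate_2d(p1, a1, p2, a2):
    """Intersect the two DOA lines: each array at position p_i emits a ray
    along the direction of its global azimuth a_i. Returns the estimated
    sound-event position, or None when the lines are (almost) parallel."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1 = np.array([np.cos(a1), np.sin(a1)])
    e2 = np.array([np.cos(a2), np.sin(a2)])
    A = np.column_stack((e1, -e2))          # solve p1 + t1*e1 = p2 + t2*e2
    if abs(np.linalg.det(A)) < 1e-9:        # parallel lines: no intersection
        return None
    t1, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t1 * e1
```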
The triangulation fails when the two lines 430, 440 are exactly parallel. In real
applications, however, this is very unlikely. However, not all triangulation results
correspond to a physical or feasible position for the sound event in the considered space.
For example, the estimated position of the sound event might be too far away or even
outside the assumed space, indicating that probably the DOAs do not correspond to any
sound event which can be physically interpreted with the used model. Such results may be
caused by sensor noise or too strong room reverberation. Therefore, according to an
embodiment, such undesired results are flagged such that the information computation
module 202 can treat them properly.
Fig. 5 depicts a scenario, where the position of a sound event is estimated in 3D space.
Proper spatial microphones are employed, for example, a planar or 3D microphone array.
In Fig. 5, a first spatial microphone 510, for example, a first 3D microphone array, and a
second spatial microphone 520, e.g., a second 3D microphone array, are illustrated. The DOA
in the 3D space may, for example, be expressed as azimuth and elevation. Unit vectors
530, 540 may be employed to express the DOAs. Two lines 550, 560 are projected
according to the DOAs. In 3D, even with very reliable estimates, the two lines 550, 560
projected according to the DOAs might not intersect. However, the triangulation can still
be carried out, for example, by choosing the middle point of the smallest segment
connecting the two lines.
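A minimal sketch of this 3D variant (hypothetical names) computes the closest points on the two lines and returns their midpoint:

```python
import numpy as np

def triangulate_3d(p1, e1, p2, e2):
    """Estimate the sound-event position as the midpoint of the shortest
    segment connecting the two (generally skew) DOA lines p_i + t_i * e_i.
    Returns None when the lines are nearly parallel."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    w = p1 - p2
    a, b, c = e1 @ e1, e1 @ e2, e2 @ e2
    d, e = e1 @ w, e2 @ w
    denom = a * c - b * b                   # zero for parallel lines
    if abs(denom) < 1e-9:
        return None
    t1 = (b * e - c * d) / denom            # closest point on line 1
    t2 = (a * e - b * d) / denom            # closest point on line 2
    return 0.5 * ((p1 + t1 * e1) + (p2 + t2 * e2))
```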
Similarly to the 2D case, the triangulation may fail or may yield unfeasible results for
certain combinations of directions, which may then also be flagged, e.g. to the information
computation module 202 of Fig. 3.
If more than two spatial microphones exist, several solutions are possible. For example, the
triangulation explained above, could be carried out for all pairs of the real spatial
microphones (if N = 3, 1 with 2, 1 with 3, and 2 with 3). The resulting positions may then
be averaged (along x and y, and, if 3D is considered, z).
Alternatively, more complex concepts may be used. For example, probabilistic approaches
may be applied as described in
[15] J. Michael Steele, "Optimal Triangulation of Random Samples in the Plane," The
Annals of Probability, Vol. 10, No. 3 (Aug. 1982), pp. 548-553.
According to an embodiment, the sound field may be analyzed in the time-frequency
domain, for example, obtained via a short-time Fourier transform (STFT), in which k and n
denote the frequency index k and time index n, respectively. The complex pressure P_v(k, n)
at an arbitrary position p_v for a certain k and n is modeled as a single spherical wave
emitted by a narrow-band isotropic point-like source, e.g. by employing the formula:

P_v(k, n) = P_IPLS(k, n) · γ(k, p_IPLS(k, n), p_v),   (1)

where P_IPLS(k, n) is the signal emitted by the IPLS at its position p_IPLS(k, n). The complex
factor γ(k, p_IPLS, p_v) expresses the propagation from p_IPLS(k, n) to p_v, e.g., it introduces
appropriate phase and magnitude modifications. Here, the assumption may be applied that
in each time-frequency bin only one IPLS is active. Nevertheless, multiple narrow-band
IPLSs located at different positions may also be active at a single time instance.
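The text only requires γ to introduce appropriate phase and magnitude modifications; one plausible concrete choice, adopted here purely as an assumption, is the free-field spherical-wave factor with 1/r decay, where the wavenumber corresponding to bin k must be supplied by the caller:

```python
import numpy as np

def gamma(k_wave, p_ipls, p_v):
    """One possible model (an assumption, not fixed by the text) for the
    propagation factor of formula (1): a free-field spherical wave with
    1/r magnitude decay and phase rotation exp(-j * k * r)."""
    r = np.linalg.norm(np.asarray(p_v, float) - np.asarray(p_ipls, float))
    return np.exp(-1j * k_wave * r) / r
```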
Each IPLS either models direct sound or a distinct room reflection. Its position p_IPLS(k, n)
may ideally correspond to an actual sound source located inside the room, or a mirror
image sound source located outside, respectively. Therefore, the position p_IPLS(k, n) also
indicates the position of a sound event.
Please note that the term "real sound sources" denotes the actual sound sources physically
existing in the recording environment, such as talkers or musical instruments. On the
contrary, with "sound sources" or "sound events" or "IPLS" we refer to effective sound
sources, which are active at certain time instants or at certain time-frequency bins, wherein
the sound sources may, for example, represent real sound sources or mirror image sources.
Figs. 15a and 15b illustrate microphone arrays localizing sound sources. The localized sound
sources may have different physical interpretations depending on their nature. When the
microphone arrays receive direct sound, they may be able to localize the position of a true
sound source (e.g. talkers). When the microphone arrays receive reflections, they may
localize the position of a mirror image source. Mirror image sources are also sound
sources.
Fig. 15a illustrates a scenario, where two microphone arrays 151 and 152 receive direct
sound from an actual sound source (a physically existing sound source) 153.
Fig. 15b illustrates a scenario, where two microphone arrays 161, 162 receive reflected
sound, wherein the sound has been reflected by a wall. Because of the reflection, the
microphone arrays 161, 162 localize the position, where the sound appears to come from,
at a position of a mirror image source 165, which is different from the position of the
speaker 163.
Both the actual sound source 153 of Fig. 15a, as well as the mirror image source 165 are
sound sources.
Fig. 15c illustrates a scenario, where two microphone arrays 171, 172 receive diffuse
sound and are not able to localize a sound source.
The single-wave model is accurate only for mildly reverberant environments given
that the source signals fulfill the W-disjoint orthogonality (WDO) condition, i.e. the
time-frequency overlap is sufficiently small. This is normally true for speech signals, see, for
example,
example,
[12] S. Rickard and O. Yilmaz, "On the approximate W-disjoint orthogonality of speech,"
in Acoustics, Speech and Signal Processing, 2002. ICASSP 2002. IEEE International
Conference on, April 2002, vol. 1.
However, the model also provides a good estimate for other environments and is therefore
also applicable for those environments.
In the following, the estimation of the positions p_IPLS(k, n) according to an embodiment is
explained. The position p_IPLS(k, n) of an active IPLS in a certain time-frequency bin, and
thus the estimation of a sound event in a time-frequency bin, is estimated via triangulation
on the basis of the direction of arrival (DOA) of sound measured in at least two different
observation points.
Fig. 6 illustrates a geometry, where the IPLS of the current time-frequency slot (k, n) is
located in the unknown position p_IPLS(k, n). In order to determine the required DOA
information, two real spatial microphones, here, two microphone arrays, are employed
having a known geometry, position and orientation, which are placed in positions 610 and
620, respectively. The vectors p_1 and p_2 point to the positions 610, 620, respectively. The
array orientations are defined by the unit vectors c_1 and c_2. The DOA of the sound is
determined in the positions 610 and 620 for each (k, n) using a DOA estimation algorithm,
for instance as provided by the DirAC analysis (see [2], [3]). By this, a first point-of-view
unit vector e_1^POV(k, n) and a second point-of-view unit vector e_2^POV(k, n) with respect to a
point of view of the microphone arrays (both not shown in Fig. 6) may be provided as
output of the DirAC analysis. For example, when operating in 2D, the first point-of-view
unit vector results to:

e_1^POV(k, n) = [cos(φ_1(k, n)), sin(φ_1(k, n))]^T.   (2)
Here, φ_1(k, n) represents the azimuth of the DOA estimated at the first microphone array,
as depicted in Fig. 6. The corresponding DOA unit vectors e_1(k, n) and e_2(k, n), with
respect to the global coordinate system in the origin, may be computed by applying the
formulae:
e_1(k, n) = R_1 · e_1^POV(k, n),
e_2(k, n) = R_2 · e_2^POV(k, n),   (3)

where R_1 and R_2 are coordinate transformation matrices, e.g.,

R_1 = [ c_1,x   −c_1,y
        c_1,y    c_1,x ],   (4)

when operating in 2D and c_1 = [c_1,x, c_1,y]^T.
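A purely illustrative transcription of formulae (2) to (4) (with hypothetical names) could read:

```python
import numpy as np

def pov_to_global(phi, c1):
    """Rotate a point-of-view DOA unit vector into the global coordinate
    system following formulae (2)-(4); c1 = [c1x, c1y] is the orientation
    unit vector of the array."""
    e_pov = np.array([np.cos(phi), np.sin(phi)])     # formula (2)
    R1 = np.array([[c1[0], -c1[1]],
                   [c1[1],  c1[0]]])                 # formula (4)
    return R1 @ e_pov                                # formula (3)
```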