
Apparatus And Method For Generating A Plurality Of Parametric Audio Streams And Apparatus And Method For Generating A Plurality Of Loudspeaker Signals

Abstract: An apparatus (100) for generating a plurality of parametric audio streams (125) (θi, Ψi, Wi) from an input spatial audio signal (105) obtained from a recording in a recording space comprises a segmentor (110) and a generator (120). The segmentor (110) is configured for providing at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) from the input spatial audio signal (105), wherein the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) are associated with corresponding segments (Segi) of the recording space. The generator (120) is configured for generating a parametric audio stream for each of the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) to obtain the plurality of parametric audio streams (125) (θi, Ψi, Wi).


Patent Information

Application #:
Filing Date: 15 May 2015
Publication Number: 03/2016
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-10-30
Renewal Date:

Applicants

FRAUNHOFER GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V.
Hansastraße 27c 80686 München
TECHNISCHE UNIVERSITÄT ILMENAU
Ehrenbergstraße 29 98693 Ilmenau

Inventors

1. KÜCH Fabian
Schützenweg 13 91052 Erlangen
2. DEL GALDO Giovanni
Neue Länder 20 98693 Martinroda
3. KUNTZ Achim
Weiherstrasse 12 91334 Hemhofen
4. PULKKI Ville
Ylaportti 4A7 FIN 02210 Espoo
5. POLITIS Archontis
Korppaanmäentie 25A6 FI 00300 Helsinki

Specification

Apparatus and Method for Generating a Plurality of Parametric Audio Streams and Apparatus and Method for Generating a Plurality of Loudspeaker Signals

Description

Technical Field

The present invention generally relates to a parametric spatial audio processing, and in particular to an apparatus and a method for generating a plurality of parametric audio streams and an apparatus and a method for generating a plurality of loudspeaker signals. Further embodiments of the present invention relate to a sector-based parametric spatial audio processing.

Background of the Invention

In multichannel listening, the listener is surrounded with multiple loudspeakers. A variety of known methods exist to capture audio for such setups. Let us first consider loudspeaker systems and the spatial impression that can be created with them. Without special techniques, common two-channel stereophonic setups can only create auditory events on the line connecting the loudspeakers. Sound emanating from other directions cannot be produced. Logically, by using more loudspeakers around the listener, more directions can be covered and a more natural spatial impression can be created. The best-known multichannel loudspeaker system and layout is the 5.1 standard ("ITU-R 775-1"), which consists of five loudspeakers at azimuthal angles of 0°, ±30° and ±110° with respect to the listening position. Other systems with a varying number of loudspeakers located at different directions are also known.

In the art, several different recording methods have been designed for the previously mentioned loudspeaker systems, in order to reproduce the spatial impression in the listening situation as it would be perceived in the recording environment. The ideal way to record spatial sound for a chosen multichannel loudspeaker system would be to use the same number of microphones as there are loudspeakers. In such a case, the directivity patterns of the microphones should also correspond to the loudspeaker layout such that sound from any single direction would only be recorded with one, two, or three microphones. The more loudspeakers are used, the narrower the directivity patterns thus need to be. However, such narrow directional microphones are relatively expensive and typically have a non-flat frequency response, which is not desired. Furthermore, using several microphones with too broad directivity patterns as input to multichannel reproduction results in a colored and blurred auditory perception, due to the fact that sound emanating from a single direction is always reproduced with more loudspeakers than necessary. Hence, current microphones are best suited for two-channel recording and reproduction without the goal of a surrounding spatial impression.

Another known approach to spatial sound recording is to record with a large number of microphones distributed over a wide spatial area. For example, when recording an orchestra on a stage, the single instruments can be picked up by so-called spot microphones, which are positioned close to the sound sources. The spatial distribution of the frontal sound stage can, for example, be captured by conventional stereo microphones. The sound field components corresponding to the late reverberation can be captured by several microphones placed at a relatively far distance from the stage. A sound engineer can then mix the desired multichannel output by using a combination of all microphone channels available. However, this recording technique implies a very large recording setup and hand-crafted mixing of the recorded channels, which is not always feasible in practice.

Conventional systems for the recording and reproduction of spatial audio based on directional audio coding (DirAC), as described in T. Lokki, J. Merimaa, V. Pulkki: Method for Reproducing Natural or Modified Spatial Impression in Multichannel Listening, U.S. Patent 7,787,638 B2, Aug. 31, 2010 and V. Pulkki: Spatial Sound Reproduction with Directional Audio Coding, J. Audio Eng. Soc., Vol. 55, No. 6, pp. 503-516, 2007, rely on a simple global model for the sound field. Therefore, they suffer from some systematic drawbacks, which limit the achievable sound quality and experience in practice.

A general problem of known solutions is that they are relatively complex and typically associated with a degradation of the spatial sound quality.

Therefore, it is an object of the present invention to provide an improved concept for a parametric spatial audio processing which allows for a higher quality, more realistic spatial sound recording and reproduction using relatively simple and compact microphone configurations.

Summary of the Invention

This object is achieved by an apparatus according to claim 1, an apparatus according to claim 13, a method according to claim 15, a method according to claim 16, a computer program according to claim 17 or a computer program according to claim 18.

According to an embodiment of the present invention, an apparatus for generating a plurality of parametric audio streams from an input spatial audio signal obtained from a recording in a recording space comprises a segmentor and a generator. The segmentor is configured for providing at least two input segmental audio signals from the input spatial audio signal. Here, the at least two input segmental audio signals are associated with corresponding segments of the recording space. The generator is configured for generating a parametric audio stream for each of the at least two input segmental audio signals to obtain the plurality of parametric audio streams.

The basic idea underlying the present invention is that improved parametric spatial audio processing can be achieved if at least two input segmental audio signals are provided from the input spatial audio signal, wherein the at least two input segmental audio signals are associated with corresponding segments of the recording space, and if a parametric audio stream is generated for each of the at least two input segmental audio signals to obtain the plurality of parametric audio streams. This makes it possible to achieve the higher quality, more realistic spatial sound recording and reproduction using relatively simple and compact microphone configurations.

According to a further embodiment, the segmentor is configured to use a directivity pattern for each of the segments of the recording space. Here, the directivity pattern indicates a directivity of the at least two input segmental audio signals. By the use of the directivity patterns, it is possible to obtain a better model match of the observed sound field, especially in complex sound scenes.

According to a further embodiment, the generator is configured for obtaining the plurality of parametric audio streams, wherein the plurality of parametric audio streams each comprise a component of the at least two input segmental audio signals and a corresponding parametric spatial information. For example, the parametric spatial information of each of the parametric audio streams comprises a direction-of-arrival (DOA) parameter and/or a diffuseness parameter. By providing the DOA parameters and/or the diffuseness parameters, it is possible to describe the observed sound field in a parametric signal representation domain.
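As an illustration of this signal structure, the following sketch bundles a segmental audio component with its DOA and diffuseness parameters. The class and field names are hypothetical; the patent does not prescribe any particular data layout:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ParametricAudioStream:
    """One stream per segment Seg_i: a downmix plus parametric side information."""
    w: np.ndarray    # segmental audio component W_i (e.g. one value per time-frequency tile)
    doa: np.ndarray  # direction-of-arrival parameter theta_i per tile (radians)
    psi: np.ndarray  # diffuseness parameter Psi_i in [0, 1] per tile
```

A transmitter would emit one such stream per segment; the renderer consumes them independently.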

According to a further embodiment, an apparatus for generating a plurality of loudspeaker signals from a plurality of parametric audio streams derived from an input spatial audio signal recorded in a recording space comprises a renderer and a combiner. The renderer is configured for providing a plurality of input segmental loudspeaker signals from the plurality of parametric audio streams. Here, the input segmental loudspeaker signals are associated with corresponding segments of the recording space. The combiner is configured for combining the input segmental loudspeaker signals to obtain the plurality of loudspeaker signals.

Further embodiments of the present invention provide methods for generating a plurality of parametric audio streams and for generating a plurality of loudspeaker signals.

Brief Description of the Figures

In the following, embodiments of the present invention will be explained with reference to the accompanying drawings, in which:

Fig. 1 shows a block diagram of an embodiment of an apparatus for generating a plurality of parametric audio streams from an input spatial audio signal obtained from a recording in a recording space with a segmentor and a generator;

Fig. 2 shows a schematic illustration of the segmentor of the embodiment of the apparatus in accordance with Fig. 1 based on a mixing or matrixing operation;

Fig. 3 shows a schematic illustration of the segmentor of the embodiment of the apparatus in accordance with Fig. 1 using a directivity pattern;

Fig. 4 shows a schematic illustration of the generator of the embodiment of the apparatus in accordance with Fig. 1 based on a parametric spatial analysis;

Fig. 5 shows a block diagram of an embodiment of an apparatus for generating a plurality of loudspeaker signals from a plurality of parametric audio streams with a renderer and a combiner;

Fig. 6 shows a schematic illustration of example segments of a recording space, each representing a subset of directions within a two-dimensional (2D) plane or within a three-dimensional (3D) space;

Fig. 7 shows a schematic illustration of an example loudspeaker signal computation for two segments or sectors of a recording space;

Fig. 8 shows a schematic illustration of an example loudspeaker signal computation for two segments or sectors of a recording space using second order B-format input signals;

Fig. 9 shows a schematic illustration of an example loudspeaker signal computation for two segments or sectors of a recording space including a signal modification in a parametric signal representation domain;

Fig. 10 shows a schematic illustration of example polar patterns of input segmental audio signals provided by the segmentor of the embodiment of the apparatus in accordance with Fig. 1;

Fig. 11 shows a schematic illustration of an example microphone configuration for performing a sound field recording; and

Fig. 12 shows a schematic illustration of an example circular array of omnidirectional microphones for obtaining higher order microphone signals.

Detailed Description of the Embodiments

Before discussing the present invention in further detail using the drawings, it is pointed out that in the figures identical elements, elements having the same function or the same effect are provided with the same reference numerals so that the description of these elements and the functionality thereof illustrated in the different embodiments is mutually exchangeable or may be applied to one another in the different embodiments.

Fig. 1 shows a block diagram of an embodiment of an apparatus 100 for generating a plurality of parametric audio streams 125 (θi, Ψi, Wi) from an input spatial audio signal 105 obtained from a recording in a recording space with a segmentor 110 and a generator 120. For example, the input spatial audio signal 105 comprises an omnidirectional signal W and a plurality of different directional signals X, Y, Z, U, V (or X, Y, U, V). As shown in Fig. 1, the apparatus 100 comprises a segmentor 110 and a generator 120. For example, the segmentor 110 is configured for providing at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) from the omnidirectional signal W and the plurality of different directional signals X, Y, Z, U, V of the input spatial audio signal 105, wherein the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) are associated with corresponding segments Segi of the recording space. Furthermore, the generator 120 may be configured for generating a parametric audio stream for each of the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) to obtain the plurality of parametric audio streams 125 (θi, Ψi, Wi).

By the apparatus 100 for generating the plurality of parametric audio streams 125, it is possible to avoid a degradation of the spatial sound quality and to avoid relatively complex microphone configurations. Accordingly, the embodiment of the apparatus 100 in accordance with Fig. 1 allows for a higher quality, more realistic spatial sound recording using relatively simple and compact microphone configurations.

In embodiments, the segments Segi of the recording space each represent a subset of directions within a two-dimensional (2D) plane or within a three-dimensional (3D) space.

In embodiments, the segments Segi of the recording space are each characterized by an associated directional measure.

According to embodiments, the apparatus 100 is configured for performing a sound field recording to obtain the input spatial audio signal 105. For example, the segmentor 110 is configured to divide a full angle range of interest into the segments Segi of the recording space. Furthermore, the segments Segi of the recording space may each cover a reduced angle range compared to the full angle range of interest.

Fig. 2 shows a schematic illustration of the segmentor 110 of the embodiment of the apparatus 100 in accordance with Fig. 1 based on a mixing (or matrixing) operation. As exemplarily depicted in Fig. 2, the segmentor 110 is configured to generate the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) from the omnidirectional signal W and the plurality of different directional signals X, Y, Z, U, V using a mixing or matrixing operation which depends on the segments Segi of the recording space. By the segmentor 110 exemplarily shown in Fig. 2, it is possible to map the omnidirectional signal W and the plurality of different directional signals X, Y, Z, U, V constituting the input spatial audio signal 105 to the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) using a predefined mixing or matrixing operation. This predefined mixing or matrixing operation depends on the segments Segi of the recording space and can substantially be used to branch off the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) from the input spatial audio signal 105. The branching off of the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) by the segmentor 110, which is based on the mixing or matrixing operation, substantially allows the above-mentioned advantages to be achieved, as opposed to a simple global model for the sound field.
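The matrixing step can be sketched as a single static matrix multiplication. The coefficients below are hypothetical (a first-order cardioid downmix per sector, not the patent's actual matrix); they merely illustrate how segmental signals are branched off from the input channels [W, X, Y, U, V]:

```python
import numpy as np


def segment_signals(b_format, mixing_matrix):
    """Map input spatial audio channels to segmental signals via a static
    mixing (matrixing) operation that depends on the chosen segments.

    b_format      : array of shape (n_in, n_samples), rows e.g. [W, X, Y, U, V]
    mixing_matrix : array of shape (n_out, n_in), one row block per segment
    """
    return mixing_matrix @ b_format


# Illustrative matrix (hypothetical coefficients) for two opposite sectors:
# each sector's W_i is a cardioid pointing at theta_i = 0 and pi, respectively.
M = np.array([
    [0.5,  0.5, 0.0, 0.0, 0.0],   # W_1 = 0.5*W + 0.5*X (sector towards 0 deg)
    [0.5, -0.5, 0.0, 0.0, 0.0],   # W_2 = 0.5*W - 0.5*X (sector towards 180 deg)
])
```

For a plane wave arriving from 0°, this matrix routes the full signal into the first sector's downmix and cancels it in the second.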

Fig. 3 shows a schematic illustration of the segmentor 110 of the embodiment of the apparatus 100 in accordance with Fig. 1 using a (desired or predetermined) directivity pattern 305, qi(θ). As exemplarily depicted in Fig. 3, the segmentor 110 is configured to use a directivity pattern 305, qi(θ), for each of the segments Segi of the recording space. Furthermore, the directivity pattern 305, qi(θ), may indicate a directivity of the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi).

In embodiments, the directivity pattern 305, qi(θ), is given by

qi(θ) = a + b·cos(θ + θi)    (1)

where a and b denote multipliers that can be modified to obtain desired directivity patterns, wherein θ denotes an azimuthal angle and θi indicates a preferred direction of the i-th segment of the recording space. For example, a lies in a range of 0 to 1 and b in a range of -1 to 1.

One useful choice of the multipliers a, b may be a = 0.5 and b = 0.5, resulting in the following directivity pattern:

qi(θ) = 0.5 + 0.5·cos(θ + θi)    (1a)
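A minimal numerical sketch of Eq. (1), using the sign convention of the formulas above (the function name is illustrative):

```python
import numpy as np


def directivity(theta, theta_i, a=0.5, b=0.5):
    """Segment directivity pattern q_i(theta) = a + b*cos(theta + theta_i).

    The defaults a = b = 0.5 give the cardioid-shaped pattern of Eq. (1a).
    """
    return a + b * np.cos(theta + theta_i)
```

With a = 1 and b = 0 the pattern degenerates to an omnidirectional response, while a = b = 0.5 yields a cardioid with a single null.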

By the segmentor 110 exemplarily depicted in Fig. 3, it is possible to obtain the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) associated with the corresponding segments Segi of the recording space, each having a predetermined directivity pattern 305. It is pointed out here that the use of the directivity pattern 305, qi(θ), for each of the segments Segi of the recording space allows the spatial sound quality obtained with the apparatus 100 to be enhanced.

Fig. 4 shows a schematic illustration of the generator 120 of the embodiment of the apparatus 100 in accordance with Fig. 1 based on a parametric spatial analysis. As exemplarily depicted in Fig. 4, the generator 120 is configured for obtaining the plurality of parametric audio streams 125 (θi, Ψi, Wi). Furthermore, the plurality of parametric audio streams 125 (θi, Ψi, Wi) may each comprise a component Wi of the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) and a corresponding parametric spatial information θi, Ψi.

In embodiments, the generator 120 may be configured for performing a parametric spatial analysis for each of the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) to obtain the corresponding parametric spatial information θi, Ψi.

In embodiments, the parametric spatial information θi, Ψi of each of the parametric audio streams 125 (θi, Ψi, Wi) comprises a direction-of-arrival (DOA) parameter θi and/or a diffuseness parameter Ψi.

In embodiments, the direction-of-arrival (DOA) parameter θi and the diffuseness parameter Ψi provided by the generator 120 exemplarily depicted in Fig. 4 may constitute DirAC parameters for a parametric spatial audio signal processing. For example, the generator 120 is configured for generating the DirAC parameters (e.g. the DOA parameter θi and the diffuseness parameter Ψi) using a time-frequency representation of the at least two input segmental audio signals 115.
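The patent leaves the concrete analysis method open. As one common textbook-style realization, the DOA and diffuseness of a segment can be estimated from the active intensity of its first-order signals. The sketch below assumes the plane-wave convention W = s, X = s·cos(θ), Y = s·sin(θ) with no additional scaling factors; the function name and normalization are illustrative, not the patent's:

```python
import numpy as np


def dirac_parameters(W, X, Y):
    """Sketch of a DirAC-style spatial analysis for one segment's
    time-frequency signals (W_i, X_i, Y_i), given as complex STFT values.

    Returns a direction-of-arrival theta_i (radians) and a diffuseness
    psi_i in [0, 1] (0 = single plane wave, 1 = fully diffuse).
    """
    # Active intensity vector, proportional to Re{conj(W) * [X, Y]}; with
    # B-format-style dipoles its positive lobe points towards the DOA.
    ix = np.real(np.conj(W) * X).mean()
    iy = np.real(np.conj(W) * Y).mean()
    theta = float(np.arctan2(iy, ix))
    # Diffuseness: 1 - |mean intensity| / mean energy (under the stated convention)
    energy = 0.5 * (np.abs(W) ** 2 + np.abs(X) ** 2 + np.abs(Y) ** 2).mean()
    psi = 1.0 - np.hypot(ix, iy) / max(energy, 1e-12)
    return theta, float(np.clip(psi, 0.0, 1.0))
```

For a single plane wave this estimator recovers the arrival azimuth exactly and reports a diffuseness near zero; for mutually uncorrelated channels the intensity averages out and the diffuseness approaches one.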

Fig. 5 shows a block diagram of an embodiment of an apparatus 500 for generating a plurality of loudspeaker signals 525 (L1, L2, ...) from a plurality of parametric audio streams 125 (θi, Ψi, Wi) with a renderer 510 and a combiner 520. In the embodiment of Fig. 5, the plurality of parametric audio streams 125 (θi, Ψi, Wi) may be derived from an input spatial audio signal (e.g. the input spatial audio signal 105 exemplarily depicted in the embodiment of Fig. 1) recorded in a recording space. As shown in Fig. 5, the apparatus 500 comprises a renderer 510 and a combiner 520. For example, the renderer 510 is configured for providing a plurality of input segmental loudspeaker signals 515 from the plurality of parametric audio streams 125 (θi, Ψi, Wi), wherein the input segmental loudspeaker signals 515 are associated with corresponding segments (Segi) of the recording space. Furthermore, the combiner 520 may be configured for combining the input segmental loudspeaker signals 515 to obtain the plurality of loudspeaker signals 525 (L1, L2, ...).

By providing the apparatus 500 of Fig. 5, it is possible to generate the plurality of loudspeaker signals 525 (L1, L2, ...) from the plurality of parametric audio streams 125 (θi, Ψi, Wi), wherein the parametric audio streams 125 (θi, Ψi, Wi) may be transmitted from the apparatus 100 of Fig. 1. Furthermore, the apparatus 500 of Fig. 5 makes it possible to achieve a higher quality, more realistic spatial sound reproduction using parametric audio streams derived from relatively simple and compact microphone configurations.

In embodiments, the renderer 510 is configured for receiving the plurality of parametric audio streams 125 (θi, Ψi, Wi). For example, the plurality of parametric audio streams 125 (θi, Ψi, Wi) each comprise a segmental audio component Wi and a corresponding parametric spatial information θi, Ψi. Furthermore, the renderer 510 may be configured for rendering each of the segmental audio components Wi using the corresponding parametric spatial information 505 (θi, Ψi) to obtain the plurality of input segmental loudspeaker signals 515.

Fig. 6 shows a schematic illustration 600 of example segments Segi (i = 1, 2, 3, 4) 610, 620, 630, 640 of a recording space. In the schematic illustration 600 of Fig. 6, the example segments 610, 620, 630, 640 of the recording space each represent a subset of directions within a two-dimensional (2D) plane. In addition, the segments Segi of the recording space may each represent a subset of directions within a three-dimensional (3D) space. For example, the segments Segi representing the subsets of directions within the three-dimensional (3D) space can be similar to the segments 610, 620, 630, 640 exemplarily depicted in Fig. 6. According to the schematic illustration 600 of Fig. 6, four example segments 610, 620, 630, 640 for the apparatus 100 of Fig. 1 are exemplarily shown. However, it is also possible to use a different number of segments Segi (i = 1, 2, ..., n, wherein i is an integer index and n denotes the number of segments). The example segments 610, 620, 630, 640 may each be represented in a polar coordinate system (see, e.g., Fig. 6). For the three-dimensional (3D) space, the segments Segi may similarly be represented in a spherical coordinate system.

In embodiments, the segmentor 110 exemplarily shown in Fig. 1 may be configured to use the segments Segi (e.g. the example segments 610, 620, 630, 640 of Fig. 6) for providing the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi). By using the segments (or sectors), it is possible to realize a segment-based (or sector-based) parametric model of the sound field. This makes it possible to achieve a higher quality spatial audio recording and reproduction with a relatively compact microphone configuration.

Fig. 7 shows a schematic illustration 700 of an example loudspeaker signal computation for two segments or sectors of a recording space. In the schematic illustration 700 of Fig. 7, the embodiment of the apparatus 100 for generating the plurality of parametric audio streams 125 (θi, Ψi, Wi) and the embodiment of the apparatus 500 for generating the plurality of loudspeaker signals 525 (L1, L2, ...) are exemplarily depicted. As shown in the schematic illustration 700 of Fig. 7, the segmentor 110 may be configured for receiving the input spatial audio signal 105 (e.g. microphone signals). Furthermore, the segmentor 110 may be configured for providing the at least two input segmental audio signals 115 (e.g. segmental microphone signals 715-1 of a first segment and segmental microphone signals 715-2 of a second segment). The generator 120 may comprise a first parametric spatial analysis block 720-1 and a second parametric spatial analysis block 720-2. Furthermore, the generator 120 may be configured for generating the parametric audio stream for each of the at least two input segmental audio signals 115. At the output of the embodiment of the apparatus 100, the plurality of parametric audio streams 125 will be obtained. For example, the first parametric spatial analysis block 720-1 will output a first parametric audio stream 725-1 of a first segment, while the second parametric spatial analysis block 720-2 will output a second parametric audio stream 725-2 of a second segment. Furthermore, the first parametric audio stream 725-1 provided by the first parametric spatial analysis block 720-1 may comprise parametric spatial information (e.g. θ1, Ψ1) of a first segment and one or more segmental audio signals (e.g. W1) of the first segment, while the second parametric audio stream 725-2 provided by the second parametric spatial analysis block 720-2 may comprise parametric spatial information (e.g. θ2, Ψ2) of a second segment and one or more segmental audio signals (e.g. W2) of the second segment. The embodiment of the apparatus 100 may be configured for transmitting the plurality of parametric audio streams 125. As also shown in the schematic illustration 700 of Fig. 7, the embodiment of the apparatus 500 may be configured for receiving the plurality of parametric audio streams 125 from the embodiment of the apparatus 100. The renderer 510 may comprise a first rendering unit 730-1 and a second rendering unit 730-2. Furthermore, the renderer 510 may be configured for providing the plurality of input segmental loudspeaker signals 515 from the received plurality of parametric audio streams 125. For example, the first rendering unit 730-1 may be configured for providing input segmental loudspeaker signals 735-1 of a first segment from the first parametric audio stream 725-1 of the first segment, while the second rendering unit 730-2 may be configured for providing input segmental loudspeaker signals 735-2 of a second segment from the second parametric audio stream 725-2 of the second segment. Furthermore, the combiner 520 may be configured for combining the input segmental loudspeaker signals 515 to obtain the plurality of loudspeaker signals 525 (e.g. L1, L2, ...).

The embodiment of Fig. 7 essentially represents a higher quality spatial audio recording and reproduction concept using a segment-based (or sector-based) parametric model of the sound field, which also allows complex spatial audio scenes to be recorded with a relatively compact microphone configuration.

Fig. 8 shows a schematic illustration 800 of an example loudspeaker signal computation for two segments or sectors of a recording space using second order B-format input signals 105. The example loudspeaker signal computation schematically illustrated in Fig. 8 essentially corresponds to the example loudspeaker signal computation schematically illustrated in Fig. 7. In the schematic illustration of Fig. 8, the embodiment of the apparatus 100 for generating the plurality of parametric audio streams 125 and the embodiment of the apparatus 500 for generating the plurality of loudspeaker signals 525 are exemplarily depicted. As shown in Fig. 8, the embodiment of the apparatus 100 may be configured for receiving the input spatial audio signal 105 (e.g. B-format microphone channels such as [W, X, Y, U, V]). Here, it is to be noted that the signals U, V in Fig. 8 are second order B-format components. The segmentor 110, exemplarily denoted by "matrixing", may be configured for generating the at least two input segmental audio signals 115 from the omnidirectional signal and the plurality of different directional signals using a mixing or matrixing operation which depends on the segments Segi of the recording space. For example, the at least two input segmental audio signals 115 may comprise the segmental microphone signals 715-1 of a first segment (e.g. [W1, X1, Y1]) and the segmental microphone signals 715-2 of a second segment (e.g. [W2, X2, Y2]). Furthermore, the generator 120 may comprise a first directional and diffuseness analysis block 720-1 and a second directional and diffuseness analysis block 720-2. The first and the second directional and diffuseness analysis blocks 720-1, 720-2 exemplarily shown in Fig. 8 essentially correspond to the first and the second parametric spatial analysis blocks 720-1, 720-2 exemplarily shown in Fig. 7. The generator 120 may be configured for generating a parametric audio stream for each of the at least two input segmental audio signals 115 to obtain the plurality of parametric audio streams 125.
For example, the generator 120 may be configured for performing a spatial analysis on the segmental microphone signals 715-1 of the first segment using the first directional and diffuseness analysis block 720-1 and for extracting a first component (e.g. a segmental audio signal W1) from the segmental microphone signals 715-1 of the first segment to obtain the first parametric audio stream 725-1 of the first segment. Furthermore, the generator 120 may be configured for performing a spatial analysis on the segmental microphone signals 715-2 of the second segment and for extracting a second component (e.g. a segmental audio signal W2) from the segmental microphone signals 715-2 of the second segment using the second directional and diffuseness analysis block 720-2 to obtain the second parametric audio stream 725-2 of the second segment. For example, the first parametric audio stream 725-1 of the first segment may comprise parametric spatial information of the first segment comprising a first direction-of-arrival (DOA) parameter θ1 and a first diffuseness parameter Ψ1 as well as a first extracted component W1, while the second parametric audio stream 725-2 of the second segment may comprise parametric spatial information of the second segment comprising a second direction-of-arrival (DOA) parameter θ2 and a second diffuseness parameter Ψ2 as well as a second extracted component W2. The embodiment of the apparatus 100 may be configured for transmitting the plurality of parametric audio streams 125.

As also shown in the schematic illustration 800 of Fig. 8, the embodiment of the apparatus 500 for generating the plurality of loudspeaker signals 525 may be configured for receiving the plurality of parametric audio streams 125 transmitted from the embodiment of the apparatus 100. In the schematic illustration 800 of Fig. 8, the renderer 510 comprises the first rendering unit 730-1 and the second rendering unit 730-2. For example, the first rendering unit 730-1 comprises a first multiplier 802 and a second multiplier 804. The first multiplier 802 of the first rendering unit 730-1 may be configured for applying a first weighting factor 803 (e.g. √(1 − Ψ1)) to the segmental audio signal W1 of the first parametric audio stream 725-1 of the first segment to obtain a direct sound substream 810 by the first rendering unit 730-1, while the second multiplier 804 of the first rendering unit 730-1 may be configured for applying a second weighting factor 805 (e.g. √Ψ1) to the segmental audio signal W1 of the first parametric audio stream 725-1 of the first segment to obtain a diffuse substream 812 by the first rendering unit 730-1. Furthermore, the second rendering unit 730-2 may comprise a first multiplier 806 and a second multiplier 808. For example, the first multiplier 806 of the second rendering unit 730-2 may be configured for applying a first weighting factor 807 (e.g. √(1 − Ψ2)) to the segmental audio signal W2 of the second parametric audio stream 725-2 of the second segment to obtain a direct sound substream 814 by the second rendering unit 730-2, while the second multiplier 808 of the second rendering unit 730-2 may be configured for applying a second weighting factor 809 (e.g. √Ψ2) to the segmental audio signal W2 of the second parametric audio stream 725-2 of the second segment to obtain a diffuse substream 816 by the second rendering unit 730-2. In embodiments, the first and the second weighting factors 803, 805, 807, 809 of the first and the second rendering units 730-1, 730-2 are derived from the corresponding diffuseness parameters Ψi. According to embodiments, the first rendering unit 730-1 may comprise gain factor multipliers 811, decorrelating processing blocks 813 and combining units 832, while the second rendering unit 730-2 may comprise gain factor multipliers 815, decorrelating processing blocks 817 and combining units 834. For example, the gain factor multipliers 811 of the first rendering unit 730-1 may be configured for applying gain factors obtained from a vector base amplitude panning (VBAP) operation by blocks 822 to the direct sound substream 810 output by the first multiplier 802 of the first rendering unit 730-1. Furthermore, the decorrelating processing blocks 813 of the first rendering unit 730-1 may be configured for applying a decorrelation/gain operation to the diffuse substream 812 at the output of the second multiplier 804 of the first rendering unit 730-1. In addition, the combining units 832 of the first rendering unit 730-1 may be configured for combining the signals obtained from the gain factor multipliers 811 and the decorrelating processing blocks 813 to obtain the segmental loudspeaker signals 735-1 of the first segment. For example, the gain factor multipliers 815 of the second rendering unit 730-2 may be configured for applying gain factors obtained from a vector base amplitude panning (VBAP) operation by blocks 824 to the direct sound substream 814 output by the first multiplier 806 of the second rendering unit 730-2. Furthermore, the decorrelating processing blocks 817 of the second rendering unit 730-2 may be configured for applying a decorrelation/gain operation to the diffuse substream 816 at the output of the second multiplier 808 of the second rendering unit 730-2. In addition, the combining units 834 of the second rendering unit 730-2 may be configured for combining the signals obtained from the gain factor multipliers 815 and the decorrelating processing blocks 817 to obtain the segmental loudspeaker signals 735-2 of the second segment.

In embodiments, the vector base amplitude panning (VBAP) operation by blocks 822, 824 of the first and the second rendering unit 730-1, 730-2 depends on the corresponding direction-of-arrival (DOA) parameters θi. As exemplarily depicted in Fig. 8, the combiner 520 may be configured for combining the input segmental loudspeaker signals 515 to obtain the plurality of loudspeaker signals 525 (e.g. L1, L2, ...). As exemplarily depicted in Fig. 8, the combiner 520 may comprise a first summing up unit 842 and a second summing up unit 844. For example, the first summing up unit 842 is configured to sum up a first of the segmental loudspeaker signals 735-1 of the first segment and a first of the segmental loudspeaker signals 735-2 of the second segment to obtain a first loudspeaker signal 843. In addition, the second summing up unit 844 may be configured to sum up a second of the segmental loudspeaker signals 735-1 of the first segment and a second of the segmental loudspeaker signals 735-2 of the second segment to obtain a second loudspeaker signal 845. The first and the second loudspeaker signals 843, 845 may constitute the plurality of loudspeaker signals 525. Referring to the embodiment of Fig. 8, it should be noted that, for each segment, loudspeaker signals can potentially be generated for all loudspeakers of the playback setup.
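The per-segment rendering and the subsequent combining described above may, for example, be sketched as follows for a single time-frequency tile. This is an illustrative Python sketch, not part of the embodiment: the function names are hypothetical, and the decorrelating processing blocks are replaced here by a simple equal-power spreading of the diffuse substream for brevity.

```python
import math

def render_segment(w_i, psi_i, pan_gains, num_ls):
    """Split the segmental audio sample w_i into a direct and a diffuse
    substream using the diffuseness psi_i, distribute the direct part
    with the panning gains (one per loudspeaker) and spread the diffuse
    part equally over all loudspeakers (no decorrelators here)."""
    direct = math.sqrt(1.0 - psi_i) * w_i        # direct sound substream
    diffuse = math.sqrt(psi_i) * w_i             # diffuse substream
    spread = 1.0 / math.sqrt(num_ls)             # equal-power diffuse spreading
    return [direct * g + diffuse * spread for g in pan_gains]

def combine_segments(segmental_ls_signals):
    """Sum the segmental loudspeaker signals per loudspeaker channel."""
    num_ls = len(segmental_ls_signals[0])
    return [sum(seg[ch] for seg in segmental_ls_signals)
            for ch in range(num_ls)]

# Two sectors, two loudspeakers: sector 1 fully direct, sector 2 fully diffuse
seg1 = render_segment(w_i=1.0, psi_i=0.0, pan_gains=[1.0, 0.0], num_ls=2)
seg2 = render_segment(w_i=0.5, psi_i=1.0, pan_gains=[0.0, 1.0], num_ls=2)
loudspeaker_signals = combine_segments([seg1, seg2])
```

Note that each segment contributes to every loudspeaker channel, reflecting that loudspeaker signals can potentially be generated for all loudspeakers of the playback setup for each segment.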

Fig. 9 shows a schematic illustration 900 of an example loudspeaker signal computation for two segments or sectors of a recording space including a signal modification in a parametric signal representation domain. The example loudspeaker signal computation in the schematic illustration 900 of Fig. 9 essentially corresponds to the example loudspeaker signal computation in the schematic illustration 700 of Fig. 7. However, the example loudspeaker signal computation in the schematic illustration 900 of Fig. 9 includes an additional signal modification.

In the schematic illustration 900 of Fig. 9, the apparatus 100 comprises the segmentor 110 and the generator 120 for obtaining the plurality of parametric audio streams 125 (θi, Ψi, Wi). Furthermore, the apparatus 500 comprises the renderer 510 and the combiner 520 for obtaining the plurality of loudspeaker signals 525.

For example, the apparatus 100 may further comprise a modifier 910 for modifying the plurality of parametric audio streams 125 (θi, Ψi, Wi) in a parametric signal representation domain. Furthermore, the modifier 910 may be configured to modify at least one of the parametric audio streams 125 (θi, Ψi, Wi) using a corresponding modification control parameter 905. In this way, a first modified parametric audio stream 916 of a first segment and a second modified parametric audio stream 918 of a second segment may be obtained. The first and the second modified parametric audio streams 916, 918 may constitute a plurality of modified parametric audio streams 915. In embodiments, the apparatus 100 may be configured for transmitting the plurality of modified parametric audio streams 915. In addition, the apparatus 500 may be configured for receiving the plurality of modified parametric audio streams 915 transmitted from the apparatus 100.

By providing the example loudspeaker signal computation according to Fig. 9, it is possible to achieve a more flexible spatial audio recording and reproduction scheme. In particular, it is possible to obtain higher quality output signals when applying modifications in the parametric domain. By segmenting the input signals before generating the plurality of parametric audio representations (streams), a higher spatial selectivity is obtained, which makes it possible to treat different components of the captured sound field differently.

Fig. 10 shows a schematic illustration 1000 of example polar patterns of input segmental audio signals 115 (e.g. Wi, Xi, Yi) provided by the segmentor 110 of the embodiment of the apparatus 100 for generating the plurality of parametric audio streams 125 (θi, Ψi, Wi) in accordance with Fig. 1. In the schematic illustration 1000 of Fig. 10, the example input segmental audio signals 115 are visualized in a respective polar coordinate system for the two-dimensional (2D) plane. Similarly, the example input segmental audio signals 115 can be visualized in a respective spherical coordinate system for the three-dimensional (3D) space. The schematic illustration 1000 of Fig. 10 exemplarily depicts a first directional response 1010 of a first input segmental audio signal (e.g. an omnidirectional signal Wi), a second directional response 1020 of a second input segmental audio signal (e.g. a first directional signal Xi) and a third directional response 1030 of a third input segmental audio signal (e.g. a second directional signal Yi). Furthermore, a fourth directional response 1022 with opposite sign compared to the second directional response 1020 and a fifth directional response 1032 with opposite sign compared to the third directional response 1030 are exemplarily depicted in the schematic illustration 1000 of Fig. 10. Thus, different directional responses 1010, 1020, 1030, 1022, 1032 (polar patterns) can be used for the input segmental audio signals 115 by the segmentor 110. It is pointed out here that the input segmental audio signals 115 can be dependent on time and frequency, i.e. Wi = Wi(m, k), Xi = Xi(m, k), and Yi = Yi(m, k), wherein (m, k) are indices indicating a time-frequency tile in a spatial audio signal representation.

In this context, it should be noted that Fig. 10 exemplarily depicts the polar diagrams for a single set of input signals, i.e. the signals 115 for a single sector i (e.g. [Wi, Xi, Yi]). Furthermore, the positive and negative parts of the polar diagram plots together represent the polar diagram of a signal (for example, the parts 1020 and 1022 together show the polar diagram of the signal Xi, while the parts 1030 and 1032 together show the polar diagram of the signal Yi).

Fig. 11 shows a schematic illustration 1100 of an example microphone configuration 1110 for performing a sound field recording. In the schematic illustration 1100 of Fig. 11, the microphone configuration 1110 may comprise multiple linear arrays of directional microphones 1112, 1114, 1116. The schematic illustration 1100 of Fig. 11 exemplarily depicts how a two-dimensional (2D) observation space can be divided into different segments or sectors 1101, 1102, 1103 (e.g. Segi, i = 1, 2, 3) of the recording space. Here, the segments 1101, 1102, 1103 of Fig. 11 may correspond to the segments Segi exemplarily depicted in Fig. 6. Similarly, the example microphone configuration 1110 can also be used in the three-dimensional (3D) observation space, wherein the three-dimensional (3D) observation space can be divided into the segments or sectors for the given microphone configuration. In embodiments, the example microphone configuration 1110 in the schematic illustration 1100 of Fig. 11 can be used to provide the input spatial audio signal 105 for the embodiment of the apparatus 100 in accordance with Fig. 1. For example, the multiple linear arrays of directional microphones 1112, 1114, 1116 of the microphone configuration 1110 may be configured to provide the different directional signals for the input spatial audio signal 105. By the use of the example microphone configuration 1110 of Fig. 11, it is possible to optimize the spatial audio recording quality using the segment-based (or sector-based) parametric model of the sound field.

In the previous embodiments, the apparatus 100 and the apparatus 500 may be configured to be operative in the time-frequency domain.


In summary, embodiments of the present invention relate to the field of high quality spatial audio recording and reproduction. The use of a segment-based or sector-based parametric model of the sound field also makes it possible to record complex spatial audio scenes with relatively compact microphone configurations. In contrast to a simple global model of the sound field assumed by the current state of the art methods, the parametric information can be determined for a number of segments into which the entire observation space is divided. Therefore, the rendering for an almost arbitrary loudspeaker configuration can be performed based on the parametric information together with the recorded audio channels.

According to embodiments, for a planar two-dimensional (2D) sound field recording, the entire azimuthal angle range of interest can be divided into multiple sectors or segments covering a reduced range of azimuthal angles. Analogously, in the 3D case the full solid angle range (azimuthal and elevation) can be divided into sectors or segments covering a smaller angle range. The different sectors or segments may also partially overlap.

According to embodiments, each sector or segment is characterized by an associated directional measure, which can be used to specify or refer to the corresponding sector or segment. The directional measure can, for example, be a vector pointing to (or from) the center of the sector or segment, or an azimuthal angle in the 2D case, or a set of an azimuth and an elevation angle in the 3D case. The segment or sector can refer to a subset of directions either within a 2D plane or within a 3D space. For presentational simplicity, the previous examples were exemplarily described for the 2D case; however, the extension to 3D configurations is straightforward.
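The correspondence between an azimuthal angle and a vector-valued directional measure in the 2D case can be illustrated by a minimal sketch (the function name is hypothetical):

```python
import math

def sector_direction_vector(azimuth_deg):
    """Directional measure of a sector in the 2D case: the unit vector
    pointing from the origin towards the sector centre, with angles
    counted from the x-axis (horizontal axis)."""
    a = math.radians(azimuth_deg)
    return (math.cos(a), math.sin(a))

# A sector centred at 0 degrees is described by the vector towards (1, 0)
vx, vy = sector_direction_vector(0.0)
```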

With reference to Fig. 6, the directional measure may be defined as a vector which, for the segment Seg3, points from the origin, i.e. the center with the coordinate (0, 0), to the right, i.e. towards the coordinate (1, 0) in the polar diagram, or as the azimuthal angle of 0° if, in Fig. 6, angles are counted from (or referred to) the x-axis (horizontal axis).

Referring to the embodiment of Fig. 1, the apparatus 100 may be configured to receive a number of microphone signals as an input (input spatial audio signal 105). These microphone signals can, for example, either result from a real recording or can be artificially generated by a simulated recording in a virtual environment. From these microphone signals, corresponding segmental microphone signals (input segmental audio signals 115) can be determined, which are associated with the corresponding segments (Segi). The segmental microphone signals feature specific characteristics. Their directional pick-up pattern may show a significantly increased sensitivity within the associated angular sector compared to the sensitivity outside this sector. An example of the segmentation of a full azimuth range of 360° and the pick-up patterns of the associated segmental microphone signals were illustrated with reference to Fig. 6. In the example of Fig. 6, the directivities of the microphones associated with the sectors exhibit cardioid patterns which are rotated in accordance with the angular range covered by the corresponding sector. For example, the directivity of the microphone associated with the sector 3 (Seg3) pointing towards 0° is also pointing towards 0°. Here, it should be noted that in the polar diagrams of Fig. 6, the direction of the maximum sensitivity is the direction in which the radius of the depicted curve comprises the maximum. Thus, Seg3 has the highest sensitivity for sound components which come from the right. In other words, the segment Seg3 has its preferred direction at the azimuthal angle of 0° (assuming that angles are counted from the x-axis).

According to embodiments, for each sector, a DOA parameter (θi) can be determined together with a sector-based diffuseness parameter (Ψi). In a simple realization, the diffuseness parameter (Ψi) may be the same for all sectors. In principle, any preferred DOA estimation algorithm can be applied (e.g. by the generator 120). For example, the DOA parameter (θi) can be interpreted to reflect the direction opposite to the one in which most of the sound energy is traveling within the considered sector. Accordingly, the sector-based diffuseness relates to the ratio of the diffuse sound energy and the total sound energy within the considered sector. It is to be noted that the parameter estimation (such as performed with the generator 120) can be performed time-variantly and individually for each frequency band.

According to embodiments, for each sector, a directional audio stream (parametric audio stream) can be composed including the segmental microphone signal (Wi) and the sector-based DOA and diffuseness parameters (θi, Ψi), which predominantly describe the spatial audio properties of the sound field within the angular range represented by that sector. For example, the loudspeaker signals 525 for playback can be determined using the parametric directional information (θi, Ψi) and one or more of the segmental microphone signals 125 (e.g. Wi). Thereby, a set of segmental loudspeaker signals 515 can be determined for each segment, which can then be combined, such as by the combiner 520 (e.g. summed up or mixed), to build the final loudspeaker signals 525 for playback. The direct sound components within a sector can, for example, be rendered as point-like sources by applying an example vector base amplitude panning (as described in V. Pulkki: Virtual sound source positioning using Vector Base Amplitude Panning. J. Audio Eng. Soc., Vol. 45, pp. 456-466, 1997), whereas the diffuse sound can be played back from several loudspeakers at the same time.
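A minimal sketch of pairwise 2D vector base amplitude panning in the spirit of Pulkki (1997) may look as follows; the function name and the constant-power normalisation are illustrative assumptions, not a definitive implementation:

```python
import math

def vbap_2d(source_az, ls_az_pair):
    """Pairwise 2D vector base amplitude panning: solve
    p = g1*l1 + g2*l2 for the gains of the active loudspeaker pair
    (unit vectors l1, l2), then normalise to constant power."""
    p = (math.cos(source_az), math.sin(source_az))
    l1 = (math.cos(ls_az_pair[0]), math.sin(ls_az_pair[0]))
    l2 = (math.cos(ls_az_pair[1]), math.sin(ls_az_pair[1]))
    det = l1[0] * l2[1] - l1[1] * l2[0]          # invert the 2x2 base matrix
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# A source halfway between loudspeakers at +45 and -45 degrees
# receives equal gains on both loudspeakers
g1, g2 = vbap_2d(0.0, (math.radians(45.0), math.radians(-45.0)))
```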


The block diagram in Fig. 7 illustrates the computation of the loudspeaker signals 525 as described above for the case of two sectors. In Fig. 7, bold arrows represent audio signals, whereas thin arrows represent parametric signals or control signals. In Fig. 7, the generation of the segmental microphone signals 115 by the segmentor 110, the application of the parametric spatial signal analysis (blocks 720-1, 720-2) for each sector (e.g. by the generator 120), the generation of the segmental loudspeaker signals 515 by the renderer 510 and the combining of the segmental loudspeaker signals 515 by the combiner 520 are schematically illustrated.

In embodiments, the segmentor 110 may be configured for performing the generation of the segmental microphone signals 115 from a set of microphone input signals 105. Furthermore, the generator 120 may be configured for performing the application of the parametric spatial signal analysis for each sector such that the parametric audio streams 725-1, 725-2 for each sector will be obtained. For example, each of the parametric audio streams 725-1, 725-2 may consist of at least one segmental audio signal (e.g. W1, W2, respectively) as well as associated parametric information (e.g. DOA parameters θ1, θ2 and diffuseness parameters Ψ1, Ψ2, respectively). The renderer 510 may be configured for performing the generation of the segmental loudspeaker signals 515 for each sector based on the parametric audio streams 725-1, 725-2 generated for the particular sectors. The combiner 520 may be configured for performing the combining of the segmental loudspeaker signals 515 to obtain the final loudspeaker signals 525.

The block diagram in Fig. 8 illustrates the computation of the loudspeaker signals 525 for the example case of two sectors, shown as an example for a second-order B-format microphone signal application. As shown in the embodiment of Fig. 8, two (sets of) segmental microphone signals 715-1 (e.g. [W1, X1, Y1]) and 715-2 (e.g. [W2, X2, Y2]) can be generated from a set of input microphone signals 105 by a mixing or matrixing operation (e.g. by block 110) as described before. For each of the two segmental microphone signals, a directional audio analysis (e.g. by blocks 720-1, 720-2) can be performed, yielding the directional audio streams 725-1 (e.g. θ1, Ψ1, W1) and 725-2 (e.g. θ2, Ψ2, W2) for the first sector and the second sector, respectively.

In Fig. 8, the segmental loudspeaker signals 515 can be generated separately for each sector as follows. The segmental audio component Wi can be divided into two complementary substreams 810, 812, 814, 816 by weighting with multipliers 803, 805, 807, 809 derived from the diffuseness parameter Ψi. One substream may carry predominantly direct sound components, whereas the other substream may carry predominantly diffuse sound components. The direct sound substreams 810, 814 can be rendered using panning gains 811, 815 determined by the DOA parameter θi, whereas the diffuse substreams 812, 816 can be rendered incoherently using decorrelating processing blocks 813, 817.

As an example of a last step, the segmental loudspeaker signals 515 can be combined (e.g. by block 520) to obtain the final output signals 525 for loudspeaker reproduction.

Referring to the embodiment of Fig. 9, it should be mentioned that the estimated parameters (within the parametric audio streams 125) may also be modified (e.g. by the modifier 910) before the actual loudspeaker signals 525 for playback are determined. For example, the DOA parameter θi may be remapped to achieve a manipulation of the sound scene. In other cases, the audio signals (e.g. Wi) of certain sectors may be attenuated before computing the loudspeaker signals 525 if the sound coming from certain or all directions included in these sectors is not desired. Analogously, diffuse sound components can be attenuated if mainly or only direct sound should be rendered. This processing including a modification 910 of the parametric audio streams 125 is exemplarily illustrated in Fig. 9 for the example of a segmentation into two segments.
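The parametric-domain modifications mentioned above (DOA remapping, sector attenuation, suppression of diffuse sound) can be sketched as follows; this is an illustrative sketch, and the function and parameter names are hypothetical:

```python
import math

def modify_stream(theta_i, psi_i, w_i, rotate_by=0.0, sector_gain=1.0,
                  suppress_diffuse=False):
    """Modify one parametric audio stream (theta_i, psi_i, w_i) in the
    parametric domain: remap the DOA, attenuate the sector signal, or
    force the diffuseness to zero so that only direct sound is rendered."""
    theta_mod = theta_i + rotate_by      # DOA remapping (scene rotation)
    w_mod = sector_gain * w_i            # attenuate/amplify this sector
    psi_mod = 0.0 if suppress_diffuse else psi_i
    return theta_mod, psi_mod, w_mod

# Rotate the DOA of a sector by 90 degrees and mute its diffuse part
theta, psi, w = modify_stream(0.0, 0.5, 1.0, rotate_by=math.pi / 2,
                              suppress_diffuse=True)
```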

An embodiment of a sector-based parameter estimation in the example 2D case performed with the previous embodiments will be described in the following. It is assumed that the microphone signals used for capturing can be converted into so-called second-order B-format signals. Second-order B-format signals can be described by the shape of the directivity patterns of the corresponding microphones:

bW(θ) = 1 (2)
bX(θ) = cos(θ) (3)
bY(θ) = sin(θ) (4)
bU(θ) = cos(2θ) (5)
bV(θ) = sin(2θ) (6)

where θ denotes the azimuth angle. The corresponding B-format signals (e.g. input 105 of Fig. 8) are denoted by W(m, k), X(m, k), Y(m, k), U(m, k) and V(m, k), where m and k represent a time and frequency index, respectively. It is now assumed that the segmental microphone signal associated with the i'th sector has a directivity pattern qi(θ). We can then determine (e.g. by block 110) the additional microphone signals 115, Wi(m, k), Xi(m, k), Yi(m, k), having a directivity pattern which can be expressed by

bW,i(θ) = qi(θ) (7)
bX,i(θ) = qi(θ) cos(θ) (8)
bY,i(θ) = qi(θ) sin(θ) (9)

Some examples for the directivity patterns of the described microphone signals in case of an example cardioid pattern qi(θ) = 0.5 + 0.5 cos(θ − Θi) are shown in Fig. 10. The preferred direction of the i'th sector depends on an azimuth angle Θi. In Fig. 10, the dashed lines indicate the directional responses 1022, 1032 (polar patterns) with opposite sign compared to the directional responses 1020, 1030 depicted with solid lines.

Note that for the example case of Θi = 0, the signals Wi(m, k), Xi(m, k), Yi(m, k) can be determined from the second-order B-format signals by mixing the input components W, X, Y, U, V according to

Wi(m, k) = 0.5 W(m, k) + 0.5 X(m, k) (10)
Xi(m, k) = 0.25 W(m, k) + 0.5 X(m, k) + 0.25 U(m, k) (11)
Yi(m, k) = 0.5 Y(m, k) + 0.25 V(m, k) (12)

This mixing operation is performed e.g. in Fig. 8 in building block 110. Note that a different choice of qi(θ) leads to a different mixing rule to obtain the components Wi, Xi, Yi from the second-order B-format signals.
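For the cardioid example with Θi = 0, the mixing operation above can be written out directly as follows (an illustrative sketch; the function name is an assumption):

```python
def segmental_signals_cardioid(W, X, Y, U, V):
    """Mixing rule for a cardioid segment pattern
    q_i(theta) = 0.5 + 0.5*cos(theta) with preferred direction
    Theta_i = 0, applied to one time-frequency tile (m, k)."""
    W_i = 0.5 * W + 0.5 * X
    X_i = 0.25 * W + 0.5 * X + 0.25 * U
    Y_i = 0.5 * Y + 0.25 * V
    return W_i, X_i, Y_i

# Plane wave from 0 degrees: W = 1, X = cos(0), Y = sin(0),
# U = cos(0), V = sin(0); the cardioid has unit gain at 0 degrees
W_i, X_i, Y_i = segmental_signals_cardioid(1.0, 1.0, 0.0, 1.0, 0.0)
```

The second line follows from the product-to-sum identity q(θ)cos(θ) = 0.5 cos(θ) + 0.25 + 0.25 cos(2θ), and the third from q(θ)sin(θ) = 0.5 sin(θ) + 0.25 sin(2θ).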

From the segmental microphone signals 115, Wi(m, k), Xi(m, k), Yi(m, k), we can then determine (e.g. by block 120) the DOA parameter θi associated with the i'th sector by computing the sector-based active intensity vector

Ia,i(m, k) = 1/(2 ρ0 c) · Re{Wi*(m, k) · [Xi(m, k), Yi(m, k)]T} (13)

where Re{A} denotes the real part of the complex number A and * denotes the complex conjugate. Furthermore, ρ0 is the air density and c is the sound velocity. The desired DOA estimate θi(m, k), for example represented by the unit vector ei(m, k), can be obtained by

ei(m, k) = − Ia,i(m, k) / ||Ia,i(m, k)|| (14)

We can further determine the sector-based sound field energy related quantity


Ei(m, k) = 1/(4 ρ0 c²) · (|Wi(m, k)|² + |Xi(m, k)|² + |Yi(m, k)|²) (15)

The desired diffuseness parameter Ψi(m, k) of the i'th sector can then be determined by

Ψi(m, k) = 1 − ||E{Ia,i(m, k)}|| / (g · E{Ei(m, k)}) (16)

where g denotes a suitable scaling factor, E{ } is the expectation operator and || || denotes the vector norm. It can be shown that the diffuseness parameter Ψi(m, k) is zero if only a plane wave is present and takes a positive value smaller than or equal to one in the case of purely diffuse sound fields. In general, an alternative mapping function can be defined for the diffuseness which exhibits a similar behavior, i.e. giving 0 for direct sound only, and approaching 1 for a completely diffuse sound field.
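The sector-based parameter estimation described above can be sketched as follows for a block of time-frequency tiles. This is an illustrative sketch under several simplifying assumptions: ρ0 and c are set to one, the expectation E{ } is approximated by a plain average, and the scaling factor g is chosen such that a single plane wave yields zero diffuseness under this normalisation.

```python
import math

RHO0 = 1.0  # air density, set to 1 for illustration
C = 1.0     # sound velocity, set to 1 for illustration

def sector_parameters(tiles, g=1.0):
    """Estimate the sector DOA unit vector e_i and diffuseness Psi_i
    from complex segmental signals (W_i, X_i, Y_i) per tile."""
    Ix = Iy = E = 0.0
    for W, X, Y in tiles:
        # sector-based active intensity, 2D components (cf. Eq. (13))
        Ix += (W.conjugate() * X).real / (2 * RHO0 * C)
        Iy += (W.conjugate() * Y).real / (2 * RHO0 * C)
        # sector-based energy related quantity (cf. Eq. (15))
        E += (abs(W) ** 2 + abs(X) ** 2 + abs(Y) ** 2) / (4 * RHO0 * C ** 2)
    n = len(tiles)
    Ix, Iy, E = Ix / n, Iy / n, E / n    # expectation approximated by averaging
    norm = math.hypot(Ix, Iy)
    e_i = (-Ix / norm, -Iy / norm)       # DOA opposite to intensity (cf. Eq. (14))
    psi_i = 1.0 - norm / (g * E)         # diffuseness (cf. Eq. (16))
    return e_i, psi_i

# Single plane wave travelling towards +x in every tile: W = 1, X = 1, Y = 0
e_i, psi_i = sector_parameters([(1 + 0j, 1 + 0j, 0j)] * 4)
```

For this plane-wave input, the diffuseness evaluates to zero and the estimated DOA points towards −x, i.e. opposite to the direction of energy transport.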

Referring to the embodiment of Fig. 11, an alternative realization for the parameter estimation can be used for different microphone configurations. As exemplarily illustrated in Fig. 11, multiple linear arrays 1112, 1114, 1116 of directional microphones can be used. Fig. 11 also shows an example of how the 2D observation space can be divided into sectors 1101, 1102, 1103 for the given microphone configuration. The segmental microphone signals 115 can be determined by beamforming techniques such as filter-and-sum beamforming applied to each of the linear microphone arrays 1112, 1114, 1116. The beamforming may also be omitted, i.e. the directional patterns of the directional microphones may be used as the only means to obtain segmental microphone signals 115 that show the desired spatial selectivity for each sector (Segi). The DOA parameter θi within each sector can be estimated using common estimation techniques such as the "ESPRIT" algorithm (as described in R. Roy and T. Kailath: ESPRIT - estimation of signal parameters via rotational invariance techniques. IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, no. 7, pp. 984-995, Jul 1989). The diffuseness parameter Ψi for each sector can, for example, be determined by evaluating the temporal variation of the DOA estimates (as described in J. Ahonen, V. Pulkki: Diffuseness estimation using temporal variation of intensity vectors, IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2009, WASPAA '09, pp. 285-288, 18-21 Oct. 2009). Alternatively, known relations of the coherence between different microphones and the direct-to-diffuse sound ratio (as described in O. Thiergart, G. Del Galdo, E.A.P. Habets: Signal-to-reverberant ratio estimation based on the complex spatial coherence between omnidirectional microphones, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012, pp. 309-312, 25-30 March 2012) can be employed.

Fig. 12 shows a schematic illustration 1200 of an example circular array of omnidirectional microphones 1210 for obtaining higher order microphone signals (e.g. the input spatial audio signal 105). In the schematic illustration 1200 of Fig. 12, the circular array of omnidirectional microphones 1210 comprises, for example, 5 equidistant microphones arranged along a circle (dotted line) in a polar diagram. In embodiments, the circular array of omnidirectional microphones 1210 can be used to obtain the higher order (HO) microphone signals, as will be described in the following. In order to compute the example second-order microphone signals U and V from the omnidirectional microphone signals (provided by the omnidirectional microphones 1210), at least 5 independent microphone signals should be used. This can be achieved elegantly, e.g. using a Uniform Circular Array (UCA) as the one exemplarily shown in Fig. 12. The vector obtained from the microphone signals at a certain time and frequency can, for example, be transformed with a DFT (Discrete Fourier Transform). The microphone signals W, X, Y, U and V (i.e. the input spatial audio signal 105) can then be obtained by a linear combination of the DFT coefficients. Note that the DFT coefficients represent the coefficients of the Fourier series calculated from the vector of the microphone signals.
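The DFT-based computation of the Fourier-series coefficients from the microphone vector can be sketched as follows. This is an illustrative sketch: the function name is hypothetical, and the subsequent Bessel-function equalisation and linear combination needed to obtain the actual B-format signals are omitted.

```python
import cmath, math

def uca_fourier_coefficients(mic_samples):
    """Fourier-series coefficients P_m of the pressure sampled by a
    uniform circular array at one time-frequency tile: a plain DFT of
    the microphone vector, with the order m running from -N//2 to N//2."""
    N = len(mic_samples)
    return {m: sum(p * cmath.exp(-2j * cmath.pi * m * n / N)
                   for n, p in enumerate(mic_samples)) / N
            for m in range(-(N // 2), N // 2 + 1)}

# 5 omnidirectional microphones sampling p(phi) = cos(2*phi) on the
# circle: only the m = +/-2 coefficients are non-zero (0.5 each)
N = 5
samples = [math.cos(2.0 * (2.0 * math.pi * n / N)) for n in range(N)]
P = uca_fourier_coefficients(samples)
```

With 5 microphones, orders up to m = ±2 are resolved, matching the requirement of at least 5 independent signals for the second-order components U and V.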

Let Tm denote the generalized m-th order microphone signal, defined by the directivity patterns

Tm(cos) = pattern: cos(mθ)
Tm(sin) = pattern: sin(mθ) (17)

where θ denotes an azimuth angle, so that

W = T0(cos), X = T1(cos), Y = T1(sin), U = T2(cos), V = T2(sin).

Then, it can be proven that

Tm(cos) = (Am + A-m) / 2
Tm(sin) = (Am − A-m) / (2j) (18)

where

Am = Pm / (j^m · Jm(kr)) (19)

where j is the imaginary unit, k is the wave number, r and φ are the radius and the azimuth angle defining a polar coordinate system, Jm( ) is the m-th order Bessel function of the first kind, and Pm are the coefficients of the Fourier series of the pressure signal measured on the polar coordinates (r, φ).

Note that care has to be taken in the array design and in the implementation of the calculation of the (higher order) B-format signals to avoid excessive noise amplification due to the numerical properties of the Bessel function.

Mathematical background and derivations related to the described signal transformation can be found, e.g., in A. Kuntz: Wave field analysis using virtual circular microphone arrays, Dr. Hut, 2009, ISBN: 978-3-86853-006-3.

Further embodiments of the present invention relate to a method for generating a plurality of parametric audio streams 125 (θi, Ψi, Wi) from an input spatial audio signal 105 obtained from a recording in a recording space. For example, the input spatial audio signal 105 comprises an omnidirectional signal W and a plurality of different directional signals X, Y, Z, U, V. The method comprises providing at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) from the input spatial audio signal 105 (e.g. the omnidirectional signal W and the plurality of different directional signals X, Y, Z, U, V), wherein the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) are associated with corresponding segments Segi of the recording space. Furthermore, the method comprises generating a parametric audio stream for each of the at least two input segmental audio signals 115 (Wi, Xi, Yi, Zi) to obtain the plurality of parametric audio streams 125 (θi, Ψi, Wi).

Further embodiments of the present invention relate to a method for generating a plurality of loudspeaker signals 525 (L1, L2, ...) from a plurality of parametric audio streams 125 (θi, Ψi, Wi) derived from an input spatial audio signal 105 recorded in a recording space. The method comprises providing a plurality of input segmental loudspeaker signals 515 from the plurality of parametric audio streams 125 (θi, Ψi, Wi), wherein the input segmental loudspeaker signals 515 are associated with corresponding segments Segi of the recording space. Furthermore, the method comprises combining the input segmental loudspeaker signals 515 to obtain the plurality of loudspeaker signals 525 (L1, L2, ...).

Although the present invention has been described in the context of block diagrams where the blocks represent actual or logical hardware components, the present invention can also be implemented by a computer-implemented method. In the latter case, the blocks represent corresponding method steps where these steps stand for the functionalities performed by corresponding logical or physical hardware blocks.

The described embodiments are merely illustrative for the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the appending patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.

Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus like, for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.

The parametric audio streams 125 (θi, Ψi, Wi) can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the internet.

Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-ray, a CD, a ROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.

Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.

Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.

Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.

In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.

A further embodiment of the inventive method is therefore a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.

A further embodiment of the inventive method is therefore a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example via the internet.

A further embodiment comprises a processing means, for example a computer or a programmable logic device, configured to or adapted to perform one of the methods described herein.

A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.

A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.

In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may operate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus.

Embodiments of the present invention provide a high quality, realistic spatial sound recording and reproduction using simple and compact microphone configurations.

Embodiments of the present invention are based on directional audio coding (DirAC) (as described in T. Lokki, J. Merimaa, V. Pulkki: Method for Reproducing Natural or Modified Spatial Impression in Multichannel Listening, U.S. Patent 7,787,638 B2, Aug. 31, 2010 and V. Pulkki: Spatial Sound Reproduction with Directional Audio Coding, J. Audio Eng. Soc., Vol. 55, No. 6, pp. 503-516, 2007), which can be used with different microphone systems and with arbitrary loudspeaker setups. The benefit of DirAC is to reproduce the spatial impression of an existing acoustical environment as precisely as possible using a multichannel loudspeaker system. Within the chosen environment, responses (continuous sound or impulse responses) can be measured with an omnidirectional microphone (W) and with a set of microphones that enables measuring the direction-of-arrival (DOA) of sound and the diffuseness of sound. A possible method is to apply three figure-of-eight microphones (X, Y, Z) aligned with the corresponding Cartesian coordinate axes. A way to do this is to use a "SoundField" microphone, which directly yields all the desired responses. It is interesting to note that the signal of the omnidirectional microphone represents the sound pressure, whereas the dipole signals are proportional to the corresponding elements of the particle velocity vector.

From these signals, the DirAC parameters, i.e. the DOA of sound and the diffuseness of the observed sound field, can be measured in a suitable time/frequency raster with a resolution corresponding to that of the human auditory system. The actual loudspeaker signals can then be determined from the omnidirectional microphone signal based on the DirAC parameters (as described in V. Pulkki: Spatial Sound Reproduction with Directional Audio Coding, J. Audio Eng. Soc., Vol. 55, No. 6, pp. 503-516, 2007). Direct sound components can be played back by only a small number of loudspeakers (e.g. one or two) using panning techniques, whereas diffuse sound components can be played back from all loudspeakers at the same time.
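The analysis step just described (estimating a DOA and a diffuseness value per time/frequency tile from the B-format signals) can be sketched for the 2D case. This is a simplified illustration, not the patented procedure: the function name, the dipole scaling (a plane wave from azimuth θ is assumed to give X = W·cos θ, Y = W·sin θ), and the use of a plain time average for the expectation operators are all assumptions of this sketch.

```python
import numpy as np

def dirac_analysis(W, X, Y):
    """Estimate a DOA (radians) and a diffuseness value per frequency bin
    from 2D B-format STFT coefficients W, X, Y of shape (frames, bins)."""
    # Active intensity vector, proportional to Re{conj(W) * [X, Y]}.
    Ix = np.real(np.conj(W) * X)
    Iy = np.real(np.conj(W) * Y)
    # Energy density (constant factors dropped; they cancel in the ratio).
    E = 0.5 * (np.abs(W) ** 2 + np.abs(X) ** 2 + np.abs(Y) ** 2)
    # Expectation operators approximated by averaging over the time frames.
    Ix_m, Iy_m, E_m = Ix.mean(axis=0), Iy.mean(axis=0), E.mean(axis=0)
    doa = np.arctan2(Iy_m, Ix_m)                  # direction of arrival
    psi = 1.0 - np.hypot(Ix_m, Iy_m) / np.maximum(E_m, 1e-12)  # diffuseness
    return doa, psi
```

Under these conventions a single plane wave yields a diffuseness of zero and the correct azimuth, while mutually independent W, X, Y (a diffuse-like field) drive the averaged intensity toward zero and the diffuseness toward one.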

Embodiments of the present invention based on DirAC represent a simple approach to spatial sound recording with compact microphone configurations. In particular, the present invention avoids some systematic drawbacks of the prior art which limit the achievable sound quality and experience in practice.

In contrast to conventional DirAC, embodiments of the present invention provide a higher quality parametric spatial audio processing. Conventional DirAC relies on a simple global model for the sound field, employing only one DOA and one diffuseness parameter for the entire observation space. It is based on the assumption that the sound field can be represented by only one single direct sound component, such as a plane wave, and one global diffuseness parameter for each time/frequency tile. In practice, however, this simplified assumption about the sound field often does not hold. This is especially true in complex, real-world acoustics, e.g. where multiple sound sources such as talkers or instruments are active at the same time. Embodiments of the present invention, on the other hand, do not result in a model mismatch of the observed sound field, and the corresponding parameter estimates are more correct. It can also be prevented that a model mismatch results in direct sound components being rendered diffusely, such that no direction can be perceived when listening to the loudspeaker outputs. In embodiments, decorrelators can be used for generating uncorrelated diffuse sound played back from all loudspeakers (as described in V. Pulkki: Spatial Sound Reproduction with Directional Audio Coding, J. Audio Eng. Soc., Vol. 55, No. 6, pp. 503-516, 2007). In contrast to the prior art, where decorrelators often introduce an undesired added room effect, it is possible with the present invention to more correctly reproduce sound sources which have a certain spatial extent (as opposed to the case of using the simple sound field model of DirAC, which is not capable of precisely capturing such sound sources).
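The direct/diffuse split mentioned above (direct components panned to a few loudspeakers, diffuse components decorrelated and fed to all) can be sketched for one time/frequency tile. This is an illustrative sketch only: the name `render_tile`, the normalized cosine panning gains (instead of true VBAP), and the random phase rotations (standing in for proper decorrelation filters) are assumptions of this example, not the patented synthesis.

```python
import numpy as np

def render_tile(w, doa, psi, speaker_angles, rng):
    """Split one mono time/frequency tile w into per-loudspeaker feeds.

    The direct part (energy fraction 1 - psi) is panned toward the DOA;
    the diffuse part (fraction psi) goes to all loudspeakers at equal
    power, decorrelated here by random phase rotations.
    """
    angles = np.asarray(speaker_angles, dtype=float)
    g = np.maximum(np.cos(angles - doa), 0.0)      # cosine panning gains
    g /= np.linalg.norm(g) if np.any(g) else 1.0   # preserve direct power
    direct = np.sqrt(1.0 - psi) * w * g
    phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=angles.size))
    diffuse = np.sqrt(psi / angles.size) * w * phases
    return direct + diffuse
```

With psi = 0 all the tile energy lands on the loudspeakers nearest the DOA; with psi = 1 it is spread at equal power over all loudspeakers.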

Embodiments of the present invention provide a higher number of degrees of freedom in the assumed signal model, allowing for a better model match in complex sound scenes.


Furthermore, in case of using directional microphones to generate sectors (or any other time-invariant linear, e.g. physical, means), an increased inherent directivity of the microphones can be obtained. Therefore, there is less need for applying time-variant gains to avoid vague directions, crosstalk, and coloration. This leads to less nonlinear processing in the audio signal path, resulting in higher quality.

In general, more direct sound components can be rendered as direct sound sources (point sources/plane wave sources). As a consequence, less decorrelation artifacts occur, more (correctly) localizable events are perceivable, and a more exact spatial reproduction is achievable.

Embodiments of the present invention provide an increased performance of a manipulation in the parametric domain, e.g. directional filtering (as described in M. Kallinger, H. Ochsenfeld, G. Del Galdo, F. Kuech, D. Mahne, R. Schultz-Amling, and O. Thiergart: A Spatial Filtering Approach for Directional Audio Coding, 126th AES Convention, Paper 7653, Munich, Germany, 2009), compared to the simple global model, since a larger fraction of the total signal energy is attributed to direct sound events with a correct DOA associated with them, and a larger amount of information is available. The provision of more (parametric) information allows, for example, to separate multiple direct sound components, or direct sound components from early reflections impinging from different directions.

Specifically, embodiments provide the following features. In the 2D case, the full azimuthal angle range can be split into sectors covering reduced azimuthal angle ranges. In the 3D case, the full solid angle range can be split into sectors covering reduced solid angle ranges. Each sector can be associated with a preferred angle range. For each sector, segmental microphone signals can be determined from the received microphone signals, which predominantly consist of sound arriving from directions that are assigned to/covered by the particular sector. These microphone signals may also be determined artificially by simulated virtual recordings. For each sector, a parametric sound field analysis can be performed to determine directional parameters such as DOA and diffuseness. For each sector, the parametric directional information (DOA and diffuseness) predominantly describes the spatial properties of the angular range of the sound field that is associated with the particular sector. In case of playback, for each sector, loudspeaker signals can be determined based on the directional parameters and the segmental microphone signals. The overall output is then obtained by combining the outputs of all sectors. In case of manipulation, before computing the loudspeaker signals for playback, the estimated parameters and/or segmental audio signals may also be modified to achieve a manipulation of the sound scene.
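The per-sector signal formation described above can be sketched for the 2D case with equally spaced sectors and first-order steering (a + b·cos(θ − Θi), a cardioid for a = b = 0.5; sign conventions vary). This is a sketch under simplified assumptions: the function name is hypothetical, and only the segmental pressure signals are formed here, whereas the per-sector velocity (dipole) signals of the full method would additionally draw on higher-order components such as U and V.

```python
import numpy as np

def segment_bformat(W, X, Y, n_sectors=4, a=0.5, b=0.5):
    """Form segmental 'omni' signals W_i by steering the first-order
    pattern phi_i(theta) = a + b*cos(theta - Theta_i) toward equally
    spaced preferred directions Theta_i.

    For a plane wave from azimuth theta (W = s, X = s*cos(theta),
    Y = s*sin(theta)), each output equals s * phi_i(theta).
    """
    thetas = 2.0 * np.pi * np.arange(n_sectors) / n_sectors
    return [(th, a * W + b * (np.cos(th) * X + np.sin(th) * Y))
            for th in thetas]
```

Each returned pair (Θi, Wi) can then be fed to the same DirAC-style analysis per sector, in line with the sector-based processing described above.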
Claims

1. An apparatus (100) for generating a plurality of parametric audio streams (125) (θi, Ψi, Wi) from an input spatial audio signal (105) obtained from a recording in a recording space, the apparatus (100) comprising:

a segmentor (110) for providing at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) from the input spatial audio signal (105), wherein the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) are associated with corresponding segments (Segi) of the recording space; and

a generator (120) for generating a parametric audio stream for each of the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) to obtain the plurality of parametric audio streams (125) (θi, Ψi, Wi).

2. The apparatus (100) according to claim 1,

wherein the segments (Segi) of the recording space each represent a subset of directions within a two-dimensional (2D) plane or within a three-dimensional (3D) space.

3. The apparatus (100) according to claim 1 or 2,

wherein the segments (Segi) of the recording space each are characterized by an associated directional measure.

4. The apparatus (100) according to one of the claims 1 to 3,

wherein the apparatus (100) is configured for performing a sound field recording to obtain the input spatial audio signal (105);

wherein the segmentor (110) is configured to divide a full angle range of interest into the segments (Segi) of the recording space;

wherein the segments (Segi) of the recording space each cover a reduced angle range compared to the full angle range of interest.

5. The apparatus (100) according to one of the claims 1 to 4,

wherein the input spatial audio signal (105) comprises an omnidirectional signal (W) and a plurality of different directional signals (X, Y, Z, U, V).

6. The apparatus (100) according to one of the claims 1 to 5,

wherein the segmentor (110) is configured to generate the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) from the omnidirectional signal (W) and the plurality of the different directional signals (X, Y, Z, U, V) using a mixing operation which depends on the segments (Segi) of the recording space.

7. The apparatus (100) according to one of the claims 1 to 6,

wherein the segmentor (110) is configured to use a directivity pattern (305) (φi(θ)) for each of the segments (Segi) of the recording space;

wherein the directivity pattern (305) (φi(θ)) indicates a directivity of the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi).

8. The apparatus (100) according to claim 7,

wherein the directivity pattern (305) (φi(θ)) is given by

φi(θ) = a + b cos(θ − Θi),

wherein a and b denote multipliers which are modified to obtain a desired directivity pattern (305) (φi(θ));

wherein θ denotes an azimuthal angle and Θi indicates a preferred direction of the i'th segment of the recording space.

9. The apparatus (100) according to one of the claims 1 to 8,

wherein the generator (120) is configured for obtaining the plurality of parametric audio streams (125) (θi, Ψi, Wi);

wherein the plurality of parametric audio streams (125) (θi, Ψi, Wi) each comprise a component (Wi) of the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) and a corresponding parametric spatial information (θi, Ψi).

10. The apparatus (100) according to claim 9,

wherein the generator (120) is configured for performing a parametric spatial analysis for each of the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) to obtain the corresponding parametric spatial information (θi, Ψi).

11. The apparatus (100) according to claim 9 or 10,

wherein the parametric spatial information (θi, Ψi) of each of the parametric audio streams (125) (θi, Ψi, Wi) comprises a direction-of-arrival (DOA) parameter (θi) and/or a diffuseness parameter (Ψi).

12. The apparatus (100) according to one of the claims 1 to 11, further comprising:

a modifier (910) for modifying the plurality of parametric audio streams (125) (θi, Ψi, Wi) in a parametric signal representation domain;

wherein the modifier (910) is configured to modify at least one of the parametric audio streams (125) (θi, Ψi, Wi) using a corresponding modification control parameter (905).

13. An apparatus (500) for generating a plurality of loudspeaker signals (525) (L1, L2, ...) from a plurality of parametric audio streams (125) (θi, Ψi, Wi) derived from an input spatial audio signal (105) recorded in a recording space, the apparatus (500) comprising:

a renderer (510) for providing a plurality of input segmental loudspeaker signals (515) from the plurality of parametric audio streams (125) (θi, Ψi, Wi), wherein the input segmental loudspeaker signals (515) are associated with corresponding segments (Segi) of the recording space; and

a combiner (520) for combining the input segmental loudspeaker signals (515) to obtain the plurality of loudspeaker signals (525) (L1, L2, ...).

14. The apparatus (500) according to claim 13,

wherein the renderer (510) is configured for receiving the plurality of parametric audio streams (125) (θi, Ψi, Wi);

wherein the plurality of parametric audio streams (125) (θi, Ψi, Wi) each comprises a segmental audio component (Wi) and a corresponding parametric spatial information (θi, Ψi);

wherein the renderer (510) is configured for rendering each of the segmental audio components (Wi) using the corresponding parametric spatial information (505) (θi, Ψi) to obtain the plurality of input segmental loudspeaker signals (515).

15. A method for generating a plurality of parametric audio streams (125) (θi, Ψi, Wi) from an input spatial audio signal (105) obtained from a recording in a recording space, the method comprising:

providing at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) from the input spatial audio signal (105), wherein the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) are associated with corresponding segments (Segi) of the recording space; and

generating a parametric audio stream for each of the at least two input segmental audio signals (115) (Wi, Xi, Yi, Zi) to obtain the plurality of parametric audio streams (125) (θi, Ψi, Wi).

16. A method for generating a plurality of loudspeaker signals (525) (L1, L2, ...) from a plurality of parametric audio streams (125) (θi, Ψi, Wi) derived from an input spatial audio signal (105) recorded in a recording space, the method comprising:

providing a plurality of input segmental loudspeaker signals (515) from the plurality of parametric audio streams (125) (θi, Ψi, Wi), wherein the input segmental loudspeaker signals (515) are associated with corresponding segments (Segi) of the recording space; and

combining the input segmental loudspeaker signals (515) to obtain the plurality of loudspeaker signals (525) (L1, L2, ...).

17. A computer program having a program code for performing the method according to claim 15 when the computer program is executed on a computer.

18. A computer program having a program code for performing the method according to claim 16 when the computer program is executed on a computer.

Documents

Orders

Section Controller Decision Date

Application Documents

# Name Date
1 1415-KOLNP-2015-IntimationOfGrant30-10-2023.pdf 2023-10-30
2 FOA.pdf 2015-05-18
3 1415-KOLNP-2015-PatentCertificate30-10-2023.pdf 2023-10-30
4 F5.pdf 2015-05-18
5 F3.pdf 2015-05-18
6 1415-KOLNP-2015-FORM 3 [28-08-2023(online)].pdf 2023-08-28
7 F2.pdf 2015-05-18
8 1415-KOLNP-2015-Written submissions and relevant documents [22-08-2023(online)].pdf 2023-08-22
9 DW.pdf 2015-05-18
10 1415-KOLNP-2015-FORM-26 [19-07-2023(online)].pdf 2023-07-19
11 1415-KOLNP-2015.pdf 2015-05-25
12 1415-KOLNP-2015-Correspondence to notify the Controller [17-07-2023(online)].pdf 2023-07-17
13 1415-KOLNP-2015-FORM-26 [17-07-2023(online)].pdf 2023-07-17
14 1415-KOLNP-2015-(03-11-2015)-CORRESPONDENCE.pdf 2015-11-03
15 1415-KOLNP-2015-US(14)-ExtendedHearingNotice-(HearingDate-09-08-2023).pdf 2023-07-11
16 1415-KOLNP-2015-(03-11-2015)-ANNEXURE TO FORM 3.pdf 2015-11-03
17 1415-KOLNP-2015-(26-11-2015)-PA.pdf 2015-11-26
18 1415-KOLNP-2015-REQUEST FOR ADJOURNMENT OF HEARING UNDER RULE 129A [07-07-2023(online)].pdf 2023-07-07
19 1415-KOLNP-2015-(26-11-2015)-CORRESPONDENCE.pdf 2015-11-26
20 1415-KOLNP-2015-US(14)-HearingNotice-(HearingDate-11-07-2023).pdf 2023-06-28
21 1415-KOLNP-2015-(26-11-2015)-ASSIGNMENT.pdf 2015-11-26
22 1415-KOLNP-2015-FORM 3 [03-02-2023(online)].pdf 2023-02-03
23 1415-KOLNP-2015-(31-03-2016)-OTHERS.pdf 2016-03-31
24 1415-KOLNP-2015-FORM 3 [03-08-2022(online)].pdf 2022-08-03
25 1415-KOLNP-2015-(31-03-2016)-CORRESPONDENCE.pdf 2016-03-31
26 1415-KOLNP-2015-Information under section 8(2) [02-02-2022(online)].pdf 2022-02-02
27 1415-KOLNP-2015-Information under section 8(2) [04-08-2021(online)].pdf 2021-08-04
28 Other Patent Document [30-07-2016(online)].pdf 2016-07-30
29 1415-KOLNP-2015-Information under section 8(2) [22-02-2021(online)].pdf 2021-02-22
30 Other Patent Document [04-11-2016(online)].pdf 2016-11-04
31 1415-KOLNP-2015-FORM 3 [05-08-2020(online)].pdf 2020-08-05
32 Other Patent Document [28-01-2017(online)].pdf 2017-01-28
33 Other Patent Document [22-05-2017(online)].pdf 2017-05-22
34 1415-KOLNP-2015-CLAIMS [27-02-2020(online)].pdf 2020-02-27
35 1415-KOLNP-2015-DRAWING [27-02-2020(online)].pdf 2020-02-27
36 Information under section 8(2) [14-07-2017(online)].pdf 2017-07-14
37 1415-KOLNP-2015-FER_SER_REPLY [27-02-2020(online)].pdf 2020-02-27
38 1415-KOLNP-2015-Information under section 8(2) (MANDATORY) [30-11-2017(online)].pdf 2017-11-30
39 1415-KOLNP-2015-Information under section 8(2) (MANDATORY) [16-01-2018(online)].pdf 2018-01-16
40 1415-KOLNP-2015-OTHERS [27-02-2020(online)].pdf 2020-02-27
41 1415-KOLNP-2015-Information under section 8(2) (MANDATORY) [16-07-2018(online)].pdf 2018-07-16
42 1415-KOLNP-2015-PETITION UNDER RULE 137 [27-02-2020(online)]-1.pdf 2020-02-27
43 1415-KOLNP-2015-Information under section 8(2) (MANDATORY) [30-08-2018(online)].pdf 2018-08-30
44 1415-KOLNP-2015-PETITION UNDER RULE 137 [27-02-2020(online)].pdf 2020-02-27
45 1415-KOLNP-2015-FORM 4(ii) [27-11-2019(online)].pdf 2019-11-27
46 1415-KOLNP-2015-Information under section 8(2) (MANDATORY) [15-01-2019(online)].pdf 2019-01-15
47 1415-KOLNP-2015-Information under section 8(2) (MANDATORY) [07-08-2019(online)].pdf 2019-08-07
48 1415-KOLNP-2015-FER.pdf 2019-05-30
49 1415-KOLNP-2015-Certified Copy of Priority Document (MANDATORY) [10-06-2019(online)].pdf 2019-06-10
50 1415-KOLNP-2015-Verified English translation (MANDATORY) [10-06-2019(online)].pdf 2019-06-10

Search Strategy

1 search_29-05-2019.pdf

ERegister / Renewals

3rd: 25 Nov 2023

From 12/11/2015 - To 12/11/2016

4th: 25 Nov 2023

From 12/11/2016 - To 12/11/2017

5th: 25 Nov 2023

From 12/11/2017 - To 12/11/2018

6th: 25 Nov 2023

From 12/11/2018 - To 12/11/2019

7th: 25 Nov 2023

From 12/11/2019 - To 12/11/2020

8th: 25 Nov 2023

From 12/11/2020 - To 12/11/2021

9th: 25 Nov 2023

From 12/11/2021 - To 12/11/2022

10th: 25 Nov 2023

From 12/11/2022 - To 12/11/2023

11th: 25 Nov 2023

From 12/11/2023 - To 12/11/2024

12th: 23 Oct 2024

From 12/11/2024 - To 12/11/2025

13th: 12 Nov 2025

From 12/11/2025 - To 12/11/2026