Specification
Claims
1. A system configured to receive at least one first Audio Stream (116, 316) associated to an Audio scene to be reproduced,
wherein the system comprises:
at least one media Audio decoder (112) configured to decode at least one Audio signal from the at least one first Audio Stream (116, 316) for the representation of the Audio scene to the user;
at least one processor (120, 132), configured to:
decide, based at least on the user’s current movement data (122) and/or selection and/or setting, whether an Audio information message is to be reproduced, wherein the Audio information message is independent of the at least one Audio signal; and
cause, at the decision that the Audio information message is to be reproduced, the reproduction of the Audio information message,
wherein the at least one processor is configured to control a muxer or multiplexer (412) to merge packets of the Audio information message Stream (140) with packets of the at least one first Audio Stream (116) in one Stream (414) to obtain an addition of the Audio information message to the at least one Audio Stream (116).
2. The system of claim 1, wherein the at least one processor is configured to control the muxer or multiplexer (412) to merge, on the basis of Audio information message metadata, the packets of the Audio information message Stream (140) with the packets of the at least one first Audio Stream (116) in one Stream (414) to obtain the addition of the Audio information message to the at least one Audio Stream (116).
3. The system of any of the preceding claims, wherein the Audio information message inputted to the muxer or multiplexer (412) is uncompressed.
4. The system of any of the preceding claims, wherein the at least one media Audio decoder (112) is inputted by the one Stream (414) obtained from the at least one muxer or multiplexer (412) and a modified version of the Audio information message metadata (141, 234).
5. The system of any of the preceding claims, wherein the at least one media Audio decoder (112) is inputted by user’s current movement data (122).
6. The system of any of the preceding claims, wherein the Audio information message metadata (141) is obtained from a remote entity.
7. The system of any of claims 1-5, configured to generate the Audio information message metadata.
8. The system of any of the preceding claims, configured to generate the Audio information message.
9. The system of any of the preceding claims, wherein the at least one media Audio decoder (112) is inputted by one Stream (414) obtained by merging, multiplexing, or muxing the at least one first Audio Stream (116, 316) and the Audio information message metadata (141, 234).
10. The system of any of the preceding claims, wherein:
the at least one processor (120, 132) is configured to receive and/or process and/or manipulate the Audio information message metadata (141) so as to cause, at the decision that the Audio information message is to be reproduced, the synthesis and/or reproduction of the Audio information message according to the Audio information message metadata (141).
11. The system of any of the preceding claims, wherein the at least one processor (120, 132) is configured to:
receive a user’s current movement data and/or selection and/or setting; and
decide, based on at least one of the user’s current movement data (122) and/or selection and/or setting, whether an Audio information message is to be synthesized and/or reproduced.
12. The system of any of the preceding claims, wherein:
the at least one processor (120, 132) is configured to receive and/or process and/or manipulate Audio information message metadata (141) describing the Audio information message, so as to cause the reproduction of the Audio information message according to at least the Audio information message metadata (141).
13. The system of any of the preceding claims, wherein the at least one processor is configured to:
receive at least one Audio metadata (236) describing the at least one Audio signal encoded in the at least one first Audio Stream (116);
receive Audio information message metadata (141) associated with at least one Audio information message from at least one first Audio Stream (116);
at the decision that the Audio information message is to be reproduced, modify the Audio information message metadata (141) to enable the reproduction of the Audio information message, in addition to the reproduction of the at least one Audio signal.
14. The system of any of the preceding claims, wherein the at least one processor (120, 132) is configured to:
receive at least one Audio metadata (236) describing the at least one Audio signal encoded in the at least one first Audio Stream (116);
receive Audio information message metadata (141) associated with at least one Audio information message;
at the decision that the Audio information message is to be reproduced, enable the reproduction of the Audio information message, in addition to the reproduction of the at least one Audio signal; and
modify the Audio metadata (236) describing the at least one Audio signal to allow a merge of the at least one first Audio Stream (116) and at least one additional Audio Stream (140) in which the Audio information message is encoded.
15. The system of any of the preceding claims, further configured to store, for future use, the Audio information message metadata (141).
16. The system of any of the preceding claims, further configured to store, for future use, the Audio information message (140).
17. The system of any of the preceding claims, wherein the at least one processor (120, 132) is configured to control a muxer or multiplexer (412) to merge, on the basis of the Audio information message metadata (141), packets of the Audio information message (140) with packets of the at least one first Audio Stream (116) in one Stream (414).
18. The system of any of the preceding claims, wherein the Audio information message metadata (141) is encoded in a configuration frame and/or in a data frame including at least one of:
an identification tag,
an integer uniquely identifying the reproduction of the Audio information message metadata,
a type of the message,
a status,
an indication of dependency/non-dependency on the scene,
positional data,
gain data,
an indication of the presence of an associated text label,
number of available languages,
language of the Audio information message,
data text length,
data text of the associated text label, and/or
description of the Audio information message.
19. The system of any of the preceding claims, wherein the at least one processor (120, 132) is configured to perform at least one of the following operations:
extract Audio information message metadata from a Stream;
modify Audio information message metadata to activate the Audio information message and/or set/change its position;
embed metadata back in a Stream;
feed the Stream to an additional media decoder;
extract Audio metadata from the at least one first Audio Stream (116);
extract Audio information message metadata from an additional Stream;
modify Audio information message metadata to activate the Audio information message and/or set/change its position;
modify Audio metadata of the at least one first Audio Stream (116) so as to take into consideration the existence of the Audio information message and allow merging;
feed a Stream to the multiplexer or muxer to multiplex or mux them based on the information received from the at least one processor.
20. The system of any of the preceding claims, wherein the Audio Streams are formatted in the MPEG-H 3D Audio Stream format.
21. The system of any of the preceding claims, wherein at least one of its elements comprises a Dynamic Adaptive Streaming over HTTP, DASH, client and/or is configured to retrieve the data for each of the adaptation sets using the ISO Base Media File Format, ISO BMFF, or MPEG-2 Transport Stream, MPEG-2 TS.
22. The system of any of the preceding claims, configured to receive, from a remote entity (202), the at least one Audio Stream (116) associated to the Audio scene.
23. The system of any of the preceding claims, wherein the at least one first Audio stream is independent of the user’s current movement data (122) and/or selection and/or setting in the current Audio scene.
24. The system of any of the preceding claims, for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree Video environment, configured to:
receive at least one Video Stream (106) associated to an audio and video scene to be reproduced;
wherein the system comprises:
at least one media Video decoder (102) configured to decode at least one Video signal from the at least one Video Stream (106) for the representation of the audio and video scene to a user.
25. The system of any of the preceding claims, further comprising at least one loudspeaker.
26. The system of any of claims 1-24, connected to at least one loudspeaker.
27. A method comprising:
decoding at least one Audio signal from an Audio scene to be reproduced;
deciding, based on the user’s current movement data (122) and/or selection and/or setting, whether an Audio information message is to be reproduced, wherein the Audio information message is independent of the at least one Audio signal; and
causing, at the decision that the Audio information message is to be reproduced, the reproduction of the Audio information message,
wherein the method includes controlling a muxer or multiplexer (412) to merge packets of the Audio information message Stream (140) with packets of the at least one first Audio Stream (116) in one Stream (414) to obtain an addition of the Audio information message to the at least one Audio Stream (116).
28. The method of claim 27, further comprising:
processing and/or manipulating metadata (141) so as to cause, at the decision that the Audio information message is to be reproduced, the reproduction of the Audio information message according to the metadata (141) in such a way that the Audio information message is part of the Audio scene.
29. The method of any of claims 27-28, further comprising:
reproducing the Audio scene; and
deciding to further reproduce the Audio information message on the basis of the user’s movement data (122) and/or selection and/or setting.
30. A non-transitory storage unit comprising instructions which, when executed by a processor, cause the processor to perform a method according to any of claims 27-29.
Description
1. Introduction
In many applications, the delivery of audible messages can improve the user experience during media consumption. One of the most relevant applications of such messages is given by Virtual Reality (VR) content. In a VR environment, or similarly in Augmented Reality (AR), Mixed Reality (MR), or 360-degree Video environments, the user can usually visualise full 360-degree content using, for example, a Head Mounted Display (HMD) and listen to it over headphones (or similarly over loudspeakers, including correct rendering dependent on the user’s position). The user can usually move in the VR/AR space, or at least change the viewing direction - the so-called "viewport" for Video. In 360-degree Video environments that use classic reproduction systems (wide display screens) instead of HMDs, remote control devices can be used for emulating the user’s movement in the scene, and similar principles apply. It should be noted that 360-degree content may refer to any type of content that comprises more than one viewing angle at the same moment in time, from which the user can choose (for example by his head orientation, or by using a remote control device).
Compared with classic content consumption, for VR the content creators can no longer control what the user visualises at various moments in time - the current viewport. The user has the freedom to choose different viewports at each instance of time, out of the allowed or available viewports.
A common issue of VR content consumption is the risk that the user will miss important events in the Video scene due to a wrong viewport selection. To address this issue, the notion of Region Of Interest (ROI) was introduced and several concepts for signaling the ROI are considered. Although the ROI is commonly used to indicate to the user the region containing the recommended viewport, it can also be used for other purposes, such as: indicating the presence of a new character/object in the scene, indicating accessibility features associated with objects in the scene - basically any feature that can be associated with an element composing the video scene. For example, visual messages (e.g., "Turn your head to the left") can be used and overlaid over the current viewport. Alternatively, audible sounds can be used, either natural or synthetic, by playing them back at the position of the ROI. These Audio messages are known as "Earcons".
In the context of this application, the notion of Earcon will be used to characterise Audio messages conveyed for signaling the ROIs, but the signaling and the processing proposed can also be used for generic Audio messages with purposes other than signaling ROIs. One example of such Audio messages is given by Audio messages for conveying information/indication of the various options the user has in an interactive AR/VR/MR environment (e.g., "jump over the box to your left for entering room X"). Additionally, the VR example will be used, but the mechanisms described in this document apply to any media consumption environment.
2. Terminology and Definitions
The following terminology is used in the technical field:
• Audio Elements: Audio signals that can be represented, for example, as Audio objects, Audio channels, scene-based Audio (Higher Order Ambisonics - HOA), or a combination of all.
• Region-of-Interest (ROI): One region of the video content (or of the environment displayed or simulated) that is of interest to the user at one moment in time. This can commonly be a region on a sphere, for example, or a polygonal selection from a 2D map. The ROI identifies a specific region for a particular purpose, defining the borders of an object under consideration.
• User position information: location information (e.g., x, y, z coordinates), orientation information (yaw, pitch, roll), direction and speed of movement, etc.
• Viewport: Part of the spherical Video that is currently displayed and viewed by the user.
• Viewpoint: the center point of the Viewport.
• 360-degree video (also known as immersive video or spherical video): represents, in the context of this document, video content that contains more than one view (i.e., viewport) in one direction at the same moment in time. Such content can be created, for example, using an omnidirectional camera or a collection of cameras. During playback the viewer has control of the viewing direction.
• Adaptation Sets contain a media stream or set of media streams. In the simplest case, one Adaptation Set contains all audio and video for the content, but to reduce bandwidth, each stream can be split into a different Adaptation Set. A common case is to have one video Adaptation Set and multiple audio Adaptation Sets (one for each supported language). Adaptation Sets can also contain subtitles or arbitrary metadata.
• Representations allow an Adaptation Set to contain the same content encoded in different ways. In most cases, Representations will be provided in multiple bitrates. This allows clients to request the highest quality content that they can play without waiting to buffer. Representations can also be encoded with different codecs, allowing support for clients with different supported codecs.
• Media Presentation Description (MPD) is an XML syntax containing information about media segments, their relationships, and the information necessary to choose between them.
In the context of this application, the notions of the Adaptation Sets are used more generically, sometimes actually referring to the Representations. Also, the media streams (audio/video streams) are generally encapsulated first into Media Segments, which are the actual media files played by the client (e.g., a DASH client). Various formats can be used for the Media Segments, such as the ISO Base Media File Format (ISOBMFF), which is similar to the MPEG-4 container format, and MPEG-TS. The encapsulation into Media Segments and into different Representations/Adaptation Sets is independent of the methods described here; the methods apply to all the various options.
Additionally, the description of the methods in this document may be centered around a DASH Server-Client communication, but the methods are generic enough to work with other delivery environments, such as MMT, MPEG-2 Transport Stream, DASH-ROUTE, File Format for file playback, etc.
3. Current solutions
Current solutions are:
[1]. ISO/IEC 23008-3:2015, Information technology -- High efficiency coding and media delivery in heterogeneous environments -- Part 3: 3D Audio
[2]. N16950, Study of ISO/IEC DIS 23000-20 Omnidirectional Media Format
[3]. M41184, Use of Earcons for ROI Identification in 360-degree Video.
A delivery mechanism for 360-degree content is given by ISO/IEC 23000-20, Omnidirectional Media Format [2]. This standard specifies the media format for coding, storage, delivery, and rendering of omnidirectional images, Video, and the associated Audio. It provides information about the media codecs to be used for Audio and Video compression and additional metadata information for correct consumption of the 360-degree A/V content. It also specifies constraints and requirements on the delivery channels, such as Streaming over DASH/MMT or file-based playback.
The Earcon concept was first introduced in M41184, "Use of Earcons for ROI Identification in 360-degree Video" [3], which provides a mechanism for signaling the Earcon Audio data to the user.
However, some users have reported disappointment with these systems. Often, a great quantity of Earcons has proven annoying. When the designers have reduced the number of Earcons, some users have missed important information. Notably, each user has his/her own knowledge and level of experience, and would prefer a system suited to himself/herself. Just to give an example, each user would prefer to have Earcons reproduced at a preferred volume (independent, for example, of the volume used for the other Audio signals). It has proven difficult, for the system designer, to obtain a system which provides a good level of satisfaction for all possible users. A solution has therefore been sought that permits an increase in satisfaction for almost all users.
Further, it has proven difficult to reconfigure the systems, even for the designers. For example, they have experienced difficulty in preparing new releases of the Audio Streams and in updating the Earcons.
Further, a restricted system imposes certain limitations on the functionality; for example, the Earcons cannot be accurately identified within one Audio Stream. Moreover, the Earcons have to be always active, and can become annoying to the user if played back when they are not needed.
Further, the Earcon spatial information cannot be signaled nor modified by, for example, a DASH Client. Easy access to this information at the Systems level can enable additional features for a better user experience.
Moreover, there is no flexibility in addressing various types of Earcons (e.g., natural sound, synthetic sound, sound generated in the DASH Client, etc.).
All these issues lead to a poor user Quality of Experience. A more flexible architecture would therefore be preferable.
4. The present invention
In accordance with examples, there is provided a system for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree Video environment configured to:
receive at least one Video Stream associated to an audio and video scene to be reproduced; and
receive at least one first Audio Stream associated to the audio and video scene to be reproduced,
wherein the system comprises:
at least one media Video decoder configured to decode at least one Video signal from the at least one Video Stream for the representation of the audio and video scene to a user; and
at least one media Audio decoder configured to decode at least one Audio signal from the at least one first Audio Stream for the representation of the audio and video scene to the user;
a region of interest, ROI, processor, configured to:
decide, based at least on the user’s current viewport and/or head orientation and/or movement data and/or viewport metadata and/or audio information message metadata, whether an Audio information message associated to the at least one ROI is to be reproduced, wherein the audio information message is independent of the at least one Video signal and the at least one Audio signal; and
cause, at the decision that the information message is to be reproduced, the reproduction of the Audio information message.
In accordance with examples, there is provided a system for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree Video environment configured to:
receive at least one Video Stream; and
receive at least one first Audio Stream,
wherein the system comprises:
at least one media Video decoder configured to decode at least one Video signal from the at least one Video Stream for the representation of a VR, AR, MR or 360-degree Video environment scene to a user; and
at least one media Audio decoder configured to decode at least one Audio signal from the at least one first Audio Stream for the representation of an Audio scene to the user;
a region of interest, ROI, processor, configured to:
decide, based on the user’s current viewport and/or head orientation and/or movement data and/or viewport metadata and/or audio information message metadata, whether an Audio information message associated to the at least one ROI is to be reproduced, wherein the audio information message is an earcon; and
cause, at the decision that the information message is to be reproduced, the reproduction of the Audio information message.
The system may comprise:
a metadata processor configured to receive and/or process and/or manipulate audio information message metadata so as to cause, at the decision that the information message is to be reproduced, the reproduction of the Audio information message according to the audio information message metadata.
The ROI processor may be configured to:
receive a user’s current viewport and/or position and/or head orientation and/or movement data and/or other user related data; and
receive viewport metadata associated with at least one Video signal from the at least one Video Stream, the viewport metadata defining at least one ROI; and
decide, based on at least one of the user’s current viewport and/or position and/or head orientation and/or movement data and the viewport metadata and/or other criteria, whether an Audio information message associated to the at least one ROI is to be reproduced.
The system may comprise:
a metadata processor configured to receive and/or process and/or manipulate Audio information message metadata describing the Audio information message and/or Audio metadata describing the at least one Audio signal encoded in the at least one Audio Stream and/or the viewport metadata, so as to cause the reproduction of the Audio information message according to the Audio information message metadata and/or the Audio metadata describing the at least one Audio signal encoded in the at least one Audio Stream and/or the viewport metadata.
The ROI processor may be configured to:
in case the at least one ROI is outside the user’s current viewport and/or position and/or head orientation and/or movement data, cause the reproduction of an Audio information message associated to the at least one ROI, in addition to the reproduction of the at least one Audio signal; and
in case the at least one ROI is within the user’s current viewport and/or position and/or head orientation and/or movement data, disallow and/or deactivate the reproduction of the Audio information message associated to the at least one ROI.
The system may be configured to:
receive the at least one additional Audio Stream in which the at least one Audio information message is encoded,
wherein the system further comprises:
at least one muxer or multiplexer to merge, under the control of the metadata processor and/or the ROI processor and/or another processor, packets of the at least one additional Audio Stream with packets of the at least one first Audio Stream in one Stream, based on the decision provided by the ROI processor that the at least one Audio information message is to be reproduced, to cause the reproduction of the Audio information message in addition to the Audio scene.
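The packet merge performed by such a muxer can be roughly pictured as follows. This is a schematic sketch under assumed names: packets are modelled as (timestamp, payload) tuples, whereas a real MPEG-H 3D Audio muxer would operate on actual transport packets.

```python
from heapq import merge

def mux_streams(first_stream, earcon_stream, reproduce_earcon: bool):
    """Merge packets of the additional (Earcon) stream into the first Audio
    Stream, ordered by timestamp, only when the ROI processor has decided
    that the Audio information message is to be reproduced."""
    if not reproduce_earcon:
        return list(first_stream)
    return list(merge(first_stream, earcon_stream, key=lambda p: p[0]))

main = [(0, b"audio0"), (20, b"audio1"), (40, b"audio2")]
earcon = [(20, b"earcon0")]
print(mux_streams(main, earcon, reproduce_earcon=True))
# [(0, b'audio0'), (20, b'audio1'), (20, b'earcon0'), (40, b'audio2')]
```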
The system may be configured to:
receive at least one Audio metadata describing the at least one Audio signal encoded in the at least one Audio Stream;
receive Audio information message metadata associated with at least one Audio information message from at least one Audio Stream;
at the decision that the information message is to be reproduced, modify the Audio information message metadata to enable the reproduction of the Audio information message, in addition to the reproduction of the at least one Audio signal.
The system may be configured to:
receive at least one Audio metadata describing the at least one Audio signal encoded in the at least one Audio Stream;
receive Audio information message metadata associated with at least one Audio information message from the at least one Audio Stream;
at the decision that the Audio information message is to be reproduced, modify the Audio information message metadata to enable the reproduction of an Audio information message in association with the at least one ROI, in addition to the reproduction of the at least one Audio signal; and
modify the Audio metadata describing the at least one Audio signal to allow a merge of the at least one first Audio Stream and the at least one additional Audio Stream.
The system may be configured to:
receive at least one Audio metadata describing the at least one Audio signal encoded in the at least one Audio Stream;
receive Audio information message metadata associated with at least one Audio information message from at least one Audio Stream;
at the decision that the Audio information message is to be reproduced, provide the Audio information message metadata to a synthetic Audio generator to create a synthetic Audio Stream, so as to associate the Audio information message metadata to the synthetic Audio Stream, and provide the synthetic Audio Stream and the Audio information message metadata to a multiplexer or muxer to allow a merge of the at least one Audio Stream and the synthetic Audio Stream.
The system may be configured to:
obtain the Audio information message metadata from the at least one additional Audio Stream in which the Audio information message is encoded.
The system may comprise:
an Audio information message metadata generator configured to generate Audio information message metadata on the basis of the decision that an Audio information message associated to the at least one ROI is to be reproduced.
The system may be configured to:
store, for future use, the Audio information message metadata and/or the Audio information message Stream.
The system may comprise:
a synthetic Audio generator configured to synthesize an Audio information message on the basis of Audio information message metadata associated to the at least one ROI.
The metadata processor may be configured to control a muxer or multiplexer to merge, on the basis of the Audio metadata and/or Audio information message metadata, packets of the Audio information message Stream with packets of the at least one first Audio Stream in one Stream to obtain an addition of the Audio information message to the at least one Audio Stream.
The Audio information message metadata may be encoded in a configuration frame and/or in a data frame including at least one of the following (a hypothetical structured form of these fields is sketched after the list):
an identification tag,
an integer uniquely identifying the reproduction of the Audio information message metadata,
a type of the message,
a status,
an indication of dependency/non-dependency on the scene,
positional data,
gain data,
an indication of the presence of an associated text label,
number of available languages,
language of the Audio information message,
data text length,
data text of the associated text label, and/or
description of the Audio information message.
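One assumed in-memory form of such a configuration/data frame is sketched below; the field names and types are assumptions for illustration, not the normative bitstream syntax.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EarconMetadata:
    """Assumed structured form of the Audio information message metadata."""
    identification_tag: int      # identifies the Earcon within the scene
    reproduction_id: int         # integer uniquely identifying the reproduction
    message_type: int            # type of the message (e.g., natural/synthetic)
    status: bool                 # active or inactive
    scene_dependent: bool        # dependency/non-dependency on the scene
    azimuth: float               # positional data
    elevation: float
    distance: float
    gain_db: float               # gain data
    has_text_label: bool         # presence of an associated text label
    languages: List[str] = field(default_factory=list)    # available languages
    text_labels: List[str] = field(default_factory=list)  # data text per language
    description: Optional[str] = None

msg = EarconMetadata(
    identification_tag=1, reproduction_id=42, message_type=0, status=True,
    scene_dependent=False, azimuth=90.0, elevation=0.0, distance=1.0,
    gain_db=-3.0, has_text_label=True,
    languages=["en"], text_labels=["New character on the left"],
)
```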
The metadata processor and/or the ROI processor may be configured to perform at least one of the following operations (a condensed pipeline over these operations is sketched after the list):
extract Audio information message metadata from a Stream;
modify Audio information message metadata to activate the Audio information message and/or set/change its position;
embed metadata back in a Stream;
feed the Stream to an additional media decoder;
extract Audio metadata from the at least one first Audio Stream;
extract Audio information message metadata from an additional Stream;
modify Audio information message metadata to activate the Audio information message and/or set/change its position;
modify Audio metadata of the at least one first Audio Stream so as to take into consideration the existence of the Audio information message and allow merging;
feed a Stream to the multiplexer or muxer to multiplex or mux them based on the information received from the ROI processor.
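Taken together, these operations form a small pipeline: extract the metadata, activate and reposition the Earcon, update the main stream's metadata so the merge is consistent, then feed both streams to the muxer. The sketch below condenses this with minimal stand-in types; all names are assumptions, and a real implementation would parse and serialise bitstream metadata instead.

```python
from dataclasses import dataclass

@dataclass
class EarconMeta:
    status: bool = False
    azimuth: float = 0.0
    elevation: float = 0.0

@dataclass
class AudioMeta:
    num_elements: int = 1

@dataclass
class Stream:
    meta: object
    packets: list

def process_and_merge(first: Stream, earcon: Stream, roi_pos: tuple) -> Stream:
    earcon_meta: EarconMeta = earcon.meta    # extract Earcon metadata
    audio_meta: AudioMeta = first.meta       # extract Audio metadata

    earcon_meta.status = True                # activate the Audio information message
    earcon_meta.azimuth, earcon_meta.elevation = roi_pos  # set/change its position

    audio_meta.num_elements += 1             # account for the Earcon, allow merging

    # Feed both to the muxer; here the merge is a simple concatenation.
    return Stream(meta=(audio_meta, earcon_meta),
                  packets=first.packets + earcon.packets)

merged = process_and_merge(Stream(AudioMeta(), [b"a0"]),
                           Stream(EarconMeta(), [b"e0"]),
                           roi_pos=(90.0, 0.0))
print(merged.meta, len(merged.packets))
```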
The ROI processor may be configured to perform a local search for an additional Audio Stream in which the Audio information message is encoded and/or for Audio information message metadata and, in case of non-retrieval, request the additional Audio Stream and/or the Audio information message metadata from a remote entity.
The ROI processor may be configured to perform a local search for an additional Audio Stream and/or Audio information message metadata and, in case of non-retrieval, cause a synthetic Audio generator to generate the Audio information message Stream and/or the Audio information message metadata.
The system may be configured to:
receive the at least one additional Audio Stream in which at least one Audio information message associated to the at least one ROI is included; and
decode the at least one additional Audio Stream if the ROI processor decides that an Audio information message associated to the at least one ROI is to be reproduced.
The system may comprise:
at least one first Audio decoder for decoding the at least one Audio signal from at least one first Audio Stream;
at least one additional Audio decoder for decoding the at least one Audio information message from an additional Audio Stream; and
at least one mixer and/or renderer for mixing and/or superimposing the Audio information message from the at least one additional Audio Stream with the at least one Audio signal from the at least one first Audio Stream.
The system may be configured to keep track of metrics associated to historical and/or statistical data associated to the reproduction of the Audio information message, so as to disable the Audio information message’s reproduction if the metric is over a predetermined threshold.
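Such tracking can be as simple as a per-Earcon counter compared against a threshold; the following is one assumed realisation, not the claimed mechanism.

```python
from collections import Counter

class EarconHistory:
    """Disable an Earcon's reproduction once it has already been
    reproduced a predetermined number of times."""
    def __init__(self, max_reproductions: int = 3):
        self.max_reproductions = max_reproductions
        self.counts = Counter()

    def allow(self, earcon_id: int) -> bool:
        if self.counts[earcon_id] >= self.max_reproductions:
            return False
        self.counts[earcon_id] += 1
        return True

history = EarconHistory(max_reproductions=2)
print([history.allow(7) for _ in range(4)])  # [True, True, False, False]
```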
The ROI processor’s decision may be based on a prediction of the user’s current viewport and/or position and/or head orientation and/or movement data in relationship to the position of the ROI.
The system may be configured to receive the at least one first Audio Stream and, at the decision that the information message is to be reproduced, to request an Audio information message Stream from a remote entity.
The system may be configured to establish whether to reproduce two Audio information messages at the same time or whether to select a higher-priority Audio information message to be reproduced with priority with respect to a lower-priority Audio information message.
The system may be configured to identify an Audio information message among a plurality of Audio information messages encoded in one additional Audio Stream on the basis of the address and/or position of the Audio information messages in an Audio Stream.
The Audio Streams may be formatted in the MPEG-H 3D Audio Stream format.
The system may be configured to:
receive data about the availability of a plurality of adaptation sets, the available adaptation sets including at least one Audio scene adaptation set for the at least one first Audio Stream and at least one Audio message adaptation set for the at least one additional Audio Stream containing at least one Audio information message;
create, based on the ROI processor’s decision, selection data identifying which of the adaptation sets are to be retrieved, the available adaptation sets including at least one Audio scene adaptation set and/or at least one Audio message adaptation set; and
request and/or retrieve the data for the adaptation sets identified by the selection data,
wherein each adaptation set groups different encodings for different bitrates.
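In a DASH setting, the selection data can be thought of as the subset of available Adaptation Sets the client actually requests. A minimal sketch, assuming the role-to-id mapping below (a real client would derive it by parsing the MPD):

```python
def select_adaptation_sets(available: dict, reproduce_earcon: bool) -> list:
    """Return the Adaptation Set ids to retrieve, based on the ROI
    processor's decision. 'available' maps a role to an Adaptation Set id,
    e.g. {"audio_scene": 1, "audio_message": 2}."""
    selection = [available["audio_scene"]]             # always fetch the Audio scene
    if reproduce_earcon and "audio_message" in available:
        selection.append(available["audio_message"])   # also fetch the Earcon stream
    return selection

print(select_adaptation_sets({"audio_scene": 1, "audio_message": 2},
                             reproduce_earcon=True))   # [1, 2]
```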
The system may be such that at least one of its elements comprises a Dynamic Adaptive Streaming over HTTP, DASH, client and/or is configured to retrieve the data for each of the adaptation sets using the ISO Base Media File Format, ISO BMFF, or MPEG-2 Transport Stream, MPEG-2 TS.
The ROI processor may be configured to check correspondences between the ROI and the current viewport and/or position and/or head orientation and/or movement data so as to check whether the ROI is represented in the current viewport, and, in case the ROI is outside the current viewport and/or position and/or head orientation and/or movement data, to audibly signal the presence of the ROI to the user.
The ROI processor may be configured to check correspondences between the ROI and the current viewport and/or position and/or head orientation and/or movement data so as to check whether the ROI is represented in the current viewport, and, in case the ROI is within the current viewport and/or position and/or head orientation and/or movement data, to refrain from audibly signaling the presence of the ROI to the user.
The system may be configured to receive, from a remote entity, the at least one video stream associated to the video environment scene and the at least one audio stream associated to the audio scene, wherein the audio scene is associated to the video environment scene.
The ROI processor may be configured to choose, among a plurality of audio information messages to be reproduced, the reproduction of one first audio information message before a second audio information message.
The system may comprise a cache memory to store an audio information message received from a remote entity or generated synthetically, so as to reuse the audio information message at different instances of time.
The audio information message may be an earcon.
The at least one video stream and/or the at least one first audio stream may be part of the current video environment scene and/or video audio scene, respectively, and independent of the user’s current viewport and/or head orientation and/or movement data in the current video environment scene and/or video audio scene.
The system may be configured to request the at least one first audio stream and/or the at least one video stream from a remote entity in association with the audio stream and/or video environment stream, respectively, and to reproduce the at least one audio information message on the basis of the user’s current viewport and/or head orientation and/or movement data.
The system may be configured to request the at least one first audio stream and/or the at least one video stream from a remote entity in association with the audio stream and/or video environment stream, respectively, and to request, from the remote entity, the at least one audio information message on the basis of the user’s current viewport and/or head orientation and/or movement data.
The system may be configured to request the at least one first audio stream and/or the at least one video stream from a remote entity in association with the audio stream and/or video environment stream, respectively, and to synthesize the at least one audio information message on the basis of the user’s current viewport and/or head orientation and/or movement data.
The system may be configured to check at least one additional criterion for the reproduction of the audio information message, the criteria further including a user’s selection and/or a user’s setting.
The system may be configured to check at least one additional criterion for the reproduction of the audio information message, the criteria further including the state of the system.
The system may be configured to check at least one additional criterion for the reproduction of the audio information message, the criteria further including the number of audio information message reproductions that have already been performed.
The system may be configured to check at least one additional criterion for the reproduction of the audio information message, the criteria further including a flag in a datastream obtained from a remote entity.
In accordance with an aspect, there is provided a system comprising a client configured as the system of any of the examples above and/or below, and a remote entity configured as a server for delivering the at least one Video Stream and the at least one Audio Stream.
The remote entity may be configured to search, in a database, intranet, internet, and/or geographical network, for the at least one additional Audio Stream and/or Audio information message metadata and, in case of retrieval, deliver the at least one additional Audio Stream and/or the Audio information message metadata.
The remote entity may be configured to synthesize the at least one additional Audio Stream and/or generate the Audio information message metadata.
In accordance with an aspect, there may be provided a method for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree video environment comprising:
decoding at least one Video signal from the at least one video and audio scene to be reproduced to a user;
decoding at least one Audio signal from the video and audio scene to be reproduced;
deciding, based on the user’s current viewport and/or head orientation and/or movement data and/or metadata, whether an Audio information message associated to the at least one ROI is to be reproduced, wherein the Audio information message is independent of the at least one Video signal and the at least one Audio signal; and
causing, at the decision that the information message is to be reproduced, the reproduction of the Audio information message.
In accordance with an aspect, there may be provided a method for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree video environment comprising:
decoding at least one Video signal from the at least one Video Stream for the representation of a VR, AR, MR or 360-degree Video environment scene to a user;
decoding at least one Audio signal from the at least one first Audio Stream for the representation of an Audio scene to the user;
deciding, based on the user’s current viewport and/or head orientation and/or movement data and/or metadata, whether an Audio information message associated to the at least one ROI is to be reproduced, wherein the Audio information message is an earcon; and
causing, at the decision that the information message is to be reproduced, the reproduction of the Audio information message.
The methods above and/or below may comprise:
receiving and/or processing and/or manipulating metadata so as to cause, at the decision that the information message is to be reproduced, the reproduction of the Audio information message according to the metadata in such a way that the Audio information message is part of the Audio scene.
The methods above and/or below may comprise:
reproducing the audio and video scene; and
deciding to further reproduce the audio information message on the basis of the user’s current viewport and/or head orientation and/or movement data and/or metadata.
The methods above and/or below may comprise:
reproducing the audio and video scene; and
in case the at least one ROI is outside the user’s current viewport and/or position and/or head orientation and/or movement data, causing the reproduction of an Audio information message associated to the at least one ROI, in addition to the reproduction of the at least one Audio signal; and/or
in case the at least one ROI is within the user’s current viewport and/or position and/or head orientation and/or movement data, disallowing and/or deactivating the reproduction of the Audio information message associated to the at least one ROI.
In accordance with examples, there is provided a system for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree Video environment configured to:
receive at least one Video Stream; and
receive at least one first Audio Stream,
wherein the system comprises:
at least one media Video decoder configured to decode at least one Video signal from the at least one Video Stream for the representation of a VR, AR, MR or 360-degree Video environment scene to a user; and
at least one media Audio decoder configured to decode at least one Audio signal from the at least one first Audio Stream for the representation of an Audio scene to the user;
a region of interest, ROI, processor, configured to:
decide, based on the user’s current viewport and/or head orientation and/or movement data and/or the metadata, whether an Audio information message associated to the at least one ROI is to be reproduced; and
cause, at the decision that the information message is to be reproduced, the reproduction of the Audio information message.
In examples, there is provided a system for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree Video environment configured to:
receive at least one Video Stream; and
receive at least one first Audio Stream,
wherein the system comprises:
at least one media Video decoder configured to decode at least one Video signal from the at least one Video Stream for the representation of a VR, AR, MR or 360-degree Video environment scene to a user; and
at least one media Audio decoder configured to decode at least one Audio signal from the at least one first Audio Stream for the representation of an Audio scene to a user;
a region of interest, ROI, processor, configured to decide, based on the user’s current viewport and/or position and/or head orientation and/or movement data and/or metadata and/or other criteria, whether an Audio information message associated to the at least one ROI is to be reproduced; and
a metadata processor configured to receive and/or process and/or manipulate metadata so as to cause, at the decision that the information message is to be reproduced, the reproduction of the Audio information message according to the metadata in such a way that the Audio information message is part of the Audio scene.
According to an aspect, there is provided a non-transitory storage unit comprising instructions which, when executed by a processor, cause the processor to perform a method as above and/or below.
5. Description of the drawings
Figs. 1-5, 5a, and 6 show examples of implementations;
Fig. 7 shows a method according to an example;
Fig. 8 shows an example of an implementation.
6. Examples
6.1 General examples
Fig. 1 shows an example of a system 100 for a virtual reality, VR, augmented reality, AR, mixed reality, MR, or 360-degree Video environment. The system 100 may be associated, for example, to a content consumption device (e.g., a Head-Mounted Display or the like), which reproduces visual data in a spherical or hemispherical display intimately associated to the head of the user.
The system 100 may comprise at least one media Video decoder 102 and at least one media Audio decoder 112. The system 100 may receive at least one Video Stream 106 in which a Video signal is encoded for the representation of a VR, AR, MR or 360-degree Video environment scene 118a to a user. The system 100 may receive at least one first Audio Stream 116, in which an Audio signal is encoded for the representation of an Audio scene 118b to a user.
The system 100 may also comprise a region of interest, ROI, processor 120. The ROI processor 120 may process data associated to a ROI. In general terms, the presence of the ROI may be signalled in viewport metadata 131. The viewport metadata 131 may be encoded in the Video Stream 106 (in other examples, the viewport metadata 131 may be encoded in other Streams). The viewport metadata 131 may comprise, for example, positional information (e.g., coordinate information) associated to the ROI. For example, the ROI may, in examples, be understood as a rectangle (identified by coordinates such as the position of one of the four vertexes of the rectangle in the spherical Video and the lengths of the sides of the rectangle). The ROI is normally projected in the spherical Video. The ROI is normally associated to a visible element which is believed (according to a particular configuration) to be of interest to the user. For example, the ROI may be associated to a rectangular area displayed by the content consumption device (or somehow visible to the user).
The ROI processor 120 may, inter alia, control operations of the media Audio decoder 112.
The ROI processor 120 may obtain data 122 associated to the user’s current viewport and/or position and/or head orientation and/or movement (also virtual data associated to the virtual position may be understood, in some examples, as being part of data 122). These data 122 may be provided at least partially, for example, by the content consumption device, or by positioning/detecting units.
The ROI processor 120 may check correspondences between the ROI and the user’s current viewport and/or position (actual or virtual) and/or head orientation and/or movement data 122 (in examples, other criteria may be used). For example, the ROI processor may check if the ROI is represented in the current viewport. In case a ROI is only partially represented in the viewport (e.g., on the basis of the user’s head movements), it may be determined, for example, whether a minimum percentage of the ROI is displayed on the screen. In any case, the ROI processor 120 is capable of recognizing whether the ROI is not represented or visible to the user.
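The "minimum percentage" test can be sketched as an area-overlap computation. The flat rectangle model and the 50% threshold below are assumptions for clarity; on the sphere, the ROI's solid angle would be used instead.

```python
def visible_fraction(vp, roi) -> float:
    """Fraction of the ROI's area inside the viewport.
    Rectangles are (x, y, width, height) tuples in degrees."""
    vx, vy, vw, vh = vp
    rx, ry, rw, rh = roi
    overlap_w = max(0.0, min(vx + vw, rx + rw) - max(vx, rx))
    overlap_h = max(0.0, min(vy + vh, ry + rh) - max(vy, ry))
    return (overlap_w * overlap_h) / (rw * rh)

MIN_VISIBLE = 0.5  # assumed threshold: at least half the ROI must be on screen
fraction = visible_fraction((0, 0, 90, 60), (85, 10, 20, 20))
print(fraction, fraction >= MIN_VISIBLE)  # 0.25 False: ROI counts as not displayed
```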
In case the ROI is considered to be outside the user’s current viewport and/or position and/or head orientation and/or movement data 122, the ROI processor 120 may audibly signal the presence of the ROI to the user. For example, the ROI processor 120 may request the reproduction of an Audio information message (Earcon) in addition to the Audio signal decoded from the at least one first Audio Stream 116.
In case the ROI is considered to be within the user’s current viewport and/or position and/or head orientation and/or movement data 122, the ROI processor may decide to avoid the reproduction of the Audio information message.
The Audio information message may be encoded in an Audio Stream 140 (the Audio information message Stream), which may be the same as the Audio Stream 116 or a different Stream. The Audio Stream 140 may be generated by the system 100 or may be obtained from an external entity (e.g., a server). Audio Metadata, such as Audio information message metadata 141, may be defined for describing properties of the Audio information Stream 140.
The Audio information message may be superposed (or mixed or muxed or merged or combined or composed) onto the signal encoded in the Audio Stream 116, or may not be selected, e.g., simply on the basis of a decision of the ROI processor 120. The ROI processor 120 may base its decision on the viewport and/or position and/or head orientation and/or movement data 122, metadata (such as the viewport metadata 131 or other metadata) and/or other criteria (e.g., selections, the state of the system, the number of Audio information message reproductions that have already been performed, particular functions and/or operations, user preferred settings that can disable the usage of Earcons, and so on).
A metadata processor 132 may be implemented. The metadata processor 132 may be interposed, for example, between the ROI processor 120 (by which it may be controlled) and the media Audio decoder 112 (which may be controlled by the metadata processor). In examples, the metadata processor is a section of the ROI processor 120. The metadata processor 132 may receive, generate, process and/or manipulate the Audio information message metadata 141. The metadata processor 132 may also process and/or manipulate metadata of the Audio Stream 116, for example for muxing the Audio Stream 116 with the Audio information message Stream 140. In addition or alternatively, the metadata processor 132 may receive metadata of the Audio Stream 116, for example from a server (e.g., a remote entity).
The metadata processor 132 may therefore change the Audio scene reproduction and adapt the Audio information message to particular situations and/or selections and/or states.
Some of the advantages of some implementations are discussed here.
The Audio information messages can be accurately identified, e.g., using the Audio information message metadata 141.
The Audio information messages may be easily activated/deactivated, e.g., by modifying the metadata (e.g., by the metadata processor 132). The Audio information messages may be, for example, enabled/disabled based on the current viewport and the ROI information (and also on special functions or effects that are to be achieved).
The Audio information message (containing, for example, status, type, spatial information and so on) can be easily signalled and modified by common equipment, such as a Dynamic Adaptive Streaming over HTTP (DASH) Client, for example.
Easy access to the Audio information message (containing, for example, status, type, spatial information and so on) at the systems level can therefore enable additional features for a better user experience. Hence, the system 100 may be easily customized and permits further implementations (e.g., specific applications) which may be performed by personnel independent of the designers of the system 100.
Moreover, flexibility is achieved in addressing various types of Audio information messages (e.g., natural sound, synthetic sound, sound generated in the DASH Client, etc.).
Other advantages (which will also be apparent from the following examples) include:
• Usage of text labels in the metadata (as the basis for displaying something or generating the Earcon).
• Adaptation of the Earcon position based on the device (if it is an HMD, an accurate location is wanted; if it is a loudspeaker setup, a better way may be to use a different location - direct into one loudspeaker).
• Different device classes:
o The Earcon metadata can be created in such a way that the Earcon is signaled to be active.
o Some devices will know only how to parse the metadata and reproduce the Earcon.
o Some newer devices that additionally have a better ROI processor can decide to deactivate it in case it is not needed.
• More information and an additional figure about the adaptation sets.
Therefore, in a VR/AR environment the user can usually visualize full 360-degree content using, for example, a Head Mounted Display (HMD) and listen to it over headphones. The user can usually move in the VR/AR space, or at least change the viewing direction - the so-called "viewport" for video. Compared with classic content consumption, for VR the content creators can no longer control what the user visualizes at various moments in time - the current viewport. The user has the freedom to choose different viewports at each instance of time, out of the allowed or available viewports. In order to indicate to the user the Region Of Interest (ROI), audible sounds can be used, either natural or synthetic, by playing them back at the position of the ROI. These audio messages are known as "Earcons". This invention proposes a solution for efficient delivery of such messages and proposes an optimized receiver behaviour for making use of the Earcons without affecting the user experience and the content consumption. This leads to an increased Quality of Experience. This can be achieved by using dedicated metadata and metadata manipulation mechanisms at the systems level for enabling or disabling the Earcons in the final scene.
The metadata processor 132 may be configured to receive and/or process and/or manipulate metadata 141 so as to cause, at the decision that the information message is to be reproduced, the reproduction of the Audio information message according to the metadata 141.
Audio signals (e.g., those for representing the scene) may be understood as being part of the audio scene (e.g., an audio scene downloaded from a remote server). Audio signals are in general semantically meaningful for the audio scene, and all the audio signals present together construct the audio scene. Audio signals may be encoded together in one audio bitstream. Audio signals may be created by the content creator and/or may be associated to the particular scene and/or may be independent of the ROI.
The audio information message (e.g., an earcon) may be understood as not semantically meaningful to the audio scene. It may be understood as an independent sound that can be generated artificially, such as a recorded sound, a recorded voice of a person, etc. It can also be device-dependent (a system sound generated at the press of a button on the remote control, for example). The audio information message (e.g., earcon) may be understood as being meant to guide the user in the scene, without being part of the scene.
The audio information message may be independent of the audio signals as above. According to different examples, it may be either included in the same bitstream, or transmitted in a separate bitstream, or generated by the system 100.
An example of an audio scene composed of multiple audio signals may be:
-- Audio Scene: a concert room which contains 5 audio signals:
--- Audio Signal 1: The sound of a piano
--- Audio Signal 2: The voice of the singer
--- Audio Signal 3: The voice of Person 1, part of the audience
--- Audio Signal 4: The voice of Person 2, part of the audience
--- Audio Signal 5: The sound created by the clock on the wall
The audio information message may be, for example, a recorded sound like "look to the piano player" (the piano being the ROI). If the user is already looking at the piano player, the audio message will not be played back.
Another example: a door (e.g., a virtual door) is opened behind the user and a new person enters the room; the user is not looking there. The Earcon can be triggered, based on this (information regarding the VR environment, such as the virtual position), to announce to the user that something happens behind him.
In examples, each scene (e.g., with the related audio and video streams) is transmitted from the server to the client when the user changes the environment.
The audio information message may be flexible. In particular (a dispatch over the following cases is sketched after the list):
- the audio information message can be located in the same audio stream associated to the scene to be reproduced;
- the audio information message can be located in an additional audio stream;
- the audio information message can be completely missing, while only the metadata describing the earcon is present in the stream, and the audio information message can be generated in the system;
- the audio information message can be completely missing, as well as the metadata describing the audio information message, in which case the system generates both (the earcon and the metadata) based on other information about the ROI in the stream.
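These four cases can be handled uniformly by a small dispatch: use what the stream provides and generate whatever is missing. A schematic sketch; the helper functions are hypothetical stand-ins so the example runs end to end.

```python
def obtain_earcon(stream_audio, stream_meta, roi_info):
    """Resolve the Audio information message from the four possible
    delivery configurations."""
    meta = stream_meta or generate_metadata_from_roi(roi_info)  # case 4: nothing delivered
    if stream_audio is not None:          # cases 1 and 2: Earcon carried in a stream
        return stream_audio, meta
    return synthesize_earcon(meta), meta  # case 3: metadata only, generate the audio

def generate_metadata_from_roi(roi_info):
    return {"status": True, "position": roi_info["position"], "gain_db": 0.0}

def synthesize_earcon(meta):
    return b"\x00" * 480   # placeholder for one synthesised audio frame

audio, meta = obtain_earcon(None, None, {"position": (90.0, 0.0)})
print(meta, len(audio))
```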
The Audio information message is in general independent of any Audio signal that is part of the Audio Scene, and is not used for the representation of the Audio Scene.
Examples of systems embodying or including parts which embody system 100 are provided below.
6.2 The example of Fig. 2
Fig. 2 shows a system 200 (which may contain at least a part embodying system 100) which is here represented as being subdivided into a server side 202, a media delivery side 203, a client side 204, and/or a media consumption device side 206. Each of the sides 202, 203, 204, and 206 is a system itself and may be combined with any other system to obtain another system. Here, the Audio information messages are referred to as Earcons, even if it is possible to generalize them to any kind of Audio information message.
The client side 204 may receive the at least one Video Stream 106 and/or the at least one Audio Stream 116 from the server side 202 through a media delivery side 203.
The delivery side 203 may be, for example, based on a communication system such as a cloud system, a network system, a geographical communication network, or well-known media transport formats (MPEG-2 TS Transport Stream, DASH, MMT, DASH ROUTE, etc.), or even a file-based storage. The delivery side 203 may be capable of performing communications in the form of electric signals (e.g., over cable, wireless, etc.) and/or by distributing data packets (e.g., according to a particular communication protocol) with bitstreams in which Audio and Video signals are encoded. The delivery side 203 may, however, be embodied by a point-to-point link, a serial or parallel connection, and so on. The delivery side 203 may perform a wireless connection, e.g., according to protocols such as WiFi, Bluetooth, and so on.
The client side 204 may be associated to a media consumption device, e.g., an HMD, for example, into which the user's head may be inserted (other devices may be used, however). Therefore, the user may experience a Video and Audio scene (e.g., a VR scene) prepared by the client side 204 on the basis of Video and Audio data provided by the server side 202.
Other implementations are, however, possible.
The server side 202 is here represented as having a media encoder 240 (which can cover Video encoders, Audio encoders, subtitle encoders, etc.). This encoder 240 may be associated, for example, to an Audio and Video scene to be represented. The Audio scene may be, for example, for recreating an environment and is associated to the at least one Audio and Video data Stream 106, 116, which may be encoded on the basis of the position (or virtual position) reached by the user in the VR, AR, or MR environment. In general terms, the Video Stream 106 encodes spherical images, only a part of which (viewports) will be seen by the user in accordance with his/her position and movements. The Audio Stream 116 contains Audio data which participates in the Audio scene representation and is meant to be heard by a user. According to examples, the Audio Stream 116 may comprise Audio metadata 236 (which refer to the at least one Audio signal that is intended to participate in the Audio scene representation) and/or Earcon metadata 141 (which may describe Earcons to be reproduced only in some cases).
The system 100 is here represented as being at the client side 204. For simplicity, the media Video decoder 112 is not represented in Fig. 2.
In order to prepare the reproduction of the Earcon (or other Audio information messages), Earcon metadata 141 may be used. The Earcon metadata 141 may be understood as metadata (which may be encoded in an Audio Stream) which describe and provide attributes associated to the Earcon. Hence, the Earcon (if to be reproduced) may be based on the attributes of the Earcon metadata 141.
Advantageously, the metadata processor 132 may specifically be implemented for
processing the Earcon metadata 141. For example, the metadata processor 132 may control the reception, processing, manipulation, and/or the generation of the Earcon metadata 141. When processed, the Earcon metadata may be represented as modified Earcon metadata 234. For example, it is possible to manipulate the Earcon metadata for obtaining a particular effect, and/or for performing Audio processing operations, such as multiplexing or muxing, for adding the Earcon to the Audio signal to be represented in the Audio scene.
The metadata processor 132 may control the reception, processing, and/or manipulation of the Audio metadata 236 associated to the at least one Stream 116. When processed, the Audio metadata 236 may be represented as modified Audio metadata 238.
The modified metadata 234 and 238 may be provided to the media Audio decoder 112 (or a plurality of decoders in some examples) for the reproduction of the Audio scene 118b to the user.
In examples, there may be provided, as an optional component, a synthetic Audio generator and/or storing device 246. The generator may synthesize an Audio Stream (e.g., for generating an Earcon which is not encoded in a Stream). The storing device permits storing (e.g., in a cache memory) Earcon Streams (e.g., for future use) which have been generated by the generator and/or obtained in a received Audio Stream.
Hence, the ROI processor 120 may decide for the representation of an Earcon on the basis
of the user’s current viewport and/or position and/or head orientation and/or movement data
122. However, the ROI processor 120 may also base its decision on criteria which involve other aspects.
For example, the ROI processor may enable/disable the Earcon reproduction on the basis of other conditions, such as, for example, the user's selections or higher-layer selections, e.g., on the basis of the particular application that is intended to be consumed. For a Video game application, for example, Earcons or other Audio information messages may be avoided for high Videogame levels. This may be simply obtained, by the metadata processor, by disabling the Earcons in the Earcon metadata.
Further, it is possible to disable the Earcons on the basis of the state of the system: if, for example, the Earcon has already been reproduced, its repetition may be inhibited. A timer
may be used, for example, for avoiding too quick repetitions.
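A minimal C sketch of such state-based inhibition, combining the higher-layer disable and the repetition timer (all structures and thresholds are hypothetical):

    #include <stdbool.h>
    #include <time.h>

    /* Hypothetical per-earcon state used to inhibit reproduction. */
    typedef struct {
        bool   earcons_allowed;  /* e.g., false for high videogame levels */
        time_t last_played;      /* when this earcon was last reproduced  */
        double min_repeat_sec;   /* timer against too quick repetitions   */
    } EarconState;

    bool may_reproduce(const EarconState *s, time_t now) {
        if (!s->earcons_allowed)
            return false;                           /* disabled by higher layer */
        if (difftime(now, s->last_played) < s->min_repeat_sec)
            return false;                           /* repeated too quickly     */
        return true;
    }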
The ROI processor 120 may also request the controlled reproduction of a sequence of Earcons (e.g., the Earcons associated to all the ROIs in the scene), e.g., for instructing the user on the elements which he/she may see. The metadata processor 132 may control this operation.
The ROI processor 120 may also modify the Earcon position (i.e., the spatial location in the scene) or the Earcon type. For example, some users may prefer to have, as the Earcon, one specific sound played back at the exact location/position of the ROI, while other users may prefer to have the Earcon always played back at one fixed location (e.g., centre, or top position, "voice of God", etc.) as a vocal sound indicating the position where the ROI is located.
It is possible to modify the gain (e.g., to obtain a different volume) of the Earcon's reproduction. This decision may follow a user's selection, for example. Notably, on the basis of the ROI processor's decision, the metadata processor 132 will perform the gain modification by modifying, among the Earcon metadata associated to the Earcon, the particular attribute associated to the gain.
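A minimal C sketch of such a gain change, assuming a hypothetical in-memory mirror of the Earcon metadata attributes; only the metadata is rewritten, the earcon's audio samples are untouched:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical mirror of the gain-related Earcon metadata attributes. */
    typedef struct {
        bool    has_gain;  /* EarconHasGain             */
        uint8_t gain;      /* Earcon_gain (7-bit field) */
    } EarconMeta;

    void set_earcon_gain(EarconMeta *m, uint8_t new_gain) {
        m->has_gain = true;
        m->gain = new_gain & 0x7F;  /* keep the value within the 7-bit field */
    }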
The original designer of the VR, AR, MR environment may also be unaware of how the Earcons will actually be reproduced. For example, the user's selections may modify the final rendering of the Earcons. Such an operation may be controlled, for example, by the metadata processor 132, which may modify the Earcon metadata 141 on the basis of the ROI processor's decisions.
Thus, the operations performed on the Audio data associated to the Earcon are in principle independent of the at least one Audio Stream 116 used for representing the Audio scene and may be managed differently. The Earcons may even be generated independently of the Audio and Video Streams 106 and 116 which constitute the Audio and Video scene and may be produced by different and independent entrepreneurial groups.
Hence, the examples permit increasing user satisfaction. For example, a user may perform his/her own selections, e.g., by modifying the volume of the Audio information messages, by disabling the Audio information messages, and so on. Therefore, each user may have the experience more suited to his/her preference. Further, the obtained architecture is more flexible. The Audio information messages may be easily updated, for example, by modifying the metadata independently of the Audio Streams, and/or by modifying the Audio information message Streams independently of the metadata and of the main Audio Streams.
The obtained architecture is also compatible with legacy systems: legacy Audio information message Streams may be associated to new Audio information message metadata, for example. In case of absence of a suitable Audio information message Stream, in examples the latter may be easily synthesized (and, for example, stored for subsequent use).
The ROI processor may keep track of metrics associated to historical and/or statistical data associated to the reproduction of the Audio information message, so as to disable the Audio information message's reproduction if the metric is over a predetermined threshold (this may be used as a criterion).
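A minimal C sketch of such a metric (the counter and the threshold are hypothetical):

    #include <stdbool.h>

    /* Hypothetical reproduction statistics for one earcon. */
    typedef struct {
        unsigned times_played;  /* historical/statistical data */
        unsigned max_times;     /* predetermined threshold     */
    } EarconStats;

    bool metrics_allow(const EarconStats *st) {
        return st->times_played < st->max_times;
    }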
The ROI processor's decision may be based, as a criterion, on a prediction of the user's current viewport and/or position and/or head orientation and/or movement data 122 in relationship to the position of the ROI.
The ROI processor may be further configured to receive the at least one first Audio Stream 116 and, at the decision that the information message is to be reproduced, to request an Audio information message Stream from a remote entity.
The ROI processor and/or the metadata generator may be further configured to establish whether to reproduce two Audio information messages at the same time or whether to select a higher-priority Audio information message to be reproduced with priority with respect to a lower-priority Audio information message. In order to perform this decision, Audio information message metadata may be used. A priority may be, for example, obtained by the metadata processor 132 on the basis of the values in the Audio information message metadata.
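A minimal C sketch of such a priority-based selection, assuming a hypothetical numeric priority that the metadata processor has already derived from the metadata values (higher value meaning higher priority):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { int id; uint8_t priority; } PendingEarcon;

    /* Return the pending message to be reproduced first, or NULL if none. */
    const PendingEarcon *select_earcon(const PendingEarcon *p, size_t n) {
        const PendingEarcon *best = NULL;
        for (size_t i = 0; i < n; i++)
            if (best == NULL || p[i].priority > best->priority)
                best = &p[i];
        return best;
    }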
In some examples, the media encoder 240 may be configured to search, in a database, intranet, internet, and/or geographical network, for an additional Audio Stream and/or Audio information message metadata and, in case of retrieval, to deliver the additional Audio Stream and/or the Audio information message metadata. For example, the search may be performed at the request of the client side.
As explained above, a solution is here proposed for the efficient delivery of Earcon messages together with the Audio content. An optimised receiver behaviour is obtained for making use of the Audio information messages (e.g., Earcons) without affecting the user experience and the content consumption. This will lead to an increased Quality of Experience.
This can be achieved by using dedicated metadata and metadata manipulation mechanisms at system level for enabling or disabling the Audio information messages in the final Audio scenes. The metadata can be used together with any Audio codec and nicely complements the metadata of Next Generation Audio codecs (e.g., MPEG-H Audio metadata).
The delivery mechanisms can be various (e.g., Streaming over DASH/HLS, broadcast over DASH-ROUTE/MMT/MPEG-2 TS, file playback, etc.). In this application, DASH delivery is considered, but all concepts are valid for the other delivery options.
In most of the cases the Audio information messages will not overlap in the time domain, i.e., at a specific point in time only one ROI is defined. But, considering more advanced use cases, for example in an interactive environment where the user can change the content based on his selections/movements, there could also be use cases which require multiple ROIs. For this purpose, more than one Audio information message can be required at one moment in time. Therefore, a generic solution is described for supporting all the different use cases.
The delivery and processing of the Audio information messages should complement the existing delivery methods for Next Generation Audio.
One way of conveying multiple Audio information messages for several ROIs, which are independent in the time domain, is to mix all Audio information messages together into one Audio element (e.g., an Audio object) with associated metadata describing the spatial position of each Audio information message at different instances of time. Because the Audio information messages don't overlap in time, they can be independently addressed in the one, shared Audio element. This Audio element could contain silence (or no Audio data) in-between the Audio information messages, i.e., whenever there is no Audio information message. The following mechanisms may apply in this case:
• The common Audio information message Audio element can be delivered in the same elementary Stream (ES) with the Audio scene to which it relates, or it can be delivered in one auxiliary Stream (dependent or not dependent on the main Stream).
• If the Earcon Audio element is delivered in an auxiliary Stream dependent on the main Stream, the Client can request the additional Stream whenever a new ROI is present in the visual scene.
• The Client (e.g., the system 100) can, in examples, request the Stream in advance of the scene requiring the Earcon.
• The Client can, in examples, request the Stream based on the current viewport, i.e., if the current viewport matches the ROI, the Client can decide not to request the additional Earcon Stream.
• If the Earcon Audio element is delivered in an auxiliary Stream independent of the main Stream, the Client can request, as before, the additional Stream whenever a new ROI is present in the visual scene.
Additionally, the two (or more) Streams can be processed using two Media Decoders and a common Rendering/Mixing step for mixing the decoded Earcon Audio data into the final Audio scene. Alternatively, a Metadata Processor can be used for modifying the metadata of the two Streams and a "Stream Merger" for merging the two Streams. A possible implementation of such a Metadata Processor and Stream Merger is described in the following; a minimal sketch of the packet-merging step is also given below.
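A minimal C sketch of the packet-merging core of such a Stream Merger. The Packet type and the timestamp-ordered interleaving are assumptions; a real merger would also involve the metadata modification discussed above.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { uint64_t pts; const void *payload; size_t size; } Packet;

    /* Interleave the packets of the earcon stream with those of the main
     * audio stream in presentation-time order, yielding one stream that
     * can be fed to a single media decoder. Both inputs are assumed to be
     * sorted by pts; out must have room for n_main + n_ear packets. */
    size_t merge_streams(const Packet *main_s, size_t n_main,
                         const Packet *ear_s, size_t n_ear, Packet *out) {
        size_t i = 0, j = 0, k = 0;
        while (i < n_main && j < n_ear)
            out[k++] = (main_s[i].pts <= ear_s[j].pts) ? main_s[i++] : ear_s[j++];
        while (i < n_main) out[k++] = main_s[i++];
        while (j < n_ear)  out[k++] = ear_s[j++];
        return k;
    }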
In alternative examples, multiple Earcons for several ROIs, independent in the time domain or overlapping in the time domain, can be delivered in multiple Audio elements (e.g., Audio objects) and embedded either in one elementary Stream together with the main Audio scene or in multiple auxiliary Streams, e.g., each Earcon in one ES or a group of Earcons in one ES based on a shared property (e.g., all Earcons located on the left side share one Stream).
• If all the Earcon Audio elements are delivered in several auxiliary Streams dependent on the main Stream (e.g., one Earcon per Stream or a group of Earcons per Stream), the Client can, in examples, request one additional Stream, which contains the desired Earcon, whenever the ROI associated with that Earcon is present in the visual scene.
• The Client can, in examples, request the Stream with the Earcon in advance of the scene requiring that Earcon (e.g., on the basis of the movements of the user, the ROI processor 120 may perform the decision even if the ROI is not part of the scene yet).
• The Client can, in examples, request the Stream based on the current viewport; if the current viewport matches the ROI, the Client can decide not to request the additional Earcon Stream.
• If one Earcon Audio element (or a group of Earcons) is delivered in an auxiliary Stream independent of the main Stream, the Client can, in examples, request, as before, the additional Stream whenever a new ROI is present in the visual scene. Additionally, the two (or more) Streams can be processed using two Media Decoders and a common Rendering/Mixing step for mixing the decoded Earcon Audio data into the final Audio scene. Alternatively, a Metadata Processor can be used for modifying the metadata of the two Streams and a "Stream Merger" for merging the two Streams. A possible implementation of such a Metadata Processor and Stream Merger is described in the following.
Alternatively, one common (generic) Earcon can be used for signaling all the ROIs in one Audio scene. This can be achieved by using the same Audio content with different spatial information associated with the Audio content at different instances of time. In this case, the ROI processor 120 may request the metadata processor 132 to gather the Earcons associated to the ROIs in the scene and to control the reproduction of the Earcons in sequence (e.g., at a user's selection or at a higher-layer application request).
Alternatively, one Earcon can be transmitted only once and cached in the Client. The Client can re-use it for all the ROIs in one Audio scene with different spatial information associated with the Audio content at different instances of time.
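A minimal C sketch of this re-use, with hypothetical types: the audio content is cached once, and each ROI contributes only its spatial information at the instant it is signalled.

    #include <stddef.h>

    typedef struct { float azimuth, elevation, radius; } SpatialInfo;
    typedef struct { const float *pcm; size_t n_samples; } CachedEarcon;

    /* One playback request: the same cached audio content, paired with the
     * spatial information of the ROI to be signalled at this instant. */
    typedef struct { const CachedEarcon *audio; SpatialInfo position; } EarconPlayback;

    EarconPlayback earcon_for_roi(const CachedEarcon *cached, SpatialInfo roi_pos) {
        EarconPlayback p = { cached, roi_pos };
        return p;
    }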
Alternatively, the Earcon Audio content can be generated synthetically in the Client. Together with that, a Metadata Generator can be used for creating the necessary metadata for signaling the spatial information of the Earcon (a sketch of such synthesis is given below). For example, the Earcon Audio content can be:
• compressed and fed into one Media decoder together with the main Audio content and the new metadata;
• or mixed into the final Audio scene after the Media Decoder;
• or several Media Decoders can be used.
Alternatively, the Earcon Audio content can, in examples, be generated synthetically in the Client (e.g., under the control of the metadata processor 132), while the metadata describing the Earcon is already embedded in the Stream. Using specific signaling of the Earcon type in the encoder, the metadata can contain the spatial information of the Earcon and the specific signaling for a "Decoder-generated Earcon", but no Audio data for the Earcon.
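A minimal C sketch of such client-side synthesis. The sine-beep shape, the sample rate, and the linear fade are illustrative assumptions, not the document's method.

    #include <math.h>
    #include <stddef.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define SAMPLE_RATE 48000

    /* Fill buf with a short decaying sine beep usable as an earcon. */
    void synthesize_earcon(float *buf, size_t n_samples, double freq_hz) {
        for (size_t i = 0; i < n_samples; i++) {
            double t = (double)i / SAMPLE_RATE;
            double fade = 1.0 - (double)i / (double)n_samples;  /* linear decay */
            buf[i] = (float)(0.5 * fade * sin(2.0 * M_PI * freq_hz * t));
        }
    }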
6.3 Examples of metadata for Audio information messages (e.g., Earcons)
An example of Audio information message (Earcon) metadata 141, as described above, is provided here.
One structure for describing the Earcon properties and offering the possibility to easily adjust these values is:
Syntax                                                             No. of bits   Mnemonic
EarconInfo()
{
    numEarcons                                                     7             uimsbf
    for ( i=0; i< numEarcons; i++ ) {
        Earcon_isIndependent[i]; /* independent of the Audio Scene */
                                                                   1             uimsbf
        Earcon_id[i]; /* map to group_id */                        7             uimsbf
        EarconType[i]; /* natural vs synthetic sound; generic vs individual */
                                                                   4             uimsbf
        EarconActive[i]; /* default disabled */                    1             bslbf
        EarconPosition[i]; /* position change */                   1             bslbf
        if ( EarconPosition[i] ) {
            Earcon_azimuth[i];                                     8             uimsbf
            Earcon_elevation[i];                                   6             uimsbf
            Earcon_radius[i];                                      4             uimsbf
        }
        EarconHasGain; /* gain change */                           1             bslbf
        if ( EarconHasGain ) {
            Earcon_gain[i];                                        7             uimsbf
        }
        EarconHasTextLabel; /* text label */                       1             bslbf
        if ( EarconHasTextLabel ) {
            Earcon_numLanguages[i];                                4             uimsbf
            for ( n=0; n< Earcon_numLanguages[i]; n++ ) {
                Earcon_Language[i][n];                             24            uimsbf
                Earcon_TextDataLength[i][n];                       8             uimsbf
                for ( c=0; c< Earcon_TextDataLength[i][n]; c++ ) {
                    Earcon_TextData[i][n][c];                      8             uimsbf
                }
            }
        }
    }
}
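A minimal C sketch of a reader for the fields of EarconInfo(), following the table above. The BitReader type is an assumption; uimsbf/bslbf fields are read most-significant-bit first.

    #include <stddef.h>
    #include <stdint.h>

    typedef struct { const uint8_t *data; size_t bitpos; } BitReader;

    static uint32_t read_bits(BitReader *br, unsigned n) {  /* uimsbf/bslbf */
        uint32_t v = 0;
        while (n--) {
            v = (v << 1) | ((br->data[br->bitpos >> 3] >> (7 - (br->bitpos & 7))) & 1u);
            br->bitpos++;
        }
        return v;
    }

    void parse_earcon_info(BitReader *br) {
        uint32_t numEarcons = read_bits(br, 7);
        for (uint32_t i = 0; i < numEarcons; i++) {
            uint32_t isIndependent = read_bits(br, 1);
            uint32_t id            = read_bits(br, 7);
            uint32_t type          = read_bits(br, 4);
            uint32_t active        = read_bits(br, 1);
            if (read_bits(br, 1)) {                 /* EarconPosition     */
                uint32_t az  = read_bits(br, 8);    /* Earcon_azimuth     */
                uint32_t el  = read_bits(br, 6);    /* Earcon_elevation   */
                uint32_t rad = read_bits(br, 4);    /* Earcon_radius      */
                (void)az; (void)el; (void)rad;
            }
            if (read_bits(br, 1)) {                 /* EarconHasGain      */
                uint32_t gain = read_bits(br, 7);   /* Earcon_gain        */
                (void)gain;
            }
            if (read_bits(br, 1)) {                 /* EarconHasTextLabel */
                uint32_t numLang = read_bits(br, 4);
                for (uint32_t n = 0; n < numLang; n++) {
                    uint32_t lang = read_bits(br, 24);  /* Earcon_Language */
                    uint32_t len  = read_bits(br, 8);   /* TextDataLength  */
                    for (uint32_t c = 0; c < len; c++)
                        (void)read_bits(br, 8);         /* Earcon_TextData */
                    (void)lang;
                }
            }
            (void)isIndependent; (void)id; (void)type; (void)active;
        }
    }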
Each identifier in the table may be intended as being associated to an attribute of the Earcon metadata 141.
The semantics are discussed here:
numEarcons - This field specifies the number of Earcon Audio Elements available in the Stream.
Earcon_isIndependent - This flag defines if the Earcon Audio Element is independent from any Audio Scene. If Earcon_isIndependent == 1, the Earcon Audio Element is independent from the Audio Scene. If Earcon_isIndependent == 0, the Earcon Audio Element is part of the Audio Scene and the Earcon_id shall have the same value as the mae_groupID associated with the Audio Element.
EarconType - This field defines the type of the Earcon. The following table specifies the allowed values:
EarconType   description
0            undefined
1            natural sound
2            synthetic sound
3            spoken text
4            generic Earcon
5            /* reserved */
6            /* reserved */
7            /* reserved */
8            /* reserved */
9            /* reserved */
10           /* reserved */
11           /* reserved */
12           /* reserved */
13           /* reserved */
14           /* reserved */
15           other
EarconActive - This flag defines if the Earcon is active. If EarconActive == 1, the Earcon Audio element shall be decoded and rendered into the Audio scene.
EarconPosition - This flag defines if the Earcon has position information available. If Earcon_isIndependent == 0, this position information shall be used instead of the Audio object metadata specified in the dynamic_object_metadata() or intracoded_object_metadata_efficient() structures.
Earcon_azimuth - the absolute value of the azimuth angle.
Earcon_elevation - the absolute value of the elevation angle.
Earcon_radius - the absolute value of the radius.
EarconHasGain - This flag defines if the Earcon has a different gain value.
Earcon_gain - This field defines the absolute value for the gain for the Earcon.
EarconHasTextLabel - This flag defines if the Earcon has a text label associated.
Earcon_numLanguages - This field specifies the number of available languages for the description text label.
Earcon_Language - This 24-bit field identifies the language of the description text of an Earcon. It contains a 3-character code as specified by ISO 639-2. Both ISO 639-2/B and ISO 639-2/T may be used. Each character is coded into 8 bits according to ISO/IEC 8859-1 and inserted in order into the 24-bit field. EXAMPLE: French has the 3-character code "fre", which is coded as: "0110 0110 0111 0010 0110 0101".
Earcon_TextDataLength - This field defines the length of the following group description in the bitstream.
Earcon_TextData - This field contains a description of an Earcon, i.e., a string describing the content by a high-level description. The format shall follow UTF-8 according to ISO/IEC 10646.
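A minimal C sketch of the 24-bit packing just described; pack_language is a hypothetical helper.

    #include <stdint.h>

    /* Pack a 3-character ISO 639-2 code into the 24-bit Earcon_Language
     * field, one 8-bit ISO/IEC 8859-1 character at a time, in order. */
    uint32_t pack_language(const char code[3]) {
        return ((uint32_t)(uint8_t)code[0] << 16) |
               ((uint32_t)(uint8_t)code[1] << 8)  |
                (uint32_t)(uint8_t)code[2];
    }

    /* pack_language("fre") yields 0x667265,
     * i.e. 0110 0110 0111 0010 0110 0101 as in the example above. */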
One structure for identifying the Earcons on system level and associating them with existing viewports is also provided. The following two tables offer two ways of implementing such a structure that can be used in different implementations:
aligned(8) class EarconSample() extends SphereRegionSample {
    for (i = 0; i < num_regions; i++) {
        unsigned int(7) reserved;
        unsigned int(1) hasEarcon;
        if (hasEarcon == 1) {
            unsigned int(8) numRegionEarcons;
            for (n=0; n