This document collects use cases and requirements for improved support for timed events related to audio or video media on the web, where synchronization to a playing audio or video media stream is needed, and makes recommendations for new or changed web APIs to realize these requirements. The goal is to extend the existing support in HTML for text track cues to add support for dynamic content replacement cues and generic data cues that drive synchronized interactive media experiences, and improve the timing accuracy of rendering of web content intended to be synchronized with audio or video media playback.

The Media & Entertainment Interest Group may update these use cases and requirements over time. Development of new web APIs based on the requirements described here, for example, DataCue, will proceed in the Web Platform Incubator Community Group (WICG), with the goal of eventual standardization within a W3C Working Group. Contributors to this document are encouraged to participate in the WICG. Where the requirements described here affect the HTML specification, contributors will follow up with WHATWG. The Interest Group will continue to track these developments and provide input and review feedback on how any proposed API meets these requirements.

Introduction

There is a need in the media industry for an API to support arbitrary data associated with points in time or periods of time in a continuous media (audio or video) presentation. This data may include:

For the purpose of this document, we refer to these collectively as media timed events. These events carry information intended to be synchronized with the media stream, supporting use cases such as dynamic content replacement, ad insertion, presentation of supplemental content alongside the audio or video, or, more generally, making changes to a web page or executing application code at specific points on the media timeline of an audio or video media stream.

Media timed events may be carried either in-band, meaning that they are delivered within the audio or video media container or multiplexed with the media stream, or out-of-band, meaning that they are delivered externally to the media container or media stream.

This document describes use cases and requirements that go beyond the existing support for timed text, using TextTrack and related APIs.

Terminology

The following terms are used in this document:

The following terms are defined in [[HTML]]:

The following term is defined in [[HR-TIME]]:

The following term is defined in [[WEBVTT]]:

Use cases

Media timed events carry information that is related to points in time or periods of time on the media timeline, which can be used to trigger retrieval and/or rendering of web resources synchronized with media playback. Such resources can be used to enhance the user experience in the context of media that is being rendered. Examples include display of social media feeds corresponding to a live video stream such as a sporting event, banner advertisements for sponsored content, and accessibility-related assets such as large print rendering of captions.

The following sections describe a few use cases in more detail.

Dynamic content insertion

A media content provider wants to allow insertion of content, such as personalised video, local news, or advertisements, into a video media stream that contains the main program content. To achieve this, media timed events can be used to describe the points on the media timeline, known as splice points, where switching playback to inserted content is possible.

The Society for Cable and Television Engineers (SCTE) specification "Digital Program Insertion Cueing for Cable" [[SCTE35]] defines a data cue format for describing such insertion points. Use of these cues in MPEG-DASH and HLS streams is described in [[SCTE35]], sections 12.1 and 12.2.

This use case typically requires frame accuracy, so that inserted content is played at the right time, and continuous playback is maintained.

Audio stream with titles and images

A media content provider wants to provide visual information alongside an audio stream, such as an image of the artist and title of the current playing track, to give users live information about the content they are listening to.

HLS timed metadata [[HLS-TIMED-METADATA]] uses in-band ID3 metadata to carry the artist and title information, and image content. RadioVIS in DVB ([[DVB-DASH]], section 9.1.7) defines in-band event messages that contain image URLs and text messages to be displayed, with information about when the content should be displayed in relation to the media timeline.

The visual information should be rendered within a hundred milliseconds or so to maintain good synchronization with the audio content.

Control messages for media streaming clients

MPEG-DASH defines a number of control messages for media streaming clients (e.g., libraries such as dash.js). These messages are carried in-band in the media container files. Use cases include:

Reference: M&E IG call 1 Feb 2018: Minutes, [[DASH-EVENTING]].

Subtitle and caption rendering synchronization

A subtitle or caption author wants to ensure that subtitle changes are aligned as closely as possible to shot changes in the video. The BBC Subtitle Guidelines [[BBC-SUBTITLE]] describes authoring best practices. In particular, in section 6.1 authors are advised:

"[...] it is likely to be less tiring for the viewer if shot changes and subtitle changes occur at the same time. Many subtitles therefore start on the first frame of the shot and end on the last frame."

The NorDig technical specification for DVB receivers for the Nordic and Irish markets [[NORDIG]], section 7.3.1, mandates that receivers support TTML in MPEG-2 Transport Streams. The presentation timing precision for subtitles is specified as being within 2 frames.

Another important use case is maintaining synchronization of subtitles during program content with fast dialog. The BBC Subtitle Guidelines, section 5.1 says:

"Impaired viewers make use of visual cues from the faces of television speakers. Therefore subtitle appearance should coincide with speech onset. [...] When two or more people are speaking, it is particularly important to keep in sync. Subtitles for new speakers must, as far as possible, come up as the new speaker starts to speak. Whether this is possible will depend on the action on screen and rate of speech."

A very fast word rate, for example, 240 words per minute, corresponds on average to one word every 250 milliseconds.

Synchronized map animations

A user records footage with metadata, including geolocation, on a mobile video device, e.g., drone or dashcam, to share on the web alongside a map, e.g., OpenStreetMap.

[[WEBVMT]] is an open format for metadata cues, synchronized with a timed media file, that can be used to drive an online map rendered in a separate HTML element alongside the media element on the web page. The media playhead position controls presentation and animation of the map, e.g., pan and zoom, and allows annotations to be added and removed, e.g., markers, at specified times during media playback. Control can also be overridden by the user at any time with the usual interactive features of the map, e.g., zoom. The rendering of the map animation and annotations should usually be accurate to within a hundred milliseconds or so to maintain good synchronization with the video. However, a shot change which instantly moves to a different location would require the map to be updated simultaneously, ideally with frame accuracy.

Concrete examples are provided by the tech demos at the WebVMT website.

Media stream with video and synchronized graphics

A content provider wants to provide synchronized graphical elements that may be rendered next to or on top of a video.

For example, in a talk show this could be a banner, shown in the lower third of the video, that displays the name of the guest. In a sports event, the graphics could show the latest lap times or current score, or highlight the location of the current active player. It could even be a full-screen overlay, to blend from one part of the program to another.

The graphical elements are described in a stream or file containing media timed events for start and end time of each graphical element, similar to a subtitle stream or file. A graphic renderer takes this data as input and renders it on top of the video image according to the media timed events.

The purpose of rendering the graphical elements on the client device, rather than rendering them directly into the video image, is to allow the graphics to be optimized for the device's display parameters, such as aspect ratio and orientation. Another use case is adapting to user preferences, for localization or to improve accessibility.

This use case requires frame accurate synchronization of the content being rendered over the video.

Live event coverage

Media content providers often cover live events where the timing of particular segments, although often pre-scheduled, can be subject to last minute change, or may not be known ahead of time.

The media content provider uses media timed events together with their video stream to add metadata to annotate the start and (where known) end times of each of these segments. This metadata drives a user interface that allows users to see information about the current playing and upcoming segments.

Examples of the dynamic nature of the timing include:

Presentation of auxiliary content in live media

During a live media presentation, dynamic and unpredictable events may occur which cause temporary suspension of the media presentation. During that suspension interval, auxiliary content such as the presentation of UI controls and media files may be unavailable. Depending on whether and when the user engages with the UI controls, specific web resources may be rendered at defined times in a synchronized manner. For example, a multimedia A/V clip with subtitles corresponding to an advertisement, previously downloaded and cached by the UA, is played out.

Related industry specifications

This section describes existing media industry specifications and standards that specify carriage of media timed events, or otherwise provide requirements for web APIs related to the triggering of DOM events synchronized with the media timeline.

MPEG Common Media Application Format (CMAF)

MPEG Common Media Application Format (CMAF) [[MPEGCMAF]] is a media container format optimized for large scale delivery of a single encrypted, adaptable multimedia presentation to a wide range of devices and adaptive streaming methods, including HTTP Live Streaming [[RFC8216]] and MPEG-DASH [[MPEGDASH]]. It is based on the ISO BMFF [[ISOBMFF]] and supports the AVC, AAC, HEVC codecs, Common Encryption (CENC), and subtitles using IMSC1 and WebVTT. Its goal is to reduce media storage and delivery costs by using a single common media format across different client devices.

CMAF media may contain in-band media timed events in the form of Event Message (emsg) boxes in ISO BMFF files. emsg is specified in [[MPEGDASH]], section 5.10.3.3, and described in more detail in the following section of this document.

MPEG-DASH

MPEG-DASH is an adaptive bitrate streaming technique in which the audio and video media is partitioned into segments. The Media Presentation Description (MPD) is an XML document that contains metadata required by a DASH client to access the media segments and to provide the streaming service to the user. The media segments can use any codec, typically within a fragmented MP4 (ISO BMFF) container or MPEG-2 transport stream.

In MPEG-DASH, media timed events may be delivered either in-band or out-of-band:

An emsg event contains the following information, as specified in [[MPEGDASH]], section 5.10.3.3:

HTTP Live Streaming

HTTP Live Streaming (HLS) allows for delivery of timed metadata events, both in-band and out-of-band:

An EXT-X-DATERANGE tag contains the following information, as specified in [[RFC8216]], section 4.3.2.7:

For interoperability between HLS and CMAF, the Alliance for Open Media has published [[ID3-EMSG]], which specifies how to include ID3 metadata in emsg boxes.

HbbTV

HbbTV is an interactive TV application standard that supports both broadcast (DVB) media delivery, and internet streaming using MPEG-DASH. The HbbTV application environment is based on HTML and JavaScript. MPEG-DASH streaming is implemented natively by the user agent, rather than through a JavaScript web application using Media Source Extensions.

HbbTV includes support for emsg events ([[DVB-DASH]], section 9.1) and requires this be mapped to HTML5 DataCue ([[HBBTV]], section 9.3.2). The revision of HTML5 referenced by [[HBBTV]] is [[html51-20151008]]. This feature is included in user agents shipping in connected TVs across Europe from 2017.

The HbbTV device test suite includes test pages and streams that cover emsg support. HbbTV has a reference application and content for DASH+DRM which includes emsg support.

DASH Industry Forum APIs for Interactivity

The DASH-IF InterOp Working Group has an ongoing work item, DAInty, "DASH APIs for Interactivity", which aims to specify a set of APIs between the DASH client/player and interactivity-capable applications, for both web and native applications [[DASHIFIOP]]. The origin of this work is a related 3GPP work item on Service Interactivity [[3GPP-INTERACTIVITY]]. The objective is to provide service enablers for user engagement with auxiliary content and UIs on mobile devices during live or time-shifted viewing of streaming content delivered over 3GPP broadcast or unicast bearers, and the measurement and reporting of such interactive consumption.

Two APIs are being developed that are relevant to the scope of the present document:

Two modes for dispatching events are defined [[DASHIF-EVENTS]]. In on-receive mode, events are dispatched at the time the event arrives, and in on-start mode, events are dispatched at the given time on the media timeline. From the DASH client's perspective, the "arrival" of events may be either static or pre-provisioned, in the case of MPD events, or dynamic, in the case of in-band events carried in emsg boxes. The application registers with the DASH client to indicate which mode to use.

SCTE-35

The Society for Cable and Television Engineers (SCTE) has produced the SCTE-35 specification "Digital Program Insertion Cueing for Cable" [[SCTE35]], which defines a data cue format for describing insertion points, to support the dynamic content insertion use case.

[[SCTE214-1]] section 6.7 describes the carriage of SCTE-35 events as out-of-band events in an MPEG-DASH MPD document. [[SCTE214-2]] section 9 and [[SCTE214-3]] section 7.3 describe the carriage of SCTE-35 events as in-band events in MPEG-DASH using MPEG-2 transport streams and ISO BMFF respectively, using emsg.

[[RFC8216]] section 4.3.2.7.1 specifies how to map SCTE-35 events into HLS timed metadata, using the EXT-X-DATERANGE tag, with SCTE35-CMD, SCTE35-OUT, and SCTE35-IN attributes.

[[SCTE35]] section 9.1 describes the requirements for content splicing: "In order to give advance warning of the impending splice (a pre-roll function), the splice_insert() command could be sent multiple times before the splice point. For example, the splice_insert() command could be sent at 8, 5, 4 and 2 seconds prior to the packet containing the related splice point. In order to meet other splicing deadlines in the system, any message received with less than 4 seconds of advance notice may not create the desired result."

This places an implicit requirement on the user agent's handling of event synchronization related to insertion cues. The content originator may provide the cue as little as 2 seconds ahead of the insertion time, so the user agent should propagate the event data associated with the insertion cue to the application in considerably less than 2 seconds.

MPEG Carriage of Web Resources in ISO BMFF

MPEG Carriage of Web Resources in ISO BMFF [[iso23001-15]] specifies the use of the ISO BMFF container format for the storage and delivery of web content. The goal is to allow web resources (HTML, CSS, etc.) to be parsed from the storage and processed by a user agent at specific presentation times on the media timeline, and so be synchronized with other tracks within the container, such as audio, video, and subtitles.

The Media & Entertainment Interest Group is actively tracking this work and is open to discussing specific requirements for synchronized rendering of in-band delivered web resources, as development progresses.

WebVTT

[[WEBVTT]] is a W3C specification that provides a format for web video text tracks. A VTTCue is a text track cue, and may have attributes that affect rendering of the cue text on a web page. WebVTT metadata cues are text that is aligned to the media timeline. Web applications can use VTTCue to carry arbitrary data by serializing the data to a string format (JSON, for example) when creating the cue, and deserializing the data when the cue's onenter DOM event is fired.
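
For example, a web application might carry JSON data in a VTTCue along the following lines. This is a minimal sketch: the track label, cue timing, and payload are illustrative, and the page update logic is application-defined.

          // Create a hidden metadata text track on an existing video element.
          const video = document.querySelector('video');
          const track = video.addTextTrack('metadata', 'timed events');
          track.mode = 'hidden'; // cues fire events but are not rendered

          // Serialize the application data to JSON and attach it to a cue
          // covering 30.0 to 40.0 seconds on the media timeline (illustrative times).
          const data = { type: 'lap-time', driver: 'Driver A', time: '1:32.405' };
          const cue = new VTTCue(30.0, 40.0, JSON.stringify(data));

          cue.onenter = () => {
            const payload = JSON.parse(cue.text); // deserialize when the cue becomes active
            // ... update the page using the payload ...
          };
          cue.onexit = () => {
            // ... remove or hide the rendered content ...
          };

          track.addCue(cue);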

Web applications can also use VTTCue to trigger rendering of out-of-band delivered timed text cues, such as TTML or IMSC format captions.

Gap analysis

This section describes gaps in existing web platform capabilities needed to support the use cases and requirements described in this document. Where applicable, this section also describes how existing web platform features can be used as workarounds, and any associated limitations.

MPEG-DASH and ISO BMFF emsg events

The DataCue API has been previously discussed as a means to deliver in-band media timed event data to web applications, but this is not implemented in all of the main browser engines. It is included in the 18 October 2018 HTML 5.3 draft [[HTML53-20181018]], but is not included in [[HTML]]. See discussion here and notes on implementation status here.

WebKit supports a DataCue interface that extends HTML5 DataCue with two attributes to support non-text metadata, type and value.

          interface DataCue : TextTrackCue {
            attribute ArrayBuffer data; // Always empty

            // Proposed extensions.
            attribute any value;
            readonly attribute DOMString type;
          };
        

type is a string identifying the type of metadata:

WebKit DataCue metadata types
"com.apple.quicktime.udta" QuickTime User Data
"com.apple.quicktime.mdta" QuickTime Metadata
"com.apple.itunes" iTunes metadata
"org.mp4ra" MPEG-4 metadata
"org.id3" ID3 metadata

and value is an object with the metadata item key, data, and optionally a locale:

          value = {
            key: String
            data: String | Number | Array | ArrayBuffer | Object
            locale: String
          }
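
As an illustration, a web application running in WebKit might read ID3 timed metadata from an HLS stream along these lines. This is a sketch only: whether a metadata track is exposed, and the keys carried in each cue, depend on the stream and the user agent.

          const video = document.querySelector('video');

          // In-band timed metadata is exposed as a text track of kind 'metadata'.
          video.textTracks.addEventListener('addtrack', (event) => {
            const track = event.track;
            if (track.kind !== 'metadata') return;
            track.mode = 'hidden';

            track.addEventListener('cuechange', () => {
              for (let i = 0; i < track.activeCues.length; i++) {
                const cue = track.activeCues[i];
                if (cue.type === 'org.id3') {
                  // cue.value holds { key, data, locale } as described above.
                  console.log('ID3 frame', cue.value.key, cue.value.data);
                }
              }
            });
          });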
        

Neither [[MSE-BYTE-STREAM-FORMAT-ISOBMFF]] nor [[INBANDTRACKS]] describes handling of emsg boxes.

On resource constrained devices such as smart TVs and streaming sticks, parsing media segments to extract event information leads to a significant performance penalty, which can have an impact on UI rendering updates if this is done on the UI thread. There can also be an impact on the battery life of mobile devices. Given that the media segments will be parsed anyway by the user agent, parsing in JavaScript is an expensive overhead that could be avoided.

Avoiding parsing in JavaScript is also important for low latency video streaming applications, where minimizing the time taken to pass media content through to the media element's playback buffer is essential.

[[HBBTV]] section 9.3.2 describes a mapping between the emsg fields described above and the TextTrack and DataCue APIs. A TextTrack instance is created for each event stream signalled in the MPD document (as identified by the schemeIdUri and value), and the inBandMetadataTrackDispatchType TextTrack attribute contains the scheme_id_uri and value values. Because HbbTV devices include a native DASH client, parsing of the MPD document and creation of the TextTracks is done by the user agent, rather than by application JavaScript code.
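
For illustration, an HbbTV application might locate the TextTrack for a given event stream and handle its cues as follows. This is a sketch based on the mapping described above: the scheme identifier is illustrative, the exact format of inBandMetadataTrackDispatchType is defined by [[HBBTV]], and handleEvent is an application-defined function.

          const video = document.querySelector('video');

          // Find the metadata track whose dispatch type matches the event
          // stream's scheme_id_uri (the matching shown here is simplified).
          function findEventTrack(schemeIdUri) {
            for (let i = 0; i < video.textTracks.length; i++) {
              const track = video.textTracks[i];
              if (track.kind === 'metadata' &&
                  track.inBandMetadataTrackDispatchType.indexOf(schemeIdUri) === 0) {
                return track;
              }
            }
            return null;
          }

          const track = findEventTrack('urn:example:event:2020'); // illustrative scheme_id_uri
          if (track) {
            track.mode = 'hidden';
            track.oncuechange = () => {
              // Note: short-duration cues can be missed with this approach, as
              // discussed under "Using cues to track progress on the media timeline".
              for (let i = 0; i < track.activeCues.length; i++) {
                handleEvent(track.activeCues[i]);
              }
            };
          }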

TextTrackCues with unbounded duration

It is not currently possible to create a TextTrackCue that extends from a given start time to the end of a live media stream. If the stream duration is known, the content author can set the cue's endTime equal to the media duration. However, for live media streams, where the duration is unbounded, it would be useful to allow content authors to specify that the TextTrackCue duration is also unbounded, e.g., by allowing the endTime to be set to Infinity. This would be consistent with the media element's duration property, which can be Infinity for unbounded streams.
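
For example, if unbounded durations were supported, a content author could write something along these lines (illustrative only; passing Infinity as the end time is the proposed behaviour and is not accepted by current implementations):

          // Illustrative sketch of the proposed behaviour; not currently supported.
          const video = document.querySelector('video');
          const track = video.addTextTrack('metadata');
          const cue = new VTTCue(120.0, Infinity, JSON.stringify({ segment: 'second-half' }));
          track.addCue(cue); // cue remains active from 120 s until the stream ends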

Synchronized rendering of web resources

In browsers, non-media web rendering is handled through repaint operations at a rate that generally matches the display refresh rate (e.g., 60 times per second), following the user's wall clock. A web application can schedule actions and render web content at specific points on the user's wall clock, notably through Performance.now(), setTimeout(), setInterval(), and requestAnimationFrame().
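
For instance, a web application might align a page update with a wall-clock target time as follows (a minimal sketch; the target offset and the updated element are illustrative):

          // Aim to update the page roughly one second from now, on the wall clock.
          const targetTime = performance.now() + 1000;

          function onFrame(frameTime) {
            if (frameTime >= targetTime) {
              // The update appears on this repaint, so accuracy is bounded by
              // the display refresh interval (e.g., ~16.7 ms at 60 Hz).
              document.getElementById('overlay').textContent = 'Updated';
            } else {
              requestAnimationFrame(onFrame);
            }
          }
          requestAnimationFrame(onFrame);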

In most cases, media rendering follows a different path, be it because it gets handled by a dedicated background process or by dedicated hardware circuitry. As a result, progress along the media timeline may follow a clock different from the user's wall clock. [[HTML]] recommends that the media clock approximate the user's wall clock but does not require it to match the user's wall clock.

To synchronize rendering of web content to a video with frame accuracy, a web application needs:

The following sub-sections discuss mechanisms currently available to web applications to track progress on the media timeline and render content at frame boundaries.

Using cues to track progress on the media timeline

Cues (e.g., TextTrackCue and VTTCue) are units of time-sensitive data on a media timeline [[HTML]]. The time marches on steps in [[HTML]] control the firing of cue DOM events during media playback. Time marches on is specified to run "when the current playback position of a media element changes" but how often this should happen is unspecified. In practice it has been found that the timing varies between browser implementations, in some cases with a delay up to 250 milliseconds (which corresponds to the lowest rate at which timeupdate events are expected to be fired).

There are two methods a web application can use to handle cues:

  • Add an oncuechange handler function to the TextTrack and inspect the track's activeCues list. Because activeCues contains the list of cues that are active at the time that time marches on is run, a web application using this method can miss cues whose start and end times both fall between successive executions of time marches on during media playback. This may occur if the cues have a short duration, or if a long-running event handler function delays the next execution of time marches on.
  • Add onenter and onexit handler functions to each cue. The time marches on steps guarantee that enter and exit events will be fired for all cues, including those that appear on the media timeline between successive executions of time marches on during media playback. The timing accuracy of these events varies between browser implementations, as the firing of the events is controlled by the rate of execution of time marches on.

An issue with handling of text track and data cue events in HbbTV was reported in 2013. HbbTV requires the user agent to implement an MPEG-DASH client, and so applications must use the first of the above methods for cue handling, which means that applications can miss cues as described above. A similar issue has been filed against the HTML specification.

Using timeupdate events from the media element

Another approach to synchronizing rendering of web content to media playback is to use the timeupdate DOM event, and for the web application to manage the media timed event data to be triggered, rather than use the text track cue APIs in [[HTML]]. This approach has the same synchronization limitations as described above due to the 250 millisecond update rate specified in time marches on, and so is explicitly discouraged in [[HTML]]. In addition, the timing variability of timeupdate events between browser engines makes them unreliable for the purpose of synchronized rendering of web content.

Polling the current position on the media timeline

Synchronization accuracy can be improved by polling the media element's currentTime property from a setInterval() callback, or by using requestAnimationFrame() for greater accuracy. This technique can be useful where content should be animated smoothly in synchronization with the media, for example, rendering a playhead position marker in an audio waveform visualization, or displaying web content at specific points on the media timeline. However, the use of setInterval() or requestAnimationFrame() for media synchronized rendering is CPU intensive.
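
For example, a web application might poll currentTime from a requestAnimationFrame() callback to trigger its own media timed events (a minimal sketch; the event list and the showBanner and hideBanner functions are illustrative, application-defined items):

          const video = document.querySelector('video');

          // Application-managed media timed events (illustrative).
          const events = [
            { time: 12.0, fired: false, action: () => showBanner() },
            { time: 45.5, fired: false, action: () => hideBanner() },
          ];

          function poll() {
            const now = video.currentTime;
            for (const e of events) {
              if (!e.fired && now >= e.time) {
                e.fired = true;
                e.action();
              }
            }
            requestAnimationFrame(poll); // re-check on every repaint
          }
          requestAnimationFrame(poll);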

Detecting when the next media frame will be rendered

[[HTML]] does not expose any precise mechanism to assess the time, from a user's wall clock perspective, at which a particular media frame is going to be rendered. A web application can only estimate this by reading the media element's currentTime property to infer the frame being rendered and the time at which the user will see the next frame. This has several limitations:

  • currentTime is represented as a double value, which does not allow individual frames to be identified reliably, due to rounding errors. This is a known issue.
  • currentTime is updated at a user-agent defined rate (typically the rate at which time marches on runs), and is kept stable while scripts are running. When a web application reads currentTime, it cannot tell when this property was last updated, and thus cannot reliably assess whether this property still represents the frame currently being rendered.

Recommendations

This section describes recommendations from the Media & Entertainment Interest Group for the development of a generic media timed event API, and associated synchronization considerations.

Subscribing to receive media timed event cues

The API should allow web applications to subscribe to receive specific types of media timed event cue. For example, to support MPEG-DASH emsg and MPD events, the cue type is identified by a combination of the scheme_id_uri and (optional) value. The purpose of this is to make receiving cues of each type opt-in from the application's point of view. The user agent should deliver only those cues to a web application for which the application has subscribed. The API should also allow web applications to unsubscribe from specific cue types.
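
Purely to illustrate this requirement, and not as a proposed interface design, such a subscription might look something like the following, where the method name is hypothetical:

          // Hypothetical API, shown only to illustrate the subscription requirement.
          video.addEventCueListener('urn:mpeg:dash:event:2012', '1', (cue) => {
            // Only cues matching the subscribed scheme_id_uri and value are delivered.
            handleMpdValidityExpiration(cue); // application-defined (illustrative)
          });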

Out-of-band events

To be able to handle out-of-band media timed event cues, including MPEG-DASH MPD events, the API should allow web applications to create and add timed data cues to the media timeline, to be triggered by the user agent. The API should allow the web application to provide all necessary parameters to define the cue, including start and end times, cue type identifier, and data payload. The payload should be any data type (e.g., the set of types supported by the WebKit DataCue).

Event triggering

For those events that the application has subscribed to receive, the API should:

The API should guarantee that no media timed event cues can be missed during linear playback of the media.

In-band media timed event processing

We recommend updating [[INBANDTRACKS]] to describe handling of in-band media timed events supported on the web platform, possibly following a registry approach with one specification per media format that describes the details of how media timed events are carried in that format.

MPEG-DASH events

We recommend that browser engines support MPEG-DASH emsg in-band events and MPD out-of-band events, as part of their support for the MPEG Common Media Application Format (CMAF) [[MPEGCMAF]].

Cues with unbounded duration

To support cues with unknown end time, where the cue is active from its start time to the end of the media stream, we recommend that the TextTrackCue interface be modified to allow the cue duration to be unbounded.

Updating media timed events

We recommend that the API allows media timed event information to be updated, such as an event's position on the media timeline, and its data payload. Where the media timed event is updated by the user agent, such as for in-band events, we recommend that the API allows the web application to be notified of any changes.

Synchronization

In order to achieve greater synchronization accuracy between media playback and web content rendered by an application, the time marches on steps in [[HTML]] should be modified to allow delivery of cue onenter and onexit DOM events within 20 milliseconds of their positions on the media timeline.

Additionally, to allow such synchronization to happen at frame boundaries, we recommend introducing a mechanism that would allow a web application to accurately predict, using the user's wall clock, when the next frame will be rendered (e.g., as done in the Web Audio API).
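
For reference, the Web Audio API exposes such a mapping through AudioContext.getOutputTimestamp(), which relates the audio context's media time to the user's wall clock (a minimal sketch):

          const audioCtx = new AudioContext();

          // contextTime is the context time of the audio currently reaching the output
          // device; performanceTime is the corresponding performance.now() value.
          const { contextTime, performanceTime } = audioCtx.getOutputTimestamp();

          // Predict (approximately) when a given context time will be heard,
          // expressed on the performance.now() timeline.
          function wallClockTimeFor(contextTimeTarget) {
            return performanceTime + (contextTimeTarget - contextTime) * 1000;
          }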

Acknowledgments

Thanks to François Daoust, Charles Lo, Nigel Megitt, Jon Piesing, Rob Smith, Peter tho Pesch, and Mark Vickers for their contributions and feedback on this document.