Well-deployed technologies

Audio/Video playback

HTML5 adds two tags that dramatically improve the integration of multimedia content on the Web: the <video> and <audio> tags. These tags embed video and audio content respectively, and let Web developers interact with that content far more freely than they could through plug-ins. They make multimedia content a first-class citizen of the Web, the same way images have been for the past 20 years.
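
A minimal sketch of script-driven playback, assuming a placeholder "movie.mp4" URL:

```js
// Create and control a video element entirely from script,
// without any plug-in. "movie.mp4" is a placeholder URL.
const video = document.createElement('video');
video.src = 'movie.mp4';
video.controls = true;   // expose the browser's built-in playback UI
document.body.appendChild(video);

// Playback can be driven programmatically; play() returns a promise.
video.play().catch(err => console.error('Playback failed', err));
```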

Generation of media content

Playback content can be streamed, augmented and completed via Media Source Extensions (MSE), which lets developers buffer and generate media content in JavaScript, allowing Web application developers to create libraries that handle adaptive streaming formats and protocols.
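
A minimal MSE sketch, assuming a hypothetical "/segments/0.mp4" URL and an illustrative codec string:

```js
// Feed a <video> element from JavaScript-generated buffers via MSE.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  // The MIME type and codec string must match the actual content.
  const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
  const segment = await fetch('/segments/0.mp4');
  buffer.appendBuffer(await segment.arrayBuffer());
});
```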

Protected content playback

For the distribution of media that needs protection against copying, Encrypted Media Extensions (EME) enables Web applications to render encrypted media streams based on Content Decryption Modules (CDM).
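
A hedged EME sketch using the Clear Key system; the configuration values are illustrative, and a real application would also handle "encrypted" events and license exchanges:

```js
// Associate a MediaKeys object with a video element via EME.
const config = [{
  initDataTypes: ['cenc'],
  videoCapabilities: [{ contentType: 'video/mp4; codecs="avc1.42E01E"' }]
}];

navigator.requestMediaKeySystemAccess('org.w3.clearkey', config)
  .then(access => access.createMediaKeys())
  .then(mediaKeys => document.querySelector('video').setMediaKeys(mediaKeys))
  // Next steps: listen for "encrypted" events, create MediaKeySessions,
  // and exchange license messages with a license server.
  .catch(err => console.error('Key system not supported', err));
```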

Capturing audio/video

While the new HTML5 tags make it possible to play multimedia content, HTML Media Capture defines a markup-based mechanism to access multimedia content captured with attached cameras and microphones, a very common feature on mobile devices. Direct manipulation of streams from cameras and microphones is possible through the Media Capture and Streams API.
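
A minimal sketch using the Media Capture and Streams API (the markup-based HTML Media Capture counterpart is a simple file input with a capture attribute):

```js
// Request camera and microphone access and preview the live stream.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(stream => {
    const preview = document.querySelector('video');
    preview.srcObject = stream;   // attach the stream directly, no URL needed
    preview.play();
  })
  .catch(err => console.error('Capture was denied or failed', err));
```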

Image/Video editing

The Canvas 2D Context API enables modifying images, which in turn opens up the possibility of video editing, thus bringing multimedia manipulation capabilities to the Web platform.
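
A minimal sketch of frame-by-frame video editing with the 2D context; the grayscale filter is just an illustrative pixel operation:

```js
// Copy each video frame to a canvas and edit its pixels in place.
const video = document.querySelector('video');
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

function processFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  for (let i = 0; i < frame.data.length; i += 4) {
    const gray = (frame.data[i] + frame.data[i + 1] + frame.data[i + 2]) / 3;
    frame.data[i] = frame.data[i + 1] = frame.data[i + 2] = gray;
  }
  ctx.putImageData(frame, 0, 0);
  requestAnimationFrame(processFrame);   // keep processing while playing
}
video.addEventListener('play', () => requestAnimationFrame(processFrame));
```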

Feature | Specification / Group | Maturity
Audio/Video playback | video element in HTML Standard (WHATWG) | Living Standard
Audio/Video playback | audio element in HTML Standard (WHATWG) | Living Standard
Generation of media content | Media Source Extensions™ (Media Working Group) | Working Draft
Protected content playback | Encrypted Media Extensions (Media Working Group) | Recommendation
Capturing audio/video | HTML Media Capture (Devices and Sensors Working Group) | Recommendation
Capturing audio/video | Media Capture and Streams (WebRTC Working Group) | Candidate Recommendation
Image/Video editing | The 2D rendering context in HTML Standard (WHATWG) | Living Standard

Technologies in progress

Audio playback

Beyond the declarative approach enabled by the <audio> element, the Web Audio API provides a full-fledged audio processing API, which includes support for low-latency playback of audio content.
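
A minimal Web Audio sketch, assuming a placeholder "/click.mp3" sample:

```js
// Decode an audio file and schedule low-latency playback through
// the Web Audio processing graph.
const audioCtx = new AudioContext();

async function playSample() {
  const response = await fetch('/click.mp3');
  const buffer = await audioCtx.decodeAudioData(await response.arrayBuffer());
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);   // wire the source into the graph
  source.start();                         // sample-accurate scheduling
}
```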

Distributed rendering

As users own more and more connected devices, the need to get these devices to work together grows as well:

  • The Presentation API offers the possibility for a Web page to open and control a page located on another screen from a mobile device, paving the way for multi-screen Web applications (hedged sketches of these APIs follow this list).
  • The Remote Playback API focuses more specifically on controlling the rendering of media on a separate device.
  • The Open Screen Protocol is a suite of network protocols that allow controlling and receiving devices to implement the Presentation API and Remote Playback API in an interoperable fashion.
  • The Picture-in-Picture specification allows applications to initiate and control the rendering of a video in a separate miniature window that is viewable above all other activities.
  • The Audio Output Devices API offers similar functionality for audio streams, enabling a Web application to pick which audio output device a given sound should be played on.
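
Hedged sketches of the APIs above; the URL, element references and device id are illustrative:

```js
// Presentation API: open and control a page on a second screen.
const request = new PresentationRequest(['https://example.com/player.html']);
request.start().then(connection => {
  connection.onconnect = () =>
    connection.send(JSON.stringify({ command: 'play' }));
});

// Remote Playback API: prompt the user to pick a remote playback device.
const video = document.querySelector('video');
video.remote.prompt().catch(err => console.error('No remote device', err));

// Picture-in-Picture: float the video above all other windows.
video.requestPictureInPicture().catch(err => console.error(err));

// Audio Output Devices API: route the element's audio to a chosen
// output ("deviceId" would come from enumerateDevices()).
// video.setSinkId(deviceId);
```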

Capabilities and quality

Mobile devices have widely heterogeneous decoding (and encoding) capabilities. To improve the user experience and take advantage of advanced device capabilities when they are available, media providers need to know, for instance, whether the user's device can decode a particular codec at a given resolution, bitrate and framerate. Will the playback be smooth and power efficient? Can the display render HDR and wide color gamut content? The Media Capabilities specification defines an API to expose that information, with a view to replacing the more basic and vague isTypeSupported() and canPlayType() functions defined in HTML.
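
A minimal Media Capabilities sketch; the codec string, resolution, bitrate and framerate are illustrative values:

```js
// Ask whether the device can decode this stream, and how well.
navigator.mediaCapabilities.decodingInfo({
  type: 'media-source',
  video: {
    contentType: 'video/webm; codecs="vp9"',
    width: 1920,
    height: 1080,
    bitrate: 2000000,   // bits per second
    framerate: 30
  }
}).then(result => {
  console.log(result.supported, result.smooth, result.powerEfficient);
});
```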

Media providers also need some mechanism to assess the user's perceived playback quality to alter the quality of content transmitted using adaptive streaming. The Media Playback Quality specification, initially part of Media Source Extensions, exposes metrics on the number of frames that were displayed or dropped.
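
A minimal sketch of reacting to these metrics; the 10% threshold and the bitrate-switching hook are illustrative:

```js
// Use dropped-frame metrics to drive an adaptive streaming decision.
const video = document.querySelector('video');
const quality = video.getVideoPlaybackQuality();
const droppedRatio = quality.droppedVideoFrames / quality.totalVideoFrames;
if (droppedRatio > 0.1) {
  // Hypothetical application hook: request a lower-bitrate variant.
  // switchToLowerBitrate();
}
```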

Media focus

Mobile devices often expose shortcuts to control the audio output of a main application (e.g. a music player) from the lock screen or the notification area. The underlying operating system is in charge of determining which of these applications should have the media focus. The Media Session specification exposes these changes of focus to Web applications.
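
A minimal Media Session sketch; the metadata values are illustrative:

```js
// Expose track metadata and react to lock-screen/notification controls.
const audio = document.querySelector('audio');

navigator.mediaSession.metadata = new MediaMetadata({
  title: 'Track title',
  artist: 'Artist name',
  artwork: [{ src: '/cover.png', sizes: '512x512', type: 'image/png' }]
});

navigator.mediaSession.setActionHandler('play', () => audio.play());
navigator.mediaSession.setActionHandler('pause', () => audio.pause());
```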

Autoplay

To preserve bandwidth, memory and battery on mobile, and prevent possibly unwanted media playback, browsers have put autoplay policies into place and may deny automated playback of media content. The Autoplay Policy Detection specification is an early proposal to let applications know whether autoplay will succeed for a given media element.
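
A hedged sketch: getAutoplayPolicy() follows the early proposal and may not be available, so the established fallback of catching the play() rejection is kept:

```js
const video = document.querySelector('video');

// Proposed API: query the policy before attempting playback.
if ('getAutoplayPolicy' in navigator &&
    navigator.getAutoplayPolicy('mediaelement') !== 'allowed') {
  video.muted = true;   // muted autoplay is commonly permitted
}

// Established pattern: play() returns a promise that rejects when
// autoplay is denied.
video.play().catch(() => {
  // Wait for a user gesture before retrying playback.
});
```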

Rendering in VR/AR headsets

The WebXR Device API is a low-level API that allows applications to access and control head-mounted displays (HMDs) using JavaScript, and to create compelling Virtual Reality (VR) / Augmented Reality (AR) experiences. It is a critical enabler for rendering 360° video content in Virtual Reality headsets and on mobile devices used as such. A few modules extending the core specification are also being developed, including the Augmented Reality module and the Gamepads module.
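
A hedged WebXR sketch; a real application would also set up a WebGL layer and render one view per eye in the frame callback:

```js
// Enter an immersive VR session when the device supports it.
async function enterVR() {
  if (!navigator.xr) return;
  if (await navigator.xr.isSessionSupported('immersive-vr')) {
    const session = await navigator.xr.requestSession('immersive-vr');
    session.requestAnimationFrame((time, frame) => {
      // Query the viewer pose from "frame" and draw each view.
    });
  }
}
```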

Capturing audio/video

The Web Real-Time Communications Working Group is building an API to record streams from cameras and microphones into files, and another API to take photos programmatically using access to cameras.
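
Hedged sketches of the two APIs, both operating on a stream obtained from getUserMedia():

```js
async function captureAndRecord() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });

  // MediaStream Recording: accumulate encoded chunks into a file.
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = event => chunks.push(event.data);
  recorder.start();

  // MediaStream Image Capture: take a still photo from the video track.
  const [track] = stream.getVideoTracks();
  const photoBlob = await new ImageCapture(track).takePhoto();
  return photoBlob;
}
```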

P2P and audio/video streams

The Web Real-Time Communications Working Group is the host of specifications for a wider set of communication opportunities:

  • Peer-to-peer connection across devices,
  • Content Hints allowing Web applications to advertise the type of media content that is being consumed (e.g. speech or music, movie or screencast) so that user agents may optimize encoding or processing parameters,
  • Scalable Video Coding (SVC) allowing Web applications to configure encoding parameters to leverage SVC (whereby subset video streams can be derived from the larger video stream by dropping packets to reduce bandwidth consumption), making it easier to provide video at different qualities to multiple destinations from the same initial video stream,
  • P2P audio and video streams allowing for real-time communications between users (a minimal connection sketch follows this list).
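
A minimal one-sided sketch of a peer connection with a content hint; sendToRemotePeer() stands for a hypothetical application-level signaling channel:

```js
const pc = new RTCPeerConnection();

async function startCall() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  for (const track of stream.getTracks()) {
    if (track.kind === 'audio') track.contentHint = 'speech';   // Content Hints
    pc.addTrack(track, stream);
  }
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToRemotePeer(offer);   // deliver through your own signaling channel
}

pc.onicecandidate = e => { if (e.candidate) sendToRemotePeer(e.candidate); };
pc.ontrack = e => { document.querySelector('video').srcObject = e.streams[0]; };
```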
Feature | Specification / Group | Maturity
Audio playback | Web Audio API (Audio Working Group) | Recommendation
Distributed rendering | Presentation API (Second Screen Working Group) | Candidate Recommendation
Distributed rendering | Remote Playback API (Second Screen Working Group) | Candidate Recommendation
Distributed rendering | Open Screen Protocol (Second Screen Working Group) | Working Draft
Distributed rendering | Picture-in-Picture (Media Working Group) | Working Draft
Distributed rendering | Audio Output Devices API (WebRTC Working Group) | Candidate Recommendation
Capabilities and quality | Media Capabilities (Media Working Group) | Working Draft
Capabilities and quality | Media Playback Quality (Media Working Group) | Editor's Draft
Media focus | Media Session Standard (Media Working Group) | Working Draft
Autoplay | Autoplay Policy Detection (Media Working Group) | Editor's Draft
Rendering in VR/AR headsets | WebXR Device API (Immersive Web Working Group) | Candidate Recommendation
Rendering in VR/AR headsets | WebXR Augmented Reality Module - Level 1 (Immersive Web Working Group) | Candidate Recommendation
Rendering in VR/AR headsets | WebXR Gamepads Module - Level 1 (Immersive Web Working Group) | Working Draft
Capturing audio/video | MediaStream Recording (WebRTC Working Group) | Working Draft
Capturing audio/video | MediaStream Image Capture (WebRTC Working Group) | Working Draft
P2P and audio/video streams | WebRTC 1.0: Real-Time Communication Between Browsers (WebRTC Working Group) | Recommendation
P2P and audio/video streams | MediaStreamTrack Content Hints (WebRTC Working Group) | Working Draft
P2P and audio/video streams | Scalable Video Coding (SVC) Extension for WebRTC (WebRTC Working Group) | Working Draft

Exploratory work

Distributed rendering

The Multi-Device Timing Community Group is exploring another aspect of multi-device media rendering: its Timing Object specification makes it possible to keep video, audio and other data streams closely synchronized across devices, independently of the network topology. This effort needs support from interested parties to progress.

Rendering in different color spaces

New mobile screens can render content in high resolution, using color spaces broader than the classical sRGB. To adapt to wide-gamut displays, all the graphical systems of the Web will need to support these broader color spaces. CSS Color Module Level 4 proposes to define CSS colors in color spaces beyond sRGB. Similarly, work on making canvas color-managed should enhance the support for colors in HTML Canvas.
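
A hedged sketch of what color-managed canvas contents could look like, based on the colorSpace context option being explored; support varies across browsers and the parameter may be ignored:

```js
// Request a Display P3 backing store for the 2D context.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });

// CSS Color Level 4 similarly proposes color() notation for colors
// outside the sRGB gamut.
ctx.fillStyle = 'color(display-p3 1 0 0)';
ctx.fillRect(0, 0, canvas.width, canvas.height);
```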

More generally, the High Dynamic Range and Wide Gamut Color on the Web note, developed by the Color on the Web Community Group, analyzes gaps and candidate next steps for enabling support for High Dynamic Range (HDR) and Wide Color Gamut (WCG) on the Web, such as mechanisms to allow color and luminance matching between HDR video content and surrounding or overlaid graphic and textual content in Web pages.

Video processing

The WebCodecs proposal provides efficient, low-level access to built-in (software and hardware) media encoders and decoders, to better support specific encoding/decoding scenarios, such as peer-to-peer audio/video conferencing, low-latency game streaming or client-side media effects and transcoding, without having to rely on custom JavaScript or WebAssembly codec implementations that are more costly in terms of CPU, memory, battery and bandwidth usage.
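
A hedged sketch of the WebCodecs proposal; the codec string is illustrative and the encoded chunks would come from application-level demuxing:

```js
// Decode encoded video chunks directly, without a <video> element.
const decoder = new VideoDecoder({
  output: frame => {
    // Draw or process the decoded VideoFrame, then release its memory.
    frame.close();
  },
  error: err => console.error(err)
});

decoder.configure({ codec: 'vp8' });
// "data" would be demuxed bytes supplied by the application:
// decoder.decode(new EncodedVideoChunk({ type: 'key', timestamp: 0, data }));
```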

Video processing using the Canvas API is very CPU-intensive. Beyond traditional video processing, modern GPUs often provide advanced vision processing capabilities (e.g. face and object recognition) that would have direct applicability, e.g. in augmented reality applications. The Shape Detection API is exploring this space.
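
A hedged Shape Detection sketch; FaceDetector is only exposed in experimental implementations:

```js
// Detect faces in an image element using the proposed API.
async function findFaces(image) {
  if (!('FaceDetector' in window)) return [];
  const detector = new FaceDetector({ maxDetectedFaces: 5 });
  const faces = await detector.detect(image);
  faces.forEach(face => console.log('Face at', face.boundingBox));
  return faces;
}
```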

Audio playback

Even with the introduction of audio worklets, low-level audio processing remains confined by the boundaries of the Web Audio API's graph rendering mechanism. The Audio Device Client proposal, which functions as an intermediate layer between the Web Audio API and the actual audio devices used by the browser, provides closer access to audio hardware with configurable parameters such as sample rate, callback buffer size and channel count, while allowing processing to take place in a dedicated thread.

Feature | Specification / Group
Distributed rendering | Timing Object (Multi-Device Timing Community Group)
Rendering in different color spaces | profiled device-dependent colors in CSS Color Module Level 4 (CSS Working Group)
Rendering in different color spaces | Color managing canvas contents
Rendering in different color spaces | High Dynamic Range and Wide Gamut Color on the Web (Color on the Web Community Group)
Video processing | WebCodecs (Web Platform Incubator Community Group)
Video processing | Accelerated Shape Detection in Images (Web Platform Incubator Community Group)
Audio playback | Audio Device Client (Audio Community Group)

Features not covered by ongoing work

Native support for 360° video rendering
While it is already possible to render 360° videos within a <video> element, integrated support for the rendering of 360° videos would hide the complexity of the underlying adaptive streaming logic from applications, letting Web browsers optimize streaming and rendering on their own.
Hardware-accelerated video processing
The Canvas API provides capabilities for image and video processing, but these capabilities are limited by their reliance on the CPU for execution; modern GPUs provide hardware acceleration for a wide range of operations, but browsers do not expose hooks into them. The GPU for the Web Community Group is discussing solutions to expose GPU computation functionality to Web applications, which could eventually allow Web applications to process video streams efficiently, taking advantage of the GPU's power.

Discontinued features

Network service discovery
The Network Service Discovery API was to offer a lower-level approach to the establishment of multi-device operations, by providing integration with local network-based media renderers, such as those enabled by DLNA, UPnP, etc. This effort was discontinued due to privacy concerns and lack of interest from implementers. The current approach is to let the user agent handle network discovery under the hood, as done in the Presentation API and Remote Playback API.
WebVR
Development of the WebVR specification, which allowed access to and control of Virtual Reality (VR) devices and was supported in some browsers, has been halted in favor of the WebXR Device API, which extends the scope of the work to Augmented Reality (AR) devices.