Audio Session

Editor’s Draft

This version:
https://w3c.github.io/audio-session/
Latest published version:
https://www.w3.org/TR/audio-session/
Feedback:
GitHub
Editors:
(Apple)
(Mozilla)

Abstract

This specification defines an API surface for controlling how audio is rendered and how it interacts with other audio-playing applications.

Status of this document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

Feedback and comments on this specification are welcome. GitHub Issues are preferred for discussion on this specification. Alternatively, you can send comments to the Media Working Group’s mailing-list, public-media-wg@w3.org (archives). This draft highlights some of the pending issues that are still to be discussed in the working group. No decision has been taken on the outcome of these issues including whether they are valid.

This document was published by the Media Working Group as an Editor’s Draft. This document is intended to become a W3C Recommendation.

Publication as an Editor’s Draft does not imply endorsement by W3C and its Members.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 03 November 2023 W3C Process Document.

1. Introduction

People increasingly consume media (audio/video) through the Web, which has become a primary channel for accessing this type of content. However, media on the Web often lacks seamless integration with underlying platforms. The Audio Session API addresses this gap by enhancing media handling across platforms that support audio session management or similar audio focus features. This API improves how web-based audio interacts with other apps, allowing for better audio mixing or exclusive playback, depending on the context, to provide a more consistent and integrated media experience across devices.

Additionally, some platforms automatically manage a site’s audio session based on media playback and the APIs used to play audio. However, this behavior might not always align with user expectations. This API allows developers to override the default behavior and gain more control over an audio session.
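
For instance, a page that knows it will both capture and render audio can declare that intent up front. The following is a minimal sketch; the feature check guards against user agents that do not expose the API:

if ("audioSession" in navigator) {
  navigator.audioSession.type = "play-and-record";
}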

2. Concepts

A web page can do audio processing in various ways, combining different APIs like HTMLMediaElement or AudioContext. This audio processing has a start and a stop that aggregate all the different audio APIs being used. An audio session represents this aggregated audio processing. It allows web pages to express the general nature of the audio processing done by the web page.

An audio session can be of a particular type, and be in a particular state. An audio session manages the audio for a set of individual sources (microphone recording) and sinks (audio rendering), named audio session elements.

An audio session’s element has a number of properties:

A default type, which is used in computing the type of its audio session.

An audible flag, which is true if the element is playing sound or capturing microphone audio, and false otherwise.

An audio session element is an audible element if its audible flag is true.

Additionally, an audio session element has associated steps for dealing with various state changes. By default, each of these steps is an empty list of steps:

Element update steps, which are run whenever the state of the element or of its audio session changes.

Element suspend steps, which are run when the element needs to stop playing or capturing audio, for instance because its audio session is interrupted.

Element resume steps, which are run when the element can start playing or capturing audio again, for instance because its audio session becomes active again.

This specification defines these steps, the default type and the audible flag for some audio session elements in section § 6 Audio source and sink integration. Specifications defining other elements need to define these steps and properties.

A top-level browsing context has a selected audio session. Whenever any audio session changes, the user agent updates which audio session becomes the selected audio session. A top-level browsing context is said to have audio focus if its selected audio session is not null and its state is active.

User agents can decide whether to allow several top-level browsing contexts to have audio focus, or to enforce that only a single top-level browsing context has audio focus at any given time.

3. The AudioSession interface

AudioSession is the main interface for this API. It is accessed through the Navigator interface (see § 4 Extensions to the Navigator interface).

[Exposed=Window]
interface AudioSession : EventTarget {
  attribute AudioSessionType type;

  readonly attribute AudioSessionState state;
  attribute EventHandler onstatechange;
};

To create an AudioSession object in realm, run the following steps:

  1. Let audioSession be a new AudioSession object in realm, initialized with the following internal slots:

    1. [[type]] to store the audio session type, initialized to auto.

    2. [[state]] to store the audio session state, initialized to inactive.

    3. [[elements]] to store the audio session elements, initialized to an empty list.

    4. [[interruptedElements]] to store the audio session elements that were interrupted while being audible, initialized to an empty list.

    5. [[appliedType]] to store the type applied to the audio session, initialized to auto.

    6. [[isTypeBeingApplied]] flag to store whether the type is being applied to the audio session, initialized to false.

  2. Return audioSession.

Each AudioSession object is uniquely tied to its underlying audio session.

The AudioSession state attribute reflects its audio session state. On getting, it MUST return the AudioSession [[state]] value.

The AudioSession type attribute reflects its audio session type, except for auto, which the user agent resolves to a concrete type when applying it (see the steps to compute the audio session type).

On getting, it MUST return the AudioSession [[type]] value.

On setting, it MUST run the following steps with newValue being the new value being set on audioSession:

  1. If audioSession.[[type]] is equal to newValue, abort these steps.

  2. Set audioSession.[[type]] to newValue.

  3. Update the type of audioSession.
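
For example, per step 1 above, assigning the same value twice only triggers the update steps once (a minimal sketch):

const session = navigator.audioSession;
session.type = "playback"; // runs the "update the type" steps
session.type = "playback"; // same value: the setter aborts at step 1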

3.1. Audio session types

By convention, there are several different audio session types for different purposes. In the API, these are represented by the AudioSessionType enum:

playback
Playback audio, which is used for video or music playback, podcasts, etc. It should not mix with other playback audio. (Maybe) it should pause all other audio indefinitely.
transient
Transient audio, such as a notification ping. It usually should play on top of playback audio (and maybe also "duck" persistent audio).
transient-solo
Transient solo audio, such as driving directions. It should pause/mute all other audio and play exclusively. When transient-solo audio ends, the paused/muted audio should resume.
ambient
Ambient audio, which is mixable with other types of audio. This is useful in some special cases, such as when the user wants to mix audio from multiple pages.
play-and-record
Play and record audio, which is used for recording audio. This is useful in cases where the microphone is being used, or in video conferencing applications.
auto
Auto lets the user agent choose the best audio session type according to the use of audio by the web page. This is the default type of AudioSession.
enum AudioSessionType {
  "auto",
  "playback",
  "transient",
  "transient-solo",
  "ambient",
  "play-and-record"
};

An AudioSessionType is an exclusive type if it is playback, play-and-record or transient-solo.
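
This definition can be mirrored in script, for instance to decide whether two sessions may compete for audio focus. The helper below is hypothetical and not part of the API:

// Hypothetical helper mirroring the "exclusive type" definition above.
function isExclusiveType(type) {
  return type === "playback" ||
         type === "play-and-record" ||
         type === "transient-solo";
}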

3.2. Audio session states

An audio session can be in one of the following states, which are represented in the API by the AudioSessionState enum:

active
The audio session is playing sound or capturing microphone audio.
interrupted
The audio session is not playing sound nor capturing microphone audio, but can resume once the interruption ends.
inactive
The audio session is not playing sound nor capturing microphone audio.
enum AudioSessionState {
  "inactive",
  "active",
  "interrupted"
};

The audio session's state may change, which will automatically be reflected on its AudioSession object via the steps to notify the state’s change.
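
For example, a page can observe these transitions through the statechange event:

navigator.audioSession.onstatechange = () => {
  // One of "active", "interrupted" or "inactive".
  console.log(`audio session state: ${navigator.audioSession.state}`);
};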

4. Extensions to the Navigator interface

Each Window has an associated AudioSession, which is an AudioSession object. It represents the default audio session that is used by the user agent to automatically set up the audio session parameters. The user agent will request or abandon audio focus when audio session elements start or finish playing. Upon creation of the Window object, its associated AudioSession MUST be set to a newly created AudioSession object with the Window object’s relevant realm.

The associated AudioSession list of elements is updated dynamically as audio sources and sinks of the Window object are created or removed.

[Exposed=Window]
partial interface Navigator {
  // The default audio session that the user agent will use when media elements start/stop playing.
  readonly attribute AudioSession audioSession;
};
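
For example, before any audio element starts, a page can read the defaults of its associated AudioSession (a minimal sketch):

const session = navigator.audioSession;
console.log(session.type);  // "auto" until the page sets it
console.log(session.state); // "inactive" until an element starts playing or recording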

5. Audio session algorithms

5.1. Update AudioSession’s type

To update the type of audioSession, the user agent MUST run the following steps:

  1. If audioSession.[[isTypeBeingApplied]] is true, abort these steps.

  2. Set audioSession.[[isTypeBeingApplied]] to true.

  3. Queue a task to run the following steps:

    1. Set audioSession.[[isTypeBeingApplied]] to false.

    2. If audioSession.[[type]] is the same as audioSession.[[appliedType]], abort these steps.

    3. Set audioSession.[[appliedType]] to audioSession.[[type]].

    4. Update all AudioSession states of audioSession’s top-level browsing context with audioSession.

    5. For each element of audioSession.[[elements]], update element.

    6. Let newType be the result of computing the type of audioSession.

    7. In parallel, set the type of audioSession’s audio session to newType.
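
A consequence of the [[isTypeBeingApplied]] guard is that several synchronous assignments coalesce into a single application of the final value, as in the following sketch:

// Both assignments run before the queued task executes; only the
// final value, "transient", is applied to the underlying audio session.
navigator.audioSession.type = "playback";
navigator.audioSession.type = "transient";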

5.2. Update AudioSession’s state

When an audio session element is starting or stopping, the user agent will run steps that set the state of an audio session, via the inactivate and try activating algorithms. Setting an audio session's state to active has consequences, especially if the audio session's type is an exclusive type: activating it can inactivate other active audio sessions whose computed type is an exclusive type, as described in § 5.4 Other algorithms.

Conversely, an audio session state can be modified outside of audio session element changes. When the user agent observes such a modification, the user agent MUST queue a task to notify the state’s change with audioSession, the AudioSession object tied to the modified audio session and with newState being the new audio session state.

An active playback audio session can be interrupted by an incoming phone call, or by another playback session that is going to start playing new media content in another tab.

To notify the state’s change with audioSession and newState, the user agent MUST run the following steps:

  1. Let isMutatingState be true if audioSession.[[state]] is not newState and false otherwise.

  2. Set audioSession.[[state]] to newState.

  3. If newState is inactive, set audioSession.[[interruptedElements]] to an empty list.

  4. For each element of audioSession.[[elements]], update element.

  5. If isMutatingState is false, abort these steps.

  6. Update all AudioSession states of audioSession’s top-level browsing context with audioSession.

  7. Fire an event named statechange at audioSession.

To inactivate an AudioSession named audioSession, the user agent MUST run the following steps:

  1. If audioSession.[[state]] is inactive, abort these steps.

  2. Run the following steps in parallel:

    1. Set the state of audioSession’s audio session to inactive.

    2. Assert: audioSession’s audio session's state is inactive.

    3. Queue a task to notify the state’s change with audioSession and with its audio session's state.

To try activating an AudioSession named audioSession, the user agent MUST run the following steps:

  1. If audioSession.[[state]] is active, abort these steps.

  2. Run the following steps in parallel:

    1. Set the state of audioSession’s audio session to active. Setting the state to active can fail, in which case the audio session's state will either be inactive or interrupted.

    2. Queue a task to notify the state’s change with audioSession and with its audio session's state.

Activating an audio session can fail for various reasons. For instance, a web application may try to start playing some audio while a higher privilege application, like a phone call application, is already playing audio.
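
A page cannot observe such a failure directly, but it can watch the session state around a playback attempt. In the following sketch, showCannotPlayHint is a hypothetical UI helper:

audioElement.play().catch(() => {
  // Playback itself was refused, e.g. by an autoplay policy.
});
navigator.audioSession.onstatechange = () => {
  if (navigator.audioSession.state === "interrupted") {
    showCannotPlayHint(); // hypothetical UI helper
  }
};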

5.3. Update the selected audio session

To update the selected audio session of a top-level browsing context named context, the user agent MUST run the following steps:

  1. Let activeAudioSessions be the list of all the audio sessions tied to AudioSession objects of context and its children in a breadth-first order, that match both of the following constraints:

    1. Its state is active.

    2. The result of computing the type of the AudioSession object is an exclusive type.

  2. If activeAudioSessions is empty, abort these steps.

  3. If there is only one audio session in activeAudioSessions, set the selected audio session to this audio session and abort these steps.

  4. Assert: for any AudioSession object audioSession tied to an audio session in activeAudioSessions, audioSession.[[type]] is auto.

    It is expected that only one audio session with an explicit exclusive type can be active at any point in time. If there are multiple active audio sessions in activeAudioSessions, their [[type]] can only be auto.
  5. The user agent MAY apply specific heuristics to reorder activeAudioSessions.

  6. Set the selected audio session to the first audio session in activeAudioSessions.

5.4. Other algorithms

To update all AudioSession states of a top-level browsing context named context with updatedAudioSession, run the following steps:

  1. Update the selected audio session of context.

  2. Let updatedType be the result of computing the type of updatedAudioSession.

  3. If updatedType is not an exclusive type or updatedAudioSession.[[state]] is not active, abort these steps.

  4. Let audioSessions be the list of all the AudioSession objects of context and its children in a breadth-first order.

  5. For each audioSession of audioSessions except for updatedAudioSession, run the following steps:

    1. If audioSession.[[state]] is not active, abort these steps.

    2. Let type be the result of computing the type of audioSession.

    3. If type is not an exclusive type, abort these steps.

    4. If type and updatedType are both auto, abort these steps.

    5. Inactivate audioSession.

To compute the audio session type of audioSession, the user agent MUST run the following steps:

  1. If audioSession.[[type]] is not auto, return audioSession.[[type]].

  2. If any element of audioSession.[[elements]] has a default type of play-and-record and its state is active, return play-and-record.

  3. If any element of audioSession.[[elements]] has a default type of playback and its state is active, return playback.

  4. If any element of audioSession.[[elements]] has a default type of transient-solo and its state is active, return transient-solo.

  5. If any element of audioSession.[[elements]] has a default type of transient and its state is active, return transient.

  6. Return ambient.
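
The precedence above can be summarized by the following sketch of the auto case, where elements is assumed to be a list of records with defaultType and state fields:

// Hypothetical sketch of the resolution order used when [[type]] is "auto".
function computeAutoType(elements) {
  const order = ["play-and-record", "playback", "transient-solo", "transient"];
  for (const type of order) {
    if (elements.some((e) => e.defaultType === type && e.state === "active"))
      return type;
  }
  return "ambient";
}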

6. Audio source and sink integration

This section describes audio session element's steps and properties for AudioContext, HTMLMediaElement and microphone MediaStreamTrack.

An element state is:

active, if the element is an audible element;

interrupted, if the element is in its audio session’s [[interruptedElements]] list;

inactive, otherwise.

To update an element named element, the user agent MUST run the following steps:

  1. Let audioSession be element’s AudioSession.

  2. Run element’s update steps.

  3. If element is an audible element and audioSession.[[state]] is interrupted, run the following steps:

    1. Add element to audioSession.[[interruptedElements]].

    2. Run element’s suspend steps.

  4. If element is in audioSession.[[interruptedElements]], and audioSession.[[state]] is active, run the following steps:

    1. Remove element from audioSession.[[interruptedElements]].

    2. Run element’s resume steps.

When the audible flag of one of audioSession’s elements is changing, the user agent MUST run the following steps:

  1. If the audible flag is changing to true, try activating audioSession.

  2. Otherwise, if any element of audioSession.[[elements]] has a state of interrupted, abort these steps.

  3. Otherwise, inactivate audioSession.

6.1. AudioContext

An AudioContext is an element with the following properties:

When an AudioContext is created, the user agent MUST run the following steps:

  1. Let audioContext be the newly created AudioContext.

  2. Let audioSession be the AudioSession object of the Window object in which audioContext is created.

  3. Add audioContext to audioSession.[[elements]].
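
Under these steps, script does not need to register anything explicitly: every AudioContext created in a window is tracked by that window’s audio session. For instance, in the following sketch the context becomes one of the session’s elements upon construction:

const audioContext = new AudioContext();
const oscillator = new OscillatorNode(audioContext);
oscillator.connect(audioContext.destination);
oscillator.start(); // once audible, the session can become "active"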

6.2. HTMLMediaElement

An HTMLMediaElement is an element with the following properties:

When an HTMLMediaElement's node document is changing, the user agent MUST run the following steps:

  1. Let mediaElement be the HTMLMediaElement whose node document is changing.

  2. Let previousWindow be the Window object associated with mediaElement’s previous node document, if any, or null otherwise.

  3. If previousWindow is not null, remove mediaElement from previousWindow’s associated AudioSession.[[elements]].

  4. Let newWindow be the Window object associated with mediaElement’s new node document, if any, or null otherwise.

  5. If newWindow is not null, add mediaElement to newWindow’s associated AudioSession.[[elements]].

6.3. Microphone MediaStreamTrack

A microphone capture MediaStreamTrack is an element with the following properties:

When a microphone capture MediaStreamTrack is created, the user agent MUST run the following steps:

  1. Let track be the newly created MediaStreamTrack.

  2. Let audioSession be the AudioSession object of the Window object in which track is created.

  3. Add track to audioSession.[[elements]].

FIXME: We should be hooking into the audio track’s sources stored in the Window’s mediaDevices’s mediaStreamTrackSources, instead of MediaStreamTrack. This should handle the case of transferred microphone tracks.
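
As with AudioContext, no explicit registration is needed: a microphone track obtained through getUserMedia joins the session’s elements upon creation, which can steer an auto session toward play-and-record. A minimal sketch:

navigator.mediaDevices
  .getUserMedia({ audio: true })
  .then((stream) => {
    // The stream's audio track is now one of the window's audio
    // session elements.
  });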

7. Privacy considerations

8. Security considerations

9. Examples

9.1. A site sets its audio session type proactively to "play-and-record"

navigator.audioSession.type = 'play-and-record';
// From now on, volume might be set based on 'play-and-record'.
...
// Start playing remote media
remoteVideo.srcObject = remoteMediaStream;
remoteVideo.play();
// Start capturing
navigator.mediaDevices
  .getUserMedia({ audio: true, video: true })
  .then((stream) => {
    localVideo.srcObject = stream;
  });

9.2. A site reacts upon interruption

navigator.audioSession.type = "play-and-record";
// From now on, volume might be set based on 'play-and-record'.
...
// Start playing remote media
remoteVideo.srcObject = remoteMediaStream;
remoteVideo.play();
// Start capturing
navigator.mediaDevices
  .getUserMedia({ audio: true, video: true })
  .then((stream) => {
    localVideo.srcObject = stream;
  });

navigator.audioSession.onstatechange = async () => {
  if (navigator.audioSession.state === "interrupted") {
    localVideo.pause();
    remoteVideo.pause();
    // Make it clear to the user that the call is interrupted.
    showInterruptedBanner();
    for (const track of localVideo.srcObject.getTracks()) {
      track.enabled = false;
    }
  } else {
    // Let user decide when to restart the call.
    const shouldRestart = await showOptionalRestartBanner();
    if (!shouldRestart) {
      return;
    }
    for (const track of localVideo.srcObject.getTracks()) {
      track.enabled = true;
    }
    localVideo.play();
    remoteVideo.play();
  }
};

10. Acknowledgements

The Working Group acknowledges the following people for their invaluable contributions to this specification:

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

Index

Terms defined by this specification

Terms defined by reference

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[MEDIACAPTURE-STREAMS]
Cullen Jennings; et al. Media Capture and Streams. URL: https://w3c.github.io/mediacapture-main/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[WEBAUDIO]
Paul Adenot; Hongchan Choi. Web Audio API. URL: https://webaudio.github.io/web-audio-api/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/

IDL Index

[Exposed=Window]
interface AudioSession : EventTarget {
  attribute AudioSessionType type;

  readonly attribute AudioSessionState state;
  attribute EventHandler onstatechange;
};

enum AudioSessionType {
  "auto",
  "playback",
  "transient",
  "transient-solo",
  "ambient",
  "play-and-record"
};

enum AudioSessionState {
  "inactive",
  "active",
  "interrupted"
};

[Exposed=Window]
partial interface Navigator {
  // The default audio session that the user agent will use when media elements start/stop playing.
  readonly attribute AudioSession audioSession;
};