This document defines a set of ECMAScript APIs in WebIDL to extend the WebRTC 1.0 API to enable user agents to support scalable video coding (SVC).

The API is based on preliminary work done in the W3C ORTC Community Group.

Introduction

This specification extends the WebRTC specification [[WEBRTC]] to enable configuration of encoding parameters for scalable video coding (SVC). Since SVC bitstreams are self-describing, and the SVC-capable codecs implemented in browsers require that compliant decoders be capable of decoding any legal encoding sent by an encoder, this specification does not support decoder configuration. However, decoders that cannot decode every legal bitstream can describe the scalability modes they do support.

This specification defines conformance criteria that apply to a single product: the user agent that implements the interfaces that it contains.

Conformance requirements phrased as algorithms or specific steps may be implemented in any manner, so long as the end result is equivalent. (In particular, the algorithms defined in this specification are intended to be easy to follow, and not intended to be performant.)

Implementations that use ECMAScript to implement the APIs defined in this specification MUST implement them in a manner consistent with the ECMAScript Bindings defined in the Web IDL specification [[WEBIDL]], as this specification uses that specification and terminology.

Terminology

The term simulcast envelope refers to the maximum number of simulcast streams and the order of the encoding parameters.

This specification references objects, methods, internal slots and dictionaries defined in [[!WEBRTC]].

For Scalable Video Coding (SVC), the terms single-session transmission (SST) and multi-session transmission (MST) are defined in [[RFC6190]]. This specification supports SST but not MST.

The term Single Real-time Transport Protocol (RTP) stream Single Transport (SRST), defined in [[RFC7656]] Section 3.7, refers to SVC implementations that transmit all layers within a single transport, using a single RTP stream and synchronization source (SSRC). The term Multiple RTP stream Single Transport (MRST), also defined in [[RFC7656]] Section 3.7, refers to implementations that transmit all layers within a single transport, using multiple RTP streams with a distinct SSRC for each layer. This specification only supports SRST transport, not MRST. Codecs with RTP payload specifications supporting SRST transport include VP8 [[RFC7741]], VP9 [[VP9-PAYLOAD]], AV1 [[AV1-RTP]] and H.264/SVC [[RFC6190]].

The term "S mode" refers to a scalability mode in which multiple encodings are sent on the same SSRC. This includes the "S2T1", "S2T1h", "S2T2", "S2T2h", "S2T3", "S2T3h", "S3T1", "S3T1h", "S3T2", "S3T2h", "S3T3" and "S3T3h" {{RTCRtpEncodingParameters/scalabilityMode}} values.

Operational model

This specification extends [[!WEBRTC]] to enable configuration of encoding parameters for Scalable Video Coding (SVC), as well as discovery of the SVC capabilities of both an encoder and decoder, by extending the {{RTCRtpEncodingParameters}} and {{RTCRtpCodecCapability}} dictionaries.

Since this specification does not change the behavior of WebRTC objects and methods, restrictions relating to Offer/Answer negotiation and encoding parameters remain, as described in [[!WEBRTC]] Section 5.2: "{{RTCRtpSender/setParameters()}} does not cause SDP renegotiation and can only be used to change what the media stack is sending or receiving within the envelope negotiated by Offer/Answer."

The configuration of SVC-capable codecs implemented in browsers fits within this restriction. Codecs such as VP8 [[RFC6386]], VP9 [[VP9]] and AV1 [[AV1]] mandate support for SVC and require a compliant decoder to be able to decode any compliant encoding that an encoder can send. Therefore, for these codecs there is no need to configure the decoder or to negotiate SVC support within Offer/Answer, enabling encoding parameters to be used for SVC configuration.

Error handling

[[!WEBRTC]] Section 5.2 describes error handling in {{RTCRtpSender/setParameters()}}, including use of {{RTCError}} to indicate a {{RTCErrorDetailType/"hardware-encoder-error"}} due to an unsupported encoding parameter, as well as {{OperationError}} for other errors. Implementations of this specification utilize {{RTCError}} and {{OperationError}} in the prescribed manner when an invalid {{RTCRtpEncodingParameters/scalabilityMode}} value is provided to {{RTCRtpSender/setParameters()}} or {{RTCPeerConnection/addTransceiver()}}.
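
The sketch below (not part of the specification; the helper name and mode string are illustrative) shows one way an application might distinguish these error types when a requested mode is rejected:

async function trySetScalabilityMode(sender, mode) {
  const params = sender.getParameters();
  params.encodings[0].scalabilityMode = mode;
  try {
    await sender.setParameters(params);
  } catch (err) {
    if (err instanceof RTCError && err.errorDetail === 'hardware-encoder-error') {
      // the hardware encoder cannot apply this mode; retry with a simpler one
    } else if (err.name === 'OperationError') {
      // the scalabilityMode value is invalid or unsupported for the codec
    } else {
      throw err;
    }
  }
}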

When the {{RTCPeerConnection/addTransceiver()}} and {{RTCRtpTransceiver/setCodecPreferences()}} methods are called prior to conclusion of the Offer/Answer negotiation, the negotiated codec and its capabilities may not be known. In this situation the {{RTCRtpEncodingParameters/scalabilityMode}} values configured in {{RTCRtpTransceiverInit/sendEncodings}} may not be supported by the eventually negotiated codec. However, an error will result only if the requested {{RTCRtpEncodingParameters/scalabilityMode}} value is not supported by any codec. To determine whether the requested {{RTCRtpEncodingParameters/scalabilityMode}} values have been applied, an application can call the {{RTCRtpSender/getParameters()}} method after negotiation has completed and the sending codec has been determined. If the configuration is not satisfactory, the {{RTCRtpSender/setParameters()}} method can be used to change it.
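
For example (a minimal sketch; the helper name and the 'L1T3' value are illustrative), the check-and-correct step described above might look like this:

async function ensureScalabilityMode(sender, wanted) {
  const params = sender.getParameters();
  if (params.encodings[0].scalabilityMode !== wanted) {
    params.encodings[0].scalabilityMode = wanted;
    // rejects if the negotiated codec cannot apply the requested mode
    await sender.setParameters(params);
  }
}

// once negotiation has completed and the sending codec is known:
// await ensureScalabilityMode(videoSender, 'L1T3');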

Note that where SVC support is negotiated in SDP Offer/Answer, {{RTCRtpSender/setParameters()}} can only change {{RTCRtpEncodingParameters/scalabilityMode}} values within the envelope negotiated by Offer/Answer, resulting in an error if the requested {{RTCRtpEncodingParameters/scalabilityMode}} value is outside this envelope. When {{RTCRtpTransceiverInit/sendEncodings}} is used to request the sending of multiple simulcast streams using {{RTCPeerConnection/addTransceiver()}}, it is not possible to configure the sending of "S" scalability modes. The browser may only be configured to send simulcast encodings with multiple SSRCs and RIDs, or alternatively, to send all simulcast encodings on a single RTP stream. Attempting to simultaneously utilize both simulcast transport techniques MUST return {{OperationError}} in {{RTCRtpSender/setParameters()}} or {{RTCPeerConnection/addTransceiver()}}.
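
For illustration, the sketch below (variable names are illustrative) mixes the two techniques by combining several RID-identified encodings with an "S" mode, and is therefore expected to fail:

try {
  pc.addTransceiver(videoTrack, {
    direction: 'sendonly',
    sendEncodings: [
      {rid: 'a', scalabilityMode: 'S3T3'}, // simulcast on a single RTP stream...
      {rid: 'b', scalabilityMode: 'L1T3'}, // ...plus a second RID-based encoding
    ],
  });
} catch (err) {
  console.error(err.name); // 'OperationError'
}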

Negotiation

So as to ensure that the desired {{RTCRtpEncodingParameters/scalabilityMode}} values can be applied, {{RTCRtpTransceiver/setCodecPreferences()}} can be used to limit the negotiated codecs to those supporting the desired configuration. For example, if temporal scalability is desired along with spatial simulcast, when {{RTCPeerConnection/addTransceiver()}} is called, {{RTCRtpTransceiverInit/sendEncodings}} can be configured to send multiple simulcast streams with different resolutions, with each stream utilizing temporal scalability. If only the VP8, VP9 and AV1 codec implementations support temporal scalability, {{RTCRtpTransceiver/setCodecPreferences()}} can be used to remove the H.264/AVC codec from the Offer, guaranteeing that a codec supporting temporal scalability is negotiated.
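
A sketch of this approach follows (the encoding parameters are illustrative, H.264 is matched by MIME type only, and the preference list is built here from the receiver capabilities):

const transceiver = pc.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: [
    {rid: 'q', scaleResolutionDownBy: 4.0, scalabilityMode: 'L1T3'},
    {rid: 'h', scaleResolutionDownBy: 2.0, scalabilityMode: 'L1T3'},
    {rid: 'f', scalabilityMode: 'L1T3'},
  ],
});

// Offer only non-H.264 codecs, so that a codec supporting temporal
// scalability (VP8, VP9 or AV1) is negotiated.
const preferred = RTCRtpReceiver.getCapabilities('video').codecs
  .filter(c => c.mimeType.toLowerCase() !== 'video/h264');
transceiver.setCodecPreferences(preferred);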

There are situations where a peer may only support reception of a subset of codecs and scalability modes. For example, an SFU that parses codec payloads may only support the H.264/AVC codec without scalability and the H.264/SVC codec with temporal scalability. A browser that can decode any VP8 or VP9 scalability mode may not support H.264/SVC or AV1. In these situations, the {{RTCRtpReceiver}}'s getCapabilities method can be used to determine the scalability modes supported by the {{RTCRtpReceiver}}, and the {{RTCRtpSender}}'s getCapabilities method can be used to determine the scalability modes supported by the {{RTCRtpSender}}. After exchanging capabilities, the application can compute which codecs and {{RTCRtpEncodingParameters/scalabilityMode}} values are supported by both the browser and SFU. The intersection of codecs and scalability modes supported by the browser's {{RTCRtpSender}} and the SFU's receiver can then be used to determine the arguments passed to the browser's {{RTCPeerConnection/addTransceiver()}} and {{RTCRtpSender/setParameters()}} methods.
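
A sketch of the intersection step is shown below; sfuReceiverCaps is assumed to be the SFU's receiver capabilities delivered over the application's own signaling channel, and codecs are matched by MIME type only for brevity:

function intersectSvcCapabilities(localSenderCaps, sfuReceiverCaps) {
  const shared = [];
  for (const local of localSenderCaps.codecs) {
    const remote = sfuReceiverCaps.codecs.find(
      c => c.mimeType.toLowerCase() === local.mimeType.toLowerCase());
    if (!remote) continue;
    // keep only the scalability modes that both ends support
    const modes = (local.scalabilityModes || []).filter(
      m => (remote.scalabilityModes || []).includes(m));
    shared.push({mimeType: local.mimeType, scalabilityModes: modes});
  }
  return shared;
}

const usable = intersectSvcCapabilities(
  RTCRtpSender.getCapabilities('video'), sfuReceiverCaps);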

Since sending simulcast encodings on a single stream is not negotiated within Offer/Answer, an application using SDP signaling needs to determine whether single stream simulcast transport is supported prior to the Offer/Answer negotiation. This can be handled by having the SFU send its receiver capabilities to the application prior to Offer/Answer. This allows the application to determine whether single stream simulcast is supported, and if so, what scalability modes the SFU can handle. For example, an SFU that can only support reception of a maximum of 2 simulcast encodings on a single SSRC with the AV1 codec would only indicate support for the "S2T1" and "S2T1h" scalability modes in its receiver capabilities.
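
For instance (a sketch; sfuReceiverCaps and the fallback encodings are illustrative), the application can inspect the signaled capabilities before creating the Offer and choose between single stream and RID-based simulcast:

const av1 = sfuReceiverCaps.codecs.find(c => c.mimeType === 'video/AV1');
const singleStreamOk = !!av1 && (av1.scalabilityModes || []).includes('S2T1');

pc.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: singleStreamOk
    ? [{scalabilityMode: 'S2T1'}]                // two encodings on one SSRC
    : [{rid: 'lo', scaleResolutionDownBy: 2.0},  // fall back to RID-based
       {rid: 'hi'}],                             // simulcast on two SSRCs
});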

Dictionary extensions

RTCRtpEncodingParameters Dictionary Extensions

partial dictionary RTCRtpEncodingParameters {
             DOMString scalabilityMode;
};

Dictionary {{RTCRtpEncodingParameters}} Members

scalabilityMode of type {{DOMString}}

A case-sensitive identifier of the scalability mode to be used for this stream. The {{RTCRtpEncodingParameters/scalabilityMode}} selected MUST be one of the scalability modes supported for the codec, as indicated in {{RTCRtpCodecCapability}}. Scalability modes are defined in Section 6.

{{RTCRtpCodecCapability}} Dictionary Extensions

partial dictionary RTCRtpCodecCapability {
             sequence<DOMString> scalabilityModes;
};

Dictionary {{RTCRtpCodecCapability}} Members

scalabilityModes of type sequence<{{DOMString}}>

A sequence of the scalability modes (defined in Section 6) supported by the encoder implementation.

In response to a call to {{RTCRtpSender}}.getCapabilities(kind), conformant implementations of this specification MUST return a sequence of scalability modes supported by each codec of that kind. If a codec does not support encoding of any scalability modes, then the {{scalabilityModes}} member is not provided.

In response to a call to {{RTCRtpReceiver}}.getCapabilities(kind), decoders that do not support decoding of scalability modes or that are required to decode any scalability mode (such as compliant VP8, VP9 and AV1 decoders) omit the {{scalabilityModes}} member. However, decoders that only support decoding of a subset of scalability modes MUST return a sequence of the scalability modes supported by that codec.

The {{scalabilityModes}} sequence represents the scalability modes supported by a user agent. For a Selective Forwarding Unit (SFU), the supported {{scalabilityModes}} may depend on the negotiated RTP header extensions. For example, if the SFU cannot parse codec payloads (either because it is not designed to do so, or because the payloads are encrypted), then negotiation of an RTP header extension (such as the AV1 Descriptor defined in Appendix A of [[AV1-RTP]]) may be required to enable the SFU to forward {{scalabilityModes}}. As a result, the {{scalabilityModes}} supported by an SFU may not be known until completion of the Offer/Answer negotiation.

Scalability modes

The scalability modes supported in this specification, as well as their associated identifiers and characteristics, are provided in the table below. The names of the scalability modes (which are case-sensitive) are provided, along with the scalability mode identifiers assigned in [[AV1]] Section 6.7.5 and links to the dependency diagrams provided in Section 9.

While [[AV1]] and VP9 [[VP9-PAYLOAD]] implementations can support all the modes defined in the table, other codecs cannot. For example, VP8 [[RFC7741]] only supports temporal scalability (e.g. "L1T2", "L1T3"). H.264/SVC [[RFC6190]], which supports both temporal and spatial scalability, only permits transport of simulcast on distinct SSRCs, so that it does not support the "S" modes, where multiple encodings are transported on a single RTP stream.

| Scalability Mode Identifier | Spatial Layers | Resolution Ratio | Temporal Layers | Inter-layer dependency | AV1 scalability_mode_idc |
| --- | --- | --- | --- | --- | --- |
| "L1T2" | 1 | | 2 | | SCALABILITY_L1T2 |
| "L1T3" | 1 | | 3 | | SCALABILITY_L1T3 |
| "L2T1" | 2 | 2:1 | 1 | Yes | SCALABILITY_L2T1 |
| "L2T2" | 2 | 2:1 | 2 | Yes | SCALABILITY_L2T2 |
| "L2T3" | 2 | 2:1 | 3 | Yes | SCALABILITY_L2T3 |
| "L3T1" | 3 | 2:1 | 1 | Yes | SCALABILITY_L3T1 |
| "L3T2" | 3 | 2:1 | 2 | Yes | SCALABILITY_L3T2 |
| "L3T3" | 3 | 2:1 | 3 | Yes | SCALABILITY_L3T3 |
| "L2T1h" | 2 | 1.5:1 | 1 | Yes | SCALABILITY_L2T1h |
| "L2T2h" | 2 | 1.5:1 | 2 | Yes | SCALABILITY_L2T2h |
| "L2T3h" | 2 | 1.5:1 | 3 | Yes | SCALABILITY_L2T3h |
| "S2T1" | 2 | 2:1 | 1 | No | SCALABILITY_S2T1 |
| "S2T2" | 2 | 2:1 | 2 | No | SCALABILITY_S2T2 |
| "S2T3" | 2 | 2:1 | 3 | No | SCALABILITY_S2T3 |
| "S2T1h" | 2 | 1.5:1 | 1 | No | SCALABILITY_S2T1h |
| "S2T2h" | 2 | 1.5:1 | 2 | No | SCALABILITY_S2T2h |
| "S2T3h" | 2 | 1.5:1 | 3 | No | SCALABILITY_S2T3h |
| "S3T1" | 3 | 2:1 | 1 | No | SCALABILITY_S3T1 |
| "S3T2" | 3 | 2:1 | 2 | No | SCALABILITY_S3T2 |
| "S3T3" | 3 | 2:1 | 3 | No | SCALABILITY_S3T3 |
| "S3T1h" | 3 | 1.5:1 | 1 | No | SCALABILITY_S3T1h |
| "S3T2h" | 3 | 1.5:1 | 2 | No | SCALABILITY_S3T2h |
| "S3T3h" | 3 | 1.5:1 | 3 | No | SCALABILITY_S3T3h |
"L2T2_KEY" 2 2:1 2 Yes SCALABILITY_L3T2_KEY
"L2T2_KEY_SHIFT" 2 2:1 2 Yes SCALABILITY_L3T2_KEY_SHIFT
"L2T3_KEY" 2 2:1 3 Yes SCALABILITY_L3T3_KEY
"L2T3_KEY_SHIFT" 2 2:1 3 Yes SCALABILITY_L3T3_KEY_SHIFT
"L3T2_KEY" 3 2:1 2 Yes SCALABILITY_L4T5_KEY
"L3T2_KEY_SHIFT" 3 2:1 2 Yes SCALABILITY_L4T5_KEY_SHIFT
"L3T3_KEY" 3 2:1 3 Yes SCALABILITY_L4T7_KEY
"L3T3_KEY_SHIFT" 3 2:1 3 Yes SCALABILITY_L4T7_KEY_SHIFT

Guidelines for addition of {{RTCRtpEncodingParameters/scalabilityMode}} values

When proposing a {{RTCRtpEncodingParameters/scalabilityMode}} value, the following principles should be followed:

  1. The proposed {{RTCRtpEncodingParameters/scalabilityMode}} MUST define entries in the table in Section 6, including values for the Scalability Mode Identifier, the number of spatial and temporal layers, the resolution ratio, inter-layer dependency, and the corresponding AV1 scalability_mode_idc value (if assigned).
  2. The Scalability Mode Identifier SHOULD be consistent with the existing naming scheme: LxTy denotes a {{RTCRtpEncodingParameters/scalabilityMode}} with x spatial layers at a 2:1 resolution ratio and y temporal layers; LxTyh is the same with a 1.5:1 resolution ratio. SxTy denotes x simulcast encodings at a 2:1 resolution ratio, each containing y temporal layers; SxTyh is the same with a 1.5:1 resolution ratio. LxTy_KEY denotes x spatial layers at a 2:1 resolution ratio and y temporal layers in which spatial layers depend on lower spatial layers only at a key frame. LxTy_KEY_SHIFT is the same, except that subsequent frames have their temporal identifiers shifted upward. (A sketch that decodes names following this scheme appears after this list.)
  3. A dependency diagram MUST be supplied, in the format provided in Section 9.
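
The helper below is a non-normative sketch (its name and output fields are illustrative) showing how a name that follows this scheme can be decoded into its components:

function parseScalabilityMode(mode) {
  const m = /^([LS])(\d+)T(\d+)(h?)(_KEY(_SHIFT)?)?$/.exec(mode);
  if (!m) return null; // not a name following the LxTy scheme
  return {
    simulcast: m[1] === 'S',                 // "S": independent encodings on one RTP stream
    spatialLayers: Number(m[2]),
    temporalLayers: Number(m[3]),
    resolutionRatio: m[4] === 'h' ? 1.5 : 2, // meaningful only with multiple spatial layers
    keyFrameDependencyOnly: m[5] !== undefined, // "_KEY" (K-SVC)
    temporalShift: m[6] !== undefined,          // "_KEY_SHIFT"
  };
}

parseScalabilityMode('L3T2_KEY_SHIFT');
// → {simulcast: false, spatialLayers: 3, temporalLayers: 2,
//    resolutionRatio: 2, keyFrameDependencyOnly: true, temporalShift: true}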

Examples

Spatial Simulcast and Temporal Scalability

This example extends [[WEBRTC]] Section 7.1 (Example 1) to demonstrate sending three spatial simulcast layers each with three temporal layers, using an SSRC and RID for each simulcast layer. Only the "sendEncodings" attribute is changed from the original example.

const signaling = new SignalingChannel(); // handles JSON.stringify/parse
const constraints = {audio: true, video: true};
const configuration = {'iceServers': [{'urls': 'stun:stun.example.org'}]};
let pc;

// call start() to initiate
async function start() {
  pc = new RTCPeerConnection(configuration);

  // let the "negotiationneeded" event trigger offer generation
  pc.onnegotiationneeded = async () => {
    try {
      await pc.setLocalDescription();
      // send the offer to the other peer
      signaling.send({description: pc.localDescription});
    } catch (err) {
      console.error(err);
    }
  };

  try {
    // get a local stream, show it in a self-view and add it to be sent
    const stream = await navigator.mediaDevices.getUserMedia(constraints);
    selfView.srcObject = stream;
    pc.addTransceiver(stream.getAudioTracks()[0], {direction: 'sendonly'});
    pc.addTransceiver(stream.getVideoTracks()[0], {
      direction: 'sendonly',
      sendEncodings: [
        {rid: 'q', scaleResolutionDownBy: 4.0, scalabilityMode: 'L1T3'},
        {rid: 'h', scaleResolutionDownBy: 2.0, scalabilityMode: 'L1T3'},
        {rid: 'f', scalabilityMode: 'L1T3'},
      ]    
    });
  } catch (err) {
    console.error(err);
  }
}

signaling.onmessage = async ({data: {description, candidate}}) => {
  try {
    if (description) {
      await pc.setRemoteDescription(description);
      // if we got an offer, we need to reply with an answer
      if (description.type == 'offer') {
        await pc.setLocalDescription();
        signaling.send({description: pc.localDescription});
      }
    } else if (candidate) {
      await pc.addIceCandidate(candidate);
    }
  } catch (err) {
    console.error(err);
  }
};

This is an example with two spatial layers (with a 2:1 ratio) and three temporal layers.

let sendEncodings = [
  {scalabilityMode: 'L2T3'}
];

This is an example with three spatial simulcast layers each with three temporal layers on a single SSRC.

let sendEncodings = [
  {scalabilityMode: 'S3T3'}
];

SVC Encoder Capabilities

This is an example of {{RTCRtpSender}}.getCapabilities('video').codecs[] returned by a browser implementing [[WEBRTC]] and this specification. Only the {{RTCRtpCodecCapability/scalabilityModes}} member is defined in this specification.

  "codecs": [
    {
      "clockRate": 90000,
      "mimeType": "video/VP8",
      "scalabilityModes": [
        "L1T2",
        "L1T3"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=96"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/VP9",
      "scalabilityModes": [
        "L1T2",
        "L1T3",
        "L2T1",
        "L2T2",
        "L2T3",
        "L3T1",
        "L3T2",
        "L3T3",
        "L1T2h",
        "L1T3h",
        "L2T1h",
        "L2T2h",
        "L2T3h"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=98"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "packetization-mode=1;profile-level-id=42001f;level-asymmetry-allowed=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=100"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "packetization-mode=0;profile-level-id=42001f;level-asymmetry-allowed=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=102"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "level-asymmetry-allowed=1;profile-level-id=42e01f;packetization-mode=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=104"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "level-asymmetry-allowed=1;profile-level-id=42e01f;packetization-mode=0"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=106"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "level-asymmetry-allowed=1;profile-level-id=4d0032;packetization-mode=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=108"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "level-asymmetry-allowed=1;profile-level-id=640032;packetization-mode=1"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=110"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/red"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=112"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/ulpfec"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/AV1",
      "scalabilityModes": [
        "L1T2",
        "L1T3",
        "L2T1",
        "L2T2",
        "L2T3",
        "L3T1",
        "L3T2",
        "L3T3",
        "L1T2h",
        "L1T3h",
        "L2T1h",
        "L2T2h",
        "L2T3h",
        "S2T1",
        "S2T2",
        "S2T3",
        "S3T1",
        "S3T2",
        "S3T3",
        "S2T1h",
        "S2T2h",
        "S2T3h",
        "S3T1h",
        "S3T2h",
        "S3T3h"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=113"
    }
]

SFU Capabilities

This is an example of {{RTCRtpReceiver}}.getCapabilities('video').codecs[] returned by a Selective Forwarding Unit (SFU) that only supports forwarding of VP8, VP9 and AV1 temporal scalability modes.

 "codecs": [
    {
      "clockRate": 90000,
      "mimeType": "video/VP8",
      "scalabilityModes": [
        "L1T2",
        "L1T3"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/VP9",
      "scalabilityModes": [
        "L1T2",
        "L1T3",
        "L1T2h",
        "L1T3h"
      ]
    },
    {
      "clockRate": 90000,
      "mimeType": "video/AV1",
      "scalabilityModes": [
        "L1T2",
        "L1T3",
        "L1T2h",
        "L1T3h"
      ]
    }
]

SVC Decoder Capabilities

This is an example of {{RTCRtpReceiver}}.getCapabilities('video').codecs[] returned by a browser that can support all scalability modes of the VP8 and VP9 codecs.

  "codecs": [
    { 
      "clockRate": 90000,
      "mimeType": "video/VP8"
    },
    { 
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=96"
    },
    { 
      "clockRate": 90000,
      "mimeType": "video/VP9"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/rtx",
      "sdpFmtpLine": "apt=98"
    },
    {
      "clockRate": 90000,
      "mimeType": "video/H264",
      "sdpFmtpLine": "packetization-mode=1;profile-level-id=42001f;level-asymmetry-allowed=1"
    },

    ...
]

Privacy and Security Considerations

This section is non-normative; it specifies no new behaviour, but instead summarizes information already present in other parts of the specification. WebRTC protocol security considerations are described in [[RTCWEB-SECURITY-ARCH]] and the security and privacy considerations for the WebRTC APIs are described in [[WEBRTC]] Section 13.

Persistent information

The WebRTC API exposes information about the underlying media system via the {{RTCRtpSender}}.getCapabilities() and {{RTCRtpReceiver}}.getCapabilities() methods, including detailed and ordered information about the codecs that the system is able to produce and consume. The WebRTC-SVC extension adds the {{RTCRtpCodecCapability/scalabilityModes}} supported by the {{RTCRtpSender}} to that information, which is persistent across time, therefore increasing the fingerprint surface. Since for SVC codecs implemented in WebRTC browsers compliant decoders are required to be able to decode all scalability modes, additional information is not provided relating to the {{RTCRtpReceiver}}.

Since for SVC codecs implemented in WebRTC the use of scalable coding tools is not negotiated and is independent of the supported profiles, and since SVC is rarely supported in hardware encoders, knowledge of the {{RTCRtpCodecCapability/scalabilityModes}} supported by the {{RTCRtpSender}} does not provide additional information on the underlying hardware. However, since browsers may differ in their support for SVC modes, the supported {{RTCRtpCodecCapability/scalabilityModes}} may permit differentiation between browsers. This additional fingerprint surface is expected to decrease over time as this specification is more widely implemented.

Scalability Mode Dependency Diagrams

Dependency diagrams for the scalability modes defined in this specification are provided below.

L1T2 and L1T2h

L1T2 and L1T2h: 1-layer spatial and 2-layer temporal scalability encoding

L1T3 and L1T3h

L1T3 and L1T3h: 1-layer spatial and 3-layer temporal scalability encoding

L2T1 and L2T1h

L2T1 and L2T1h: 2-layer spatial and 1-layer temporal scalability encoding

L2T1_KEY

L2T1_KEY: 2-layer spatial and 1-layer temporal scalability K-SVC encoding

L2T2 and L2T2h

L2T2 and L2T2h: 2-layer spatial and 2-layer temporal scalability encoding

L2T2_KEY

L2T2_KEY: 2-layer spatial and 2-layer temporal scalability K-SVC encoding

L2T2_KEY_SHIFT

L2T2_KEY_SHIFT: 2-layer spatial and 2-layer temporal scalability K-SVC encoding with temporal shift

L2T3 and L2T3h

L2T3 and L2T3h: 2-layer spatial and 3-layer temporal scalability encoding

L2T3_KEY

L2T3_KEY: 2-layer spatial and 3-layer temporal scalability K-SVC encoding

L2T3_KEY_SHIFT

L2T3_KEY_SHIFT: 2-layer spatial and 3-layer temporal scalability K-SVC encoding with temporal shift

L3T1 and L3T1h

L3T1 and L3T1h: 3-layer spatial and 1-layer temporal scalability encoding

L3T1_KEY

L3T1_KEY: 3-layer spatial and 1-layer temporal scalability K-SVC encoding

L3T2 and L3T2h

L3T2 and L3T2h: 3-layer spatial and 2-layer temporal scalability encoding

L3T2_KEY

L3T2_KEY: 3-layer spatial and 2-layer temporal scalability K-SVC encoding

L3T2_KEY_SHIFT

L3T2_KEY_SHIFT: 3-layer spatial and 2-layer temporal scalability K-SVC encoding with temporal shift

L3T3 and L3T3h

L3T3 and L3T3h: 3-layer spatial and 3-layer temporal scalability encoding

L3T3_KEY

L3T3_KEY: 3-layer spatial and 3-layer temporal scalability K-SVC encoding

L3T3_KEY_SHIFT

L3T3_KEY_SHIFT: 3-layer spatial and 3-layer temporal scalability K-SVC encoding with temporal shift

S2T1 and S2T1h

S2T1 and S2T1h: 2-layer spatial simulcast encoding

S2T2 and S2T2h

S2T2 and S2T2h: 2-layer spatial simulcast and 2-layer temporal scalability encoding

S2T3 and S2T3h

S2T3 and S2T3h: 2-layer spatial simulcast and 3-layer temporal scalability encoding

S3T1 and S3T1h

S3T1 and S3T1h: 3-layer spatial simulcast encoding

S3T2 and S3T2h

S3T2 and S3T2h: 3-layer spatial simulcast and 2-layer temporal scalability encoding

S3T3 and S3T3h

S3T3 and S3T3h: 3-layer spatial simulcast and 3-layer temporal scalability encoding