WebRTC Encoded Transform

Editor’s Draft

This version:
https://w3c.github.io/webrtc-encoded-transform/
Latest published version:
https://www.w3.org/TR/webrtc-encoded-transform/
Feedback:
public-webrtc@w3.org with subject line “[webrtc-encoded-transform] … message topic …” (archives)
GitHub
Editors:
(Google)
(Google)
(Apple)

Abstract

This specification defines an API for manipulating the bits on MediaStreamTracks being sent via an RTCPeerConnection.

Status of this document

This is a public copy of the editors’ draft. It is provided for discussion only and may change at any moment. Its publication here does not imply endorsement of its contents by W3C. Don’t cite this document other than as work in progress.

If you wish to make comments regarding this document, please send them to public-webrtc@w3.org (subscribe, archives). When sending e-mail, please put the text “webrtc-encoded-transform” in the subject, preferably like this: “[webrtc-encoded-transform] …summary of comment…”. All comments are welcome.

This document was produced by the Web Real-Time Communications Working Group.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 03 November 2023 W3C Process Document.

1. Introduction

The [WEBRTC-NV-USE-CASES] document describes the use case of untrusted JavaScript cloud conferencing, which requires that the conferencing server does not have access to the cleartext media (requirement N27).

This specification provides access to encoded media, which is the output of the encoder part of a codec and the input to the decoder part of a codec. This allows the user agent to apply processing, such as encryption, locally.

The interface is inspired by [WEBCODECS] to provide access to such functionality while retaining the setup flow of RTCPeerConnection.

2. Specification

The Streams definition doesn’t use WebIDL much, but the WebRTC spec does. This specification shows the IDL extensions for WebRTC.

It uses an additional API on RTCRtpSender and RTCRtpReceiver to insert the processing into the pipeline.

typedef (SFrameTransform or RTCRtpScriptTransform) RTCRtpTransform;

// New methods for RTCRtpSender and RTCRtpReceiver
partial interface RTCRtpSender {
    attribute RTCRtpTransform? transform;
};

partial interface RTCRtpReceiver {
    attribute RTCRtpTransform? transform;
};
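
For example, a web application might attach a transform when adding a track, and a matching transform on the receiving side. This is a minimal, non-normative sketch; SFrameTransform is defined in § 3, and pc, track, and stream are assumed to already exist:

// Sender side: process outgoing encoded frames.
const sender = pc.addTrack(track, stream);
sender.transform = new SFrameTransform({ role: "encrypt" });

// Receiver side: process incoming encoded frames as tracks arrive.
pc.ontrack = (event) => {
  event.receiver.transform = new SFrameTransform({ role: "decrypt" });
};

// Setting the attribute back to null removes the transform.
// sender.transform = null;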

2.1. Extension operation

At the time when a codec is initialized as part of the encoder, and the corresponding flag is set in the RTCPeerConnection's RTCConfiguration argument, ensure that the codec is disabled and produces no output.

2.1.1. Stream creation

At construction of each RTCRtpSender or RTCRtpReceiver, run the following steps:

  1. Initialize this.[[transform]] to null.

  2. Initialize this.[[readable]] to a new ReadableStream.

  3. Set up this.[[readable]]. this.[[readable]] is provided frames using the readEncodedData algorithm given this as parameter.

  4. Initialize this.[[writable]] to a new WritableStream.

  5. Set up this.[[writable]] with its writeAlgorithm set to writeEncodedData given this as parameter and its highWaterMark set to Infinity.

    highWaterMark is set to Infinity to explicitly disable backpressure.

  6. Initialize this.[[pipeToController]] to null.

  7. Initialize this.[[lastReceivedFrameCounter]] to 0.

  8. Initialize this.[[lastEnqueuedFrameCounter]] to 0.

  9. Queue a task to run the following steps:

    1. If this.[[pipeToController]] is not null, abort these steps.

    2. Set this.[[pipeToController]] to a new AbortController.

    3. Call pipeTo with this.[[readable]], this.[[writable]], preventClose equal to true, preventAbort equal to true, preventCancel equal to true and this.[[pipeToController]]’s signal.

Streams backpressure can optimize throughput while limiting processing and memory consumption by pausing data production as early as possible in a data pipeline. This proves useful in contexts where reliability is essential and latency is less of a concern. On the other hand, WebRTC media pipelines favour low latency over reliability, for instance by allowing frames to be dropped at various places and by using recovery mechanisms. Buffering within a transform would add latency without allowing web applications to adapt much. The User Agent is responsible for doing these adaptations, especially since it controls both ends of the transform. For those reasons, streams backpressure is disabled in WebRTC encoded transforms.

2.1.2. Stream processing

The readEncodedData algorithm is given a rtcObject as parameter. It is defined by running the following steps:

  1. Wait for a frame to be produced by rtcObject’s encoder if it is a RTCRtpSender or rtcObject’s depacketizer if it is a RTCRtpReceiver.

  2. Increment rtcObject.[[lastEnqueuedFrameCounter]] by 1.

  3. Let frame be the newly produced frame.

  4. Set frame.[[owner]] to rtcObject.

  5. Set frame.[[counter]] to rtcObject.[[lastEnqueuedFrameCounter]].

  6. Enqueue frame in rtcObject.[[readable]].

The writeEncodedData algorithm is given a rtcObject as parameter and a frame as input. It is defined by running the following steps:

  1. If frame.[[owner]] is not equal to rtcObject, abort these steps and return a promise resolved with undefined. A processor cannot create frames, or move frames between streams.

  2. If frame.[[counter]] is equal or smaller than rtcObject.[[lastReceivedFrameCounter]], abort these steps and return a promise resolved with undefined. A processor cannot reorder frames, although it may delay them or drop them.

  3. Set rtcObject.[[lastReceivedFrameCounter]] to frame.[[counter]].

  4. Let data be frame.[[data]].

  5. Let serializedFrame be StructuredSerializeWithTransfer(frame, « data »).

  6. Let frameCopy be StructuredDeserialize(serializedFrame, frame’s relevant realm).

  7. Enqueue frameCopy for processing as if it came directly from the encoded data source, by running one of the following steps:

    1. If rtcObject is a RTCRtpSender, enqueue frameCopy in rtcObject’s packetizer, to be processed in parallel.

    2. If rtcObject is a RTCRtpReceiver, enqueue frameCopy in rtcObject’s decoder, to be processed in parallel.

  8. Return a promise resolved with undefined.

On the sender side, as part of readEncodedData, frames produced by rtcObject’s encoder MUST be enqueued in rtcObject.[[readable]] in the encoder’s output order. As writeEncodedData ensures that the transform cannot reorder frames, the encoder’s output order is also the order followed by packetizers to generate RTP packets and assign RTP packet sequence numbers. The packetizer may expect the transformed data to still conform to the original format, e.g. a series of NAL units separated by Annex B start codes.

On the receiver side, as part of readEncodedData, frames produced by rtcObject’s depacketizer MUST be enqueued in rtcObject.[[readable]] in the encoder’s output order. To ensure the order is respected, the depacketizer will typically use RTP packet sequence numbers to reorder RTP packets as needed before enqueuing frames in rtcObject.[[readable]]. As writeEncodedData ensures that the transform cannot reorder frames, this will be the order expected by rtcObject’s decoder.

2.2. Extension attribute

A RTCRtpTransform has two private slots called [[readable]] and [[writable]].

Each RTCRtpTransform has an association steps set, which is empty by default.

The transform getter steps are:

  1. Return this.[[transform]].

The transform setter steps are:

  1. Let transform be the argument to the setter.

  2. Let checkedTransform be set to transform if it is not null, or to an identity transform stream otherwise.

  3. Let reader be the result of getting a reader for checkedTransform.[[readable]].

  4. Let writer be the result of getting a writer for checkedTransform.[[writable]].

  5. Initialize newPipeToController to a new AbortController.

  6. If this.[[pipeToController]] is not null, run the following steps:

    1. Add the chain transform algorithm to this.[[pipeToController]]’s signal.

    2. Signal abort on this.[[pipeToController]].

  7. Else, run the chain transform algorithm steps.

  8. Set this.[[pipeToController]] to newPipeToController.

  9. Set this.[[transform]] to transform.

  10. Run the steps in the set of association steps of transform with this.

The chain transform algorithm steps are defined as:

  1. If newPipeToController’s signal is aborted, abort these steps.

  2. Release reader.

  3. Release writer.

  4. Assert that newPipeToController is the same object as rtcObject.[[pipeToController]].

  5. Call pipeTo with rtcObject.[[readable]], checkedTransform.[[writable]], preventClose equal to false, preventAbort equal to false, preventCancel equal to true and newPipeToController’s signal.

  6. Call pipeTo with checkedTransform.[[readable]], rtcObject.[[writable]], preventClose equal to true, preventAbort equal to true, preventCancel equal to false and newPipeToController’s signal.

This algorithm is defined so that transforms can be updated dynamically. There is no guarantee as to which frame the switch from the previous transform to the new transform will happen on.

If a web application sets the transform synchronously at creation of the RTCRtpSender (for instance when calling addTrack), the transform will receive the first frame generated by the RTCRtpSender's encoder. Similarly, if a web application sets the transform synchronously at creation of the RTCRtpReceiver (for instance when calling addTrack, or in a track event handler), the transform will receive the first full frame generated by the RTCRtpReceiver's depacketizer.
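
The following non-normative sketch illustrates this timing guarantee; worker is an assumed Worker instance, and RTCRtpScriptTransform is defined in § 4:

// Setting the transform in the same task that creates the sender
// guarantees the very first encoded frame flows through it.
const sender = pc.addTrack(track, stream);
sender.transform = new RTCRtpScriptTransform(worker, { side: "send" });

// By contrast, setting the transform only after an await may let
// initial frames reach the packetizer untransformed:
//   const sender = pc.addTrack(track, stream);
//   await doSomethingElse();
//   sender.transform = new RTCRtpScriptTransform(worker, { side: "send" });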

3. SFrameTransform

The API presented in this section allows applications to process SFrame data as defined in [SFrame].

enum SFrameTransformRole {
    "encrypt",
    "decrypt"
};

dictionary SFrameTransformOptions {
    SFrameTransformRole role = "encrypt";
};

typedef [EnforceRange] unsigned long long SmallCryptoKeyID;
typedef (SmallCryptoKeyID or bigint) CryptoKeyID;

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransform : EventTarget {
    constructor(optional SFrameTransformOptions options = {});
    Promise<undefined> setEncryptionKey(CryptoKey key, optional CryptoKeyID keyID);
    attribute EventHandler onerror;
};
SFrameTransform includes GenericTransformStream;

enum SFrameTransformErrorEventType {
    "authentication",
    "keyID",
    "syntax"
};

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransformErrorEvent : Event {
    constructor(DOMString type, SFrameTransformErrorEventInit eventInitDict);

    readonly attribute SFrameTransformErrorEventType errorType;
    readonly attribute CryptoKeyID? keyID;
    readonly attribute any frame;
};

dictionary SFrameTransformErrorEventInit : EventInit {
    required SFrameTransformErrorEventType errorType;
    required any frame;
    CryptoKeyID? keyID;
};
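
As a rough, non-normative usage sketch, an application could listen for transform errors while decrypting received frames; receiver is an assumed RTCRtpReceiver, and requestKeyFromServer and reportCorruptFrame are hypothetical application helpers:

const sframe = new SFrameTransform({ role: "decrypt" });
sframe.onerror = (event) => {
  // event is an SFrameTransformErrorEvent.
  switch (event.errorType) {
    case "keyID":
      // The sender used a key we do not have yet; event.keyID says which.
      requestKeyFromServer(event.keyID);
      break;
    case "authentication":
    case "syntax":
      reportCorruptFrame(event.frame);
      break;
  }
};
receiver.transform = sframe;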

The new SFrameTransform(options) constructor steps are:

  1. Let transformAlgorithm be an algorithm which takes a frame as input and runs the SFrame transform algorithm with this and frame.

  2. Set this.[[transform]] to a new TransformStream.

  3. Set up this.[[transform]] with transformAlgorithm set to transformAlgorithm.

  4. Let options be the method’s first argument.

  5. Set this.[[role]] to options["role"].

  6. Set this.[[readable]] to this.[[transform]].[[readable]].

  7. Set this.[[writable]] to this.[[transform]].[[writable]].

3.1. Algorithm

The SFrame transform algorithm, given sframe as a SFrameTransform object and frame, runs these steps:

  1. Let role be sframe.[[role]].

  2. If frame.[[owner]] is a RTCRtpSender, set role to "encrypt".

  3. If frame.[[owner]] is a RTCRtpReceiver, set role to "decrypt".

  4. Let data be undefined.

  5. If frame is a BufferSource, set data to frame.

  6. If frame is a RTCEncodedAudioFrame, set data to frame.data

  7. If frame is a RTCEncodedVideoFrame, set data to frame.data

  8. If data is undefined, abort these steps.

  9. Let buffer be the result of running the SFrame algorithm with data and role as parameters. This algorithm is defined by the SFrame specification and returns an ArrayBuffer.

  10. If the SFrame algorithm exits abruptly with an error, queue a task to run the following sub steps:

    1. If the processing fails on decryption side due to data not following the SFrame format, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to syntax and its frame attribute set to frame.

    2. If the processing fails on decryption side due to the key identifier parsed in data being unknown, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to keyID, its frame attribute set to frame and its keyID attribute set to the keyID value parsed in the SFrame header.

    3. If the processing fails on decryption side due to validation of the authentication tag, fire an event named error at sframe, using the SFrameTransformErrorEvent interface with its errorType attribute set to authentication and its frame attribute set to frame.

    4. Abort these steps.

  11. If frame is a BufferSource, set frame to buffer.

  12. If frame is a RTCEncodedAudioFrame, set frame.data to buffer.

  13. If frame is a RTCEncodedVideoFrame, set frame.data to buffer.

  14. Enqueue frame in sframe.[[transform]].

3.2. Methods

The setEncryptionKey(key, keyID) method steps are:
  1. Let promise be a new promise.

  2. If keyID is a bigint which cannot be represented as an integer between 0 and 2^64 - 1 inclusive, reject promise with a RangeError exception.

  3. Otherwise, in parallel, run the following steps:

    1. Set key with its optional keyID as key material to use for the SFrame transform algorithm, as defined by the SFrame specification.

    2. If setting the key material fails, reject promise with an InvalidModificationError exception and abort these steps.

    3. Resolve promise with undefined.

  4. Return promise.
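
A non-normative sketch of providing key material follows, assuming raw key bytes obtained out of band and code running inside an async function; the "HKDF" import algorithm is an illustrative assumption, since the exact key material accepted is defined by the SFrame specification:

// keyBytes would come from the application's key exchange mechanism;
// random bytes stand in here.
const keyBytes = crypto.getRandomValues(new Uint8Array(16));
const key = await crypto.subtle.importKey(
    "raw", keyBytes, "HKDF", false, ["deriveBits"]);
// Associate the key with key identifier 1.
await sframe.setEncryptionKey(key, 1);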

4. RTCRtpScriptTransform

4.1. RTCEncodedVideoFrameType dictionary

// New enum for video frame types. Will eventually re-use the equivalent defined
// by WebCodecs.
enum RTCEncodedVideoFrameType {
    "empty",
    "key",
    "delta",
};
Enumeration description:

  empty: This frame contains no data.

  key: This frame can be decoded without reference to any other frames.

  delta: This frame references another frame and cannot be decoded without that frame.

4.2. RTCEncodedVideoFrameMetadata dictionary

dictionary RTCEncodedVideoFrameMetadata {
    unsigned long long frameId;
    sequence<unsigned long long> dependencies;
    unsigned short width;
    unsigned short height;
    unsigned long spatialIndex;
    unsigned long temporalIndex;
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
    long long timestamp;    // microseconds
    unsigned long rtpTimestamp;
    DOMString mimeType;
};

4.2.1. Members

frameId, of type unsigned long long

An identifier for the encoded frame, monotonically increasing in decode order. Its lower 16 bits match the frame_number of the AV1 Dependency Descriptor Header Extension defined in Appendix A of [AV1-RTP-SPEC], if present. Only present for received frames if the Dependency Descriptor Header Extension is present.

dependencies, of type sequence<unsigned long long>

List of frameIds of frames this frame references. Only present for received frames if the AV1 Dependency Descriptor Header Extension defined in Appendix A of [AV1-RTP-SPEC] is present.

synchronizationSource, of type unsigned long

The synchronization source (ssrc) identifier is an unsigned integer value per [RFC3550] used to identify the stream of RTP packets that the encoded frame object is describing.

payloadType, of type octet

The payload type is an unsigned integer value in the range from 0 to 127 per [RFC3550] that is used to describe the format of the RTP payload.

contributingSources, of type sequence<unsigned long>

The list of contributing sources (csrc list) as defined in [RFC3550].

timestamp, of type long long

The media presentation timestamp (PTS) in microseconds of the raw frame, matching the timestamp of the raw frames that correspond to this encoded frame.

rtpTimestamp, of type unsigned long

The RTP timestamp identifier is an unsigned integer value per [RFC3550] that reflects the sampling instant of the first octet in the RTP data packet.

mimeType, of type DOMString

The codec MIME media type/subtype defined in the IANA media types registry [IANA-MEDIA-TYPES], e.g. video/VP8.

4.3. RTCEncodedVideoFrame interface

dictionary RTCEncodedVideoFrameOptions {
    RTCEncodedVideoFrameMetadata metadata;
};

// New interfaces to define encoded video and audio frames. Will eventually
// re-use or extend the equivalent defined in WebCodecs.
[Exposed=(Window,DedicatedWorker), Serializable]
interface RTCEncodedVideoFrame {
    constructor(RTCEncodedVideoFrame originalFrame, optional RTCEncodedVideoFrameOptions options = {});
    readonly attribute RTCEncodedVideoFrameType type;
    attribute ArrayBuffer data;
    RTCEncodedVideoFrameMetadata getMetadata();
};

4.3.1. Constructor

constructor()

Creates a new RTCEncodedVideoFrame from the given originalFrame and options["metadata"]. The newly created frame is completely independent of originalFrame, with its [[data]] being a deep copy of originalFrame.[[data]]. The new frame’s [[metadata]] is a deep copy of originalFrame.[[metadata]], with fields replaced with deep copies of the fields present in options["metadata"].

When called, run the following steps:

  1. Set this.[[type]] to originalFrame.[[type]].

  2. Let this.[[data]] be the result of [CloneArrayBuffer](originalFrame.[[data]], 0, originalFrame.[[data]].[[ArrayBufferByteLength]]).

  3. Let [[metadata]] represent the metadata associated with this newly constructed frame.

    1. For each {[[key]],[[value]]} pair of originalFrame.[[getMetadata()]], set [[metadata]].[[key]] to a deep copy of [[value]].

    2. For each {[[key]],[[value]]} pair of options["metadata"], set [[metadata]].[[key]] to a deep copy of [[value]].
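
For instance, a transform that forwards frames could clone an incoming frame while overriding part of its metadata. This is a non-normative sketch; the synchronizationSource override is hypothetical, and which metadata fields a user agent honours when sending such a frame may vary:

// Given an incoming frame, build an independent copy with new metadata.
const forwardedFrame = new RTCEncodedVideoFrame(frame, {
  metadata: {
    // Hypothetical override: remap the synchronization source.
    synchronizationSource: 12345,
  },
});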

4.3.2. Members

type, of type RTCEncodedVideoFrameType, readonly

The type attribute allows the application to determine when a key frame is being sent or received.

data, of type ArrayBuffer

The encoded frame data. The format of the data depends on the video codec that is used to encode/decode the frame, which can be determined by looking at the mimeType. For SVC, each spatial layer is transformed separately.

Since packetizers may drop certain elements, e.g. AV1 temporal delimiter OBUs, the input to a receive-side transform may differ from the output of a send-side transform.

The following table gives a number of examples:

mimeType Data format
video/VP8 The data starts with the "uncompressed data chunk" defined in section 9.1 of [RFC6386] and is followed by the rest of the frame data. The VP8 payload descriptor is not accessible.
video/VP9 The data is a frame as described in Section 6 of [VP9]. The VP9 payload descriptor is not accessible.
video/H264 The data is a series of NAL units in Annex B format, as defined in [ITU-T-REC-H.264] Annex B.
video/AV1 The data is a series of OBUs compliant to the low-overhead bitstream format as described in Section 5 of [AV1]. The AV1 aggregation header is not accessible.
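
For example, a TransformStream piped between an RTCRtpScriptTransformer's readable and writable in a worker (see § 4.6) can inspect or rewrite these bytes before re-enqueuing the frame. A non-normative sketch follows; for video/VP8, the lowest bit of the first byte of the uncompressed data chunk is 0 for key frames per [RFC6386]:

const inspector = new TransformStream({
  transform(frame, controller) {
    const bytes = new Uint8Array(frame.data);
    if (frame.getMetadata().mimeType === "video/VP8") {
      const isKeyFrame = (bytes[0] & 0x01) === 0;
      // ... act on isKeyFrame, or rewrite bytes here ...
    }
    frame.data = bytes.buffer; // reassign the (possibly modified) buffer
    controller.enqueue(frame);
  },
});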

4.3.3. Methods

getMetadata()

Returns the metadata associated with the frame.

4.3.4. Serialization

RTCEncodedVideoFrame objects are serializable objects [HTML]. Their serialization steps, given value, serialized, and forStorage, are:

  1. If forStorage is true, then throw a DataCloneError.

  2. Set serialized.[[type]] to the value of value.type.

  3. Set serialized.[[metadata]] to an internal representation of value’s metadata.

  4. Set serialized.[[data]] to value.[[data]].

Their deserialization steps, given serialized, value and realm, are:

  1. Set value.type to serialized.[[type]].

  2. Set value’s metadata to the platform object representation of serialized.[[metadata]].

  3. Set value.[[data]] to serialized.[[data]].

The internal form of a serialized RTCEncodedVideoFrame is not observable; it is defined chiefly so that it can be used with frame cloning in the writeEncodedData algorithm and in the structuredClone() operation. An implementation is therefore free to choose whatever method works best.

4.4. RTCEncodedAudioFrameMetadata dictionary

dictionary RTCEncodedAudioFrameMetadata {
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
    short sequenceNumber;
    unsigned long rtpTimestamp;
    DOMString mimeType;
};

4.4.1. Members

synchronizationSource, of type unsigned long

The synchronization source (ssrc) identifier is an unsigned integer value per [RFC3550] used to identify the stream of RTP packets that the encoded frame object is describing.

payloadType, of type octet

The payload type is an unsigned integer value in the range from 0 to 127 per [RFC3550] that is used to describe the format of the RTP payload.

contributingSources, of type sequence<unsigned long>

The list of contributing sources (csrc list) as defined in [RFC3550].

sequenceNumber, of type short

The RTP sequence number as defined in [RFC3550]. Only exists for incoming audio frames.

Comparing two sequence numbers requires serial number arithmetic described in [RFC1982].

rtpTimestamp, of type unsigned long

The RTP timestamp identifier is an unsigned integer value per [RFC3550] that reflects the sampling instant of the first octet in the RTP data packet.

mimeType, of type DOMString

The codec MIME media type/subtype defined in the IANA media types registry [IANA-MEDIA-TYPES], e.g. audio/opus.

4.5. RTCEncodedAudioFrame interface

dictionary RTCEncodedAudioFrameOptions {
    RTCEncodedAudioFrameMetadata metadata;
};

[Exposed=(Window,DedicatedWorker), Serializable]
interface RTCEncodedAudioFrame {
    constructor(RTCEncodedAudioFrame originalFrame, optional RTCEncodedAudioFrameOptions options = {});
    attribute ArrayBuffer data;
    RTCEncodedAudioFrameMetadata getMetadata();
};

4.5.1. Constructor

constructor()

Creates a new RTCEncodedAudioFrame from the given originalFrame and options["metadata"]. The newly created frame is completely independent of originalFrame, with its [[data]] being a deep copy of originalFrame.[[data]]. The new frame’s [[metadata]] is a deep copy of originalFrame.[[metadata]], with fields replaced with deep copies of the fields present in options["metadata"].

When called, run the following steps:

  1. Let this.[[data]] be the result of [CloneArrayBuffer](originalFrame.[[data]], 0, originalFrame.[[data]].[[ArrayBufferByteLength]]).

  2. Let [[metadata]] represent the metadata associated with this newly constructed frame.

    1. For each {[[key]],[[value]]} pair of originalFrame.[[getMetadata()]], set [[metadata]].[[key]] to a deep copy of [[value]].

    2. For each {[[key]],[[value]]} pair of options["metadata"], set [[metadata]].[[key]] to a deep copy of [[value]].

4.5.2. Members

data, of type ArrayBuffer

The encoded frame data. The format of the data depends on the audio codec that is used to encode/decode the frame, which can be determined by looking at the mimeType. The following table gives a number of examples:

mimeType Data format
audio/opus The data is Opus packets, as described in section 3 of [RFC6716].
audio/PCMU The data is a sequence of bytes of arbitrary length, where each byte is a u-law encoded PCM sample as defined by Table 2a and 2b in [ITU-G.711].
audio/PCMA The data is a sequence of bytes of arbitrary length, where each byte is an A-law encoded PCM sample as defined by Tables 1a and 1b in [ITU-G.711].
audio/G722 The data is G.722 audio as described in [ITU-G.722].
audio/RED The data is Redundant Audio Data as described in section 3 of [RFC2198].
audio/CN The data is Comfort Noise as described in section 3 of [RFC3389].

4.5.3. Methods

getMetadata()

Returns the metadata associated with the frame.

4.5.4. Serialization

RTCEncodedAudioFrame objects are serializable objects [HTML]. Their serialization steps, given value, serialized, and forStorage, are:

  1. If forStorage is true, then throw a DataCloneError.

  2. Set serialized.[[metadata]] to an internal representation of value’s metadata.

  3. Set serialized.[[data]] to value.[[data]].

Their deserialization steps, given serialized, value and realm, are:

  1. Set value’s metadata to the platform object representation of serialized.[[metadata]].

  2. Set value.[[data]] to serialized.[[data]].

4.6. Interfaces

[Exposed=DedicatedWorker]
interface RTCTransformEvent : Event {
    readonly attribute RTCRtpScriptTransformer transformer;
};

partial interface DedicatedWorkerGlobalScope {
    attribute EventHandler onrtctransform;
};

[Exposed=DedicatedWorker]
interface RTCRtpScriptTransformer : EventTarget {
    // Attributes and methods related to the transformer source
    readonly attribute ReadableStream readable;
    Promise<unsigned long long> generateKeyFrame(optional DOMString rid);
    Promise<undefined> sendKeyFrameRequest();
    // Attributes and methods related to the transformer sink
    readonly attribute WritableStream writable;
    attribute EventHandler onkeyframerequest;
    // Attributes for configuring the Javascript code
    readonly attribute any options;
};

[Exposed=Window]
interface RTCRtpScriptTransform {
    constructor(Worker worker, optional any options, optional sequence<object> transfer);
};

[Exposed=DedicatedWorker]
interface KeyFrameRequestEvent : Event {
  constructor(DOMString type, optional DOMString rid);
  readonly attribute DOMString? rid;
};
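
Putting these interfaces together, a typical setup splits across the main thread and the worker. This is a minimal, non-normative sketch; pc, track, stream, and the worker script name are assumptions:

// Main thread
const worker = new Worker("transform-worker.js");
const sender = pc.addTrack(track, stream);
sender.transform = new RTCRtpScriptTransform(worker, { name: "senderTransform" });

// transform-worker.js
onrtctransform = (event) => {
  const transformer = event.transformer;
  // transformer.options carries the { name: "senderTransform" } object.
  transformer.readable
    .pipeThrough(new TransformStream({
      transform(frame, controller) {
        // Process frame.data here, then pass the frame along.
        controller.enqueue(frame);
      },
    }))
    .pipeTo(transformer.writable);
};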

4.7. Operations

The new RTCRtpScriptTransform(worker, options, transfer) constructor steps are:

  1. Set t1 to an identity transform stream.

  2. Set t2 to an identity transform stream.

  3. Set this.[[writable]] to t1.[[writable]].

  4. Set this.[[readable]] to t2.[[readable]].

  5. Let serializedOptions be the result of StructuredSerializeWithTransfer(options, transfer).

  6. Let serializedReadable be the result of StructuredSerializeWithTransfer(t1.[[readable]], « t1.[[readable]] »).

  7. Let serializedWritable be the result of StructuredSerializeWithTransfer(t2.[[writable]], « t2.[[writable]] »).

  8. Queue a task on the DOM manipulation task source of worker’s global scope to run the following steps:

    1. Let transformerOptions be the result of StructuredDeserialize(serializedOptions, the current Realm).

    2. Let readable be the result of StructuredDeserialize(serializedReadable, the current Realm).

    3. Let writable be the result of StructuredDeserialize(serializedWritable, the current Realm).

    4. Let transformer be a new RTCRtpScriptTransformer.

    5. Set transformer.[[options]] to transformerOptions.

    6. Set transformer.[[readable]] to readable.

    7. Set transformer.[[writable]] to writable.

    8. Fire an event named rtctransform using RTCTransformEvent with transformer set to transformer on worker’s global scope.

// FIXME: Describe error handling (worker closing flag true at RTCRtpScriptTransform creation time. And worker being terminated while transform is processing data).

Each RTCRtpScriptTransform has the following set of association steps, given rtcObject:

  1. Let transform be the RTCRtpScriptTransform object that owns the association steps.

  2. Let encoder be rtcObject’s encoder if rtcObject is a RTCRtpSender or undefined otherwise.

  3. Let depacketizer be rtcObject’s depacketizer if rtcObject is a RTCRtpReceiver or undefined otherwise.

  4. Queue a task on the DOM manipulation task source of worker’s global scope to run the following steps:

    1. Let transformer be the RTCRtpScriptTransformer object associated to transform.

    2. Set transformer.[[encoder]] to encoder.

    3. Set transformer.[[depacketizer]] to depacketizer.

The generateKeyFrame(rid) method steps are:

  1. Let promise be a new promise.

  2. Run the generate key frame algorithm with promise, this.[[encoder]] and rid.

  3. Return promise.

The sendKeyFrameRequest() method steps are:

  1. Let promise be a new promise.

  2. Run the send request key frame algorithm with promise and this.[[depacketizer]].

  3. Return promise.
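
In the worker, these methods might be used as follows. This is a non-normative sketch run inside an async function; "hi" is a hypothetical RID value, and the transformer is assumed to be attached to a video sender or receiver as appropriate:

// Sender side: force a new key frame for the "hi" simulcast layer and
// learn the timestamp of the frame that satisfied the request.
const timestamp = await transformer.generateKeyFrame("hi");

// Receiver side: ask the remote peer's encoder for a new key frame.
await transformer.sendKeyFrameRequest();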

4.8. Attributes

A RTCRtpScriptTransformer has private slots called [[depacketizer]], [[encoder]], [[options]], [[readable]] and [[writable]]. In addition, a RTCRtpScriptTransformer is always associated with its parent RTCRtpScriptTransform transform. This allows algorithms to go from an RTCRtpScriptTransformer object to its RTCRtpScriptTransform parent and vice versa.

The options getter steps are:

  1. Return this.[[options]].

The readable getter steps are:

  1. Return this.[[readable]].

The writable getter steps are:

  1. Return this.[[writable]].

The onkeyframerequest EventHandler has type keyframerequest.

4.9. Events

The following event fires on an RTCRtpScriptTransformer: keyframerequest, which uses the KeyFrameRequestEvent interface.

The steps that generate an event of type KeyFrameRequestEvent are as follows:

Given a RTCRtpScriptTransformer transform:

When transform’s [[encoder]] receives a keyframe request, for instance from an incoming RTCP Picture Loss Indication (PLI) or Full Intra Refresh (FIR), queue a task to perform the following steps:

  1. Let rid be the RID of the appropriate layer, or undefined if the request is not for a specific layer.

  2. Fire an event named keyframerequest at transform using KeyFrameRequestEvent with its cancelable attribute initialized to true, and with rid set to rid.

  3. If the event’s canceled flag is true, abort these steps.

  4. Run the generate key frame algorithm with a new promise, transform.[[encoder]] and rid.
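
An application can intercept these events, for instance to rate-limit key frame generation. A non-normative sketch; shouldThrottle is a hypothetical application helper:

transformer.onkeyframerequest = (event) => {
  // The event is cancelable; preventDefault() suppresses the key frame
  // that would otherwise be generated for event.rid.
  if (shouldThrottle(event.rid)) {
    event.preventDefault();
  }
};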

4.10. KeyFrame Algorithms

The generate key frame algorithm, given promise, encoder and rid, is defined by running these steps:

  1. If encoder is undefined, reject promise with an InvalidStateError and abort these steps.

  2. If encoder is not processing video frames, reject promise with an InvalidStateError and abort these steps.

  3. If rid is defined, but does not conform to the grammar requirements specified in Section 10 of [RFC8851], then reject promise with a TypeError and abort these steps.

  4. In parallel, run the following steps:

    1. Gather a list of video encoders, named videoEncoders, from encoder, ordered according to negotiated RIDs if any.

    2. If rid is defined, remove from videoEncoders any video encoder that does not match rid.

    3. If rid is undefined, remove from videoEncoders all video encoders except the first one.

    4. If videoEncoders is empty, reject promise with a NotFoundError and abort these steps. videoEncoders is expected to be empty if the corresponding RTCRtpSender is not active, or the corresponding RTCRtpSender’s track is ended.

    5. Let videoEncoder be the first encoder in videoEncoders.

    6. If rid is undefined, set rid to the RID value corresponding to videoEncoder.

    7. Create a pending key frame task called task, with task.[[rid]] set to rid and task.[[promise]] set to promise.

    8. If encoder.[[pendingKeyFrameTasks]] is undefined, initialize encoder.[[pendingKeyFrameTasks]] to an empty set.

    9. Let shouldTriggerKeyFrame be false if encoder.[[pendingKeyFrameTasks]] contains a task whose [[rid]] value is equal to rid, and true otherwise.

    10. Add task to encoder.[[pendingKeyFrameTasks]].

    11. If shouldTriggerKeyFrame is true, instruct videoEncoder to generate a key frame for the next provided video frame.

For any RTCRtpScriptTransformer named transformer, the following steps are run just before any frame is enqueued in transformer.[[readable]]:

  1. Let encoder be transformer.[[encoder]].

  2. If encoder or encoder.[[pendingKeyFrameTasks]] is undefined, abort these steps.

  3. If frame is not a video "key" frame, abort these steps.

  4. For each task in encoder.[[pendingKeyFrameTasks]], run the following steps:

    1. If frame was generated by a video encoder identified by task.[[rid]], run the following steps:

      1. Remove task from encoder.[[pendingKeyFrameTasks]].

      2. Resolve task.[[promise]] with frame’s timestamp.

By resolving the promises just before enqueuing the corresponding key frame in a RTCRtpScriptTransformer's readable, the resolution callbacks of the promises are always executed just before the corresponding key frame is exposed. If the promise is associated with several rid values, it will be resolved when the first key frame corresponding to one of the rid values is enqueued.

The send request key frame algorithm, given promise and depacketizer, is defined by running these steps:

  1. If depacketizer is undefined, reject promise with an InvalidStateError and abort these steps.

  2. If depacketizer is not processing video packets, reject promise with an InvalidStateError and abort these steps.

  3. In parallel, run the following steps:

    1. If sending a Full Intra Request (FIR) by depacketizer’s receiver is not deemed appropriate, resolve promise with undefined and abort these steps. Section 4.3.1 of [RFC5104] provides guidelines on how and when it is appropriate to send a Full Intra Request.

    2. Generate a Full Intra Request (FIR) packet as defined in section 4.3.1 of [RFC5104] and send it through depacketizer’s receiver.

    3. Resolve promise with undefined.

5. RTCRtpSender extension

An additional API on RTCRtpSender is added to complement the key frame generation capability added to RTCRtpScriptTransformer.

partial interface RTCRtpSender {
    Promise<undefined> generateKeyFrame(optional sequence<DOMString> rids);
};

5.1. Extension operation

The generateKeyFrame(rids) method steps are:

  1. Let promise be a new promise.

  2. In parallel, run the generate key frame algorithm with promise, this’s encoder and rids.

  3. Return promise.
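
For example (a non-normative sketch run inside an async function; "low" and "hi" are hypothetical RID values negotiated for simulcast):

// Ask the encoders for the "low" and "hi" layers to produce key frames.
await sender.generateKeyFrame(["low", "hi"]);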

6. Privacy and security considerations

This API gives JavaScript access to the content of media streams. This is also available from other sources, such as Canvas and WebAudio.

However, streams that are isolated (as specified in [WEBRTC-IDENTITY]) or tainted with another origin, cannot be accessed using this API, since that would break the isolation rule.

The API will allow access to some aspects of timing information that are otherwise unavailable, which adds some fingerprinting surface.

The API will give access to encoded media, which means that the JS application will have full control over what’s delivered to internal components like the packetizer or the decoder. This may require additional care with auditing how data is handled inside these components.

For instance, packetizers may expect to see data only from trusted encoders, and may not be audited for reception of data from untrusted sources.

7. Examples

See the explainer document.

Conformance

Document conventions

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

Conformant Algorithms

Requirements phrased in the imperative as part of algorithms (such as "strip any leading space characters" or "return false and abort these steps") are to be interpreted with the meaning of the key word ("must", "should", "may", etc) used in introducing the algorithm.

Conformance requirements phrased as algorithms or specific steps can be implemented in any manner, so long as the end result is equivalent. In particular, the algorithms defined in this specification are intended to be easy to understand and are not intended to be performant. Implementers are encouraged to optimize.

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[IANA-MEDIA-TYPES]
Media Types. URL: https://www.iana.org/assignments/media-types/
[MEDIACAPTURE-STREAMS]
Cullen Jennings; et al. Media Capture and Streams. URL: https://w3c.github.io/mediacapture-main/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://datatracker.ietf.org/doc/html/rfc2119
[RFC8851]
A.B. Roach, Ed.. RTP Payload Format Restrictions. January 2021. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc8851
[STREAMS]
Adam Rice; et al. Streams Standard. Living Standard. URL: https://streams.spec.whatwg.org/
[WEBCODECS]
Paul Adenot; Bernard Aboba; Eugene Zemtsov. WebCodecs. URL: https://w3c.github.io/webcodecs/
[WebCryptoAPI]
Mark Watson. Web Cryptography API. URL: https://w3c.github.io/webcrypto/
[WEBIDL]
Edgar Chen; Timothy Gu. Web IDL Standard. Living Standard. URL: https://webidl.spec.whatwg.org/
[WEBRTC]
Cullen Jennings; et al. WebRTC: Real-Time Communication in Browsers. URL: https://w3c.github.io/webrtc-pc/

Informative References

[AV1]
Peter de Rivaz; Jack Haughton. AV1 Bitstream & Decoding Process Specification. 8 January 2019. Standard. URL: https://aomediacodec.github.io/av1-spec/av1-spec.pdf
[AV1-RTP-SPEC]
RTP Payload Format For AV1. URL: https://aomediacodec.github.io/av1-rtp-spec/
[CloneArrayBuffer]
CloneArrayBuffer. URL: https://tc39.es/ecma262/#sec-clonearraybuffer
[ITU-G.711]
G.711 : Pulse code modulation (PCM) of voice frequencies. URL: https://www.itu.int/rec/T-REC-G.711/
[ITU-G.722]
G.722 : 7 kHz audio-coding within 64 kbit/s. URL: https://www.itu.int/rec/T-REC-G.722/
[ITU-T-REC-H.264]
H.264 : Advanced video coding for generic audiovisual services. URL: https://www.itu.int/rec/T-REC-H.264
[RFC1982]
R. Elz; R. Bush. Serial Number Arithmetic. August 1996. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc1982
[RFC2198]
C. Perkins; et al. RTP Payload for Redundant Audio Data. September 1997. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc2198
[RFC3389]
R. Zopf. Real-time Transport Protocol (RTP) Payload for Comfort Noise (CN). September 2002. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc3389
[RFC3550]
H. Schulzrinne; et al. RTP: A Transport Protocol for Real-Time Applications. July 2003. Internet Standard. URL: https://www.rfc-editor.org/rfc/rfc3550
[RFC5104]
S. Wenger; et al. Codec Control Messages in the RTP Audio-Visual Profile with Feedback (AVPF). February 2008. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc5104
[RFC6386]
J. Bankoski; et al. VP8 Data Format and Decoding Guide. November 2011. Informational. URL: https://www.rfc-editor.org/rfc/rfc6386
[RFC6716]
JM. Valin; K. Vos; T. Terriberry. Definition of the Opus Audio Codec. September 2012. Proposed Standard. URL: https://www.rfc-editor.org/rfc/rfc6716
[SFrame]
Secure Frame (SFrame). URL: https://www.ietf.org/archive/id/draft-ietf-sframe-enc-04.html
[VP9]
VP9 Bitstream & Decoding Process Specification. URL: https://storage.googleapis.com/downloads.webmproject.org/docs/vp9/vp9-bitstream-specification-v0.6-20160331-draft.pdf
[WEBRTC-IDENTITY]
Cullen Jennings; Martin Thomson. Identity for WebRTC 1.0. URL: https://w3c.github.io/webrtc-identity/
[WEBRTC-NV-USE-CASES]
Bernard Aboba. WebRTC Extended Use Cases. URL: https://w3c.github.io/webrtc-nv-use-cases/

IDL Index

typedef (SFrameTransform or RTCRtpScriptTransform) RTCRtpTransform;

// New methods for RTCRtpSender and RTCRtpReceiver
partial interface RTCRtpSender {
    attribute RTCRtpTransform? transform;
};

partial interface RTCRtpReceiver {
    attribute RTCRtpTransform? transform;
};

enum SFrameTransformRole {
    "encrypt",
    "decrypt"
};

dictionary SFrameTransformOptions {
    SFrameTransformRole role = "encrypt";
};

typedef [EnforceRange] unsigned long long SmallCryptoKeyID;
typedef (SmallCryptoKeyID or bigint) CryptoKeyID;

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransform : EventTarget {
    constructor(optional SFrameTransformOptions options = {});
    Promise<undefined> setEncryptionKey(CryptoKey key, optional CryptoKeyID keyID);
    attribute EventHandler onerror;
};
SFrameTransform includes GenericTransformStream;

enum SFrameTransformErrorEventType {
    "authentication",
    "keyID",
    "syntax"
};

[Exposed=(Window,DedicatedWorker)]
interface SFrameTransformErrorEvent : Event {
    constructor(DOMString type, SFrameTransformErrorEventInit eventInitDict);

    readonly attribute SFrameTransformErrorEventType errorType;
    readonly attribute CryptoKeyID? keyID;
    readonly attribute any frame;
};

dictionary SFrameTransformErrorEventInit : EventInit {
    required SFrameTransformErrorEventType errorType;
    required any frame;
    CryptoKeyID? keyID;
};

// New enum for video frame types. Will eventually re-use the equivalent defined
// by WebCodecs.
enum RTCEncodedVideoFrameType {
    "empty",
    "key",
    "delta",
};

dictionary RTCEncodedVideoFrameMetadata {
    unsigned long long frameId;
    sequence<unsigned long long> dependencies;
    unsigned short width;
    unsigned short height;
    unsigned long spatialIndex;
    unsigned long temporalIndex;
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
    long long timestamp;    // microseconds
    unsigned long rtpTimestamp;
    DOMString mimeType;
};

dictionary RTCEncodedVideoFrameOptions {
    RTCEncodedVideoFrameMetadata metadata;
};

// New interfaces to define encoded video and audio frames. Will eventually
// re-use or extend the equivalent defined in WebCodecs.
[Exposed=(Window,DedicatedWorker), Serializable]
interface RTCEncodedVideoFrame {
    constructor(RTCEncodedVideoFrame originalFrame, optional RTCEncodedVideoFrameOptions options = {});
    readonly attribute RTCEncodedVideoFrameType type;
    attribute ArrayBuffer data;
    RTCEncodedVideoFrameMetadata getMetadata();
};

dictionary RTCEncodedAudioFrameMetadata {
    unsigned long synchronizationSource;
    octet payloadType;
    sequence<unsigned long> contributingSources;
    short sequenceNumber;
    unsigned long rtpTimestamp;
    DOMString mimeType;
};

dictionary RTCEncodedAudioFrameOptions {
    RTCEncodedAudioFrameMetadata metadata;
};

[Exposed=(Window,DedicatedWorker), Serializable]
interface RTCEncodedAudioFrame {
    constructor(RTCEncodedAudioFrame originalFrame, optional RTCEncodedAudioFrameOptions options = {});
    attribute ArrayBuffer data;
    RTCEncodedAudioFrameMetadata getMetadata();
};

[Exposed=DedicatedWorker]
interface RTCTransformEvent : Event {
    readonly attribute RTCRtpScriptTransformer transformer;
};

partial interface DedicatedWorkerGlobalScope {
    attribute EventHandler onrtctransform;
};

[Exposed=DedicatedWorker]
interface RTCRtpScriptTransformer : EventTarget {
    // Attributes and methods related to the transformer source
    readonly attribute ReadableStream readable;
    Promise<unsigned long long> generateKeyFrame(optional DOMString rid);
    Promise<undefined> sendKeyFrameRequest();
    // Attributes and methods related to the transformer sink
    readonly attribute WritableStream writable;
    attribute EventHandler onkeyframerequest;
    // Attributes for configuring the Javascript code
    readonly attribute any options;
};

[Exposed=Window]
interface RTCRtpScriptTransform {
    constructor(Worker worker, optional any options, optional sequence<object> transfer);
};

[Exposed=DedicatedWorker]
interface KeyFrameRequestEvent : Event {
  constructor(DOMString type, optional DOMString rid);
  readonly attribute DOMString? rid;
};

partial interface RTCRtpSender {
    Promise<undefined> generateKeyFrame(optional sequence<DOMString> rids);
};