- trackIdentifier of type DOMString
  Represents the id property of the track.
- remoteSource of type boolean
  True if the source is remote, for instance if it is sourced from another host
  via an RTCPeerConnection. False otherwise.
- ended of type boolean
  Reflects the "ended" state of the track.
- detached of type boolean
  True if the track has been detached from the PeerConnection object. If true,
  all stats reflect their values at the time when the track was detached.
- kind of type DOMString
  Either "audio" or "video". This reflects the "kind" attribute of the
  MediaStreamTrack.
- estimatedPlayoutTimestamp of type DOMHighResTimeStamp
  Only valid for remote sources. This is the estimated playout time of this
  track. The playout time is the NTP timestamp of the last playable audio
  sample or video frame that has a known timestamp (from an RTCP SR packet
  mapping RTP timestamps to NTP timestamps), extrapolated with the time elapsed
  since it was ready to be played out. This is the "current time" of the track
  in NTP clock time of the sender and can be present even if there is no audio
  or video currently playing. This can be useful for estimating how much audio
  and video are out of sync for two tracks from the same source:
  audioTrackStats.estimatedPlayoutTimestamp -
  videoTrackStats.estimatedPlayoutTimestamp.
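  As a rough sketch (assuming these track stats surface with type "track" in
  the getStats() report, and that exactly one remote audio and one remote video
  track are present), the skew could be read from a single report:

      async function estimateAvSyncSkewMs(pc) {
        const report = await pc.getStats();
        let audio, video;
        for (const stats of report.values()) {
          if (stats.type !== 'track' || !stats.remoteSource) continue;
          if (stats.kind === 'audio') audio = stats;
          else if (stats.kind === 'video') video = stats;
        }
        if (!audio || !video) return null;
        // Positive result: audio playout is ahead of video, in milliseconds.
        return audio.estimatedPlayoutTimestamp - video.estimatedPlayoutTimestamp;
      }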
- frameWidth of type unsigned long
  Only valid for video MediaStreamTracks and represents the width of the last
  processed video frame for this track. Before the first frame is processed,
  this attribute is missing.
- frameHeight of type unsigned long
  Only valid for video MediaStreamTracks and represents the height of the last
  processed video frame for this track. Before the first frame is processed,
  this attribute is missing.
- framesPerSecond of type double
  Only valid for video. It represents the nominal FPS value before the
  degradation preference is applied. It is the number of complete frames in the
  last second. For sending tracks it is the current captured FPS, and for
  receiving tracks it is the current decoding framerate.
- framesCaptured of type unsigned long
  Only valid for local video. It represents the total number of frames captured
  for this MediaStreamTrack, before encoding. For example, if this track
  represents a camera, this is the number of frames produced by the camera for
  this track, whose framerate could vary due to hardware limitations or
  environmental factors such as lighting conditions.
- framesSent of type unsigned long
  Only valid for video. It represents the total number of frames sent for this
  MediaStreamTrack.
- framesReceived of type unsigned long
  Only valid for video and when remoteSource is set to true. It represents the
  total number of frames received for this MediaStreamTrack.
- framesDecoded of type unsigned long
  Only valid for video and when remoteSource is set to true. It represents the
  total number of frames correctly decoded for this MediaStreamTrack,
  independent of which SSRC it was received from. It is defined as
  totalVideoFrames in Section 5 of [MEDIA-SOURCE].
- framesDropped of type unsigned long
  Only valid for video. It is the total number of frames dropped pre-decode or
  dropped because the frame missed its display deadline for this
  MediaStreamTrack. It is the same definition as droppedVideoFrames in
  Section 5 of [MEDIA-SOURCE].
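  As a sketch only (it assumes dropped frames are counted among framesReceived,
  which this definition does not guarantee), a drop ratio for a remote video
  track could be derived as:

      async function frameDropRatio(pc) {
        const report = await pc.getStats();
        for (const stats of report.values()) {
          if (stats.type === 'track' && stats.remoteSource &&
              stats.kind === 'video' && stats.framesReceived > 0) {
            // Fraction of received frames dropped pre-decode or past deadline.
            return stats.framesDropped / stats.framesReceived;
          }
        }
        return null;
      }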
- framesCorrupted of type unsigned long
  Only valid for video. It is the total number of corrupted frames that have
  been detected for this MediaStreamTrack. It is the same definition as
  corruptedVideoFrames in Section 5 of [MEDIA-SOURCE].
- partialFramesLost of type unsigned long
  Only valid for video. partialFramesLost is the cumulative number of partial
  frames lost, as defined in Appendix A (j) of [RFC7004].
- fullFramesLost of type unsigned long
  Only valid for video. fullFramesLost is the cumulative number of full frames
  lost, as defined in Appendix A (i) of [RFC7004].
- audioLevel of type double
  Only valid for audio. The value is between 0..1 (linear), where 1.0
  represents 0 dBov, 0 represents silence, and 0.5 represents approximately a
  6 dB SPL change in the sound pressure level from 0 dBov. The "audio level"
  value defined in [RFC6464] and used in the
  RTCRtpSynchronizationSource.audioLevel of [WEBRTC] (defined as 0..127, where
  0 represents 0 dBov, 126 represents -126 dBov and 127 represents silence) is
  obtained by the calculation given in appendix A of [RFC6465]: informally,
  level = -round(log10(audioLevel) * 20), with audioLevel 0.0 and computed
  values above 127 both mapped to 127.
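  The informal formula above could be sketched as follows (input assumed to be
  in the 0..1 range of this attribute):

      function toRfc6464Level(audioLevel) {
        if (audioLevel <= 0.0) return 127;   // silence maps to 127
        const level = -Math.round(Math.log10(audioLevel) * 20);
        return Math.min(level, 127);         // values above 127 map to 127
      }
      // toRfc6464Level(1.0) === 0 (0 dBov); toRfc6464Level(0.5) === 6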
- totalAudioEnergy of type double
  Only valid for audio. This value MUST be computed as follows: for each audio
  sample sent/received for this object (and counted by totalSamplesSent or
  totalSamplesReceived), add the sample's value divided by the
  highest-intensity encodable value, squared and then multiplied by the
  duration of the sample in seconds. In other words,
  duration * Math.pow(energy/maxEnergy, 2).
  This can be used to obtain a root mean square (RMS) value that uses the same
  units as audioLevel, as defined in [RFC6464]. It can be converted to these
  units using the formula Math.sqrt(totalAudioEnergy/totalSamplesDuration).
  This calculation can also be performed using the differences between the
  values of two different getStats() calls, in order to compute the average
  audio level over any desired time interval. In other words, do
  Math.sqrt((energy2 - energy1)/(duration2 - duration1)).
  For example, if a 10ms packet of audio is received with an RMS of 0.5 (out of
  1.0), this should add 0.5 * 0.5 * 0.01 = 0.0025 to totalAudioEnergy. If
  another 10ms packet with an RMS of 0.1 is received, this should similarly add
  0.0001 to totalAudioEnergy. Then,
  Math.sqrt(totalAudioEnergy/totalSamplesDuration) becomes
  Math.sqrt(0.0026/0.02) = 0.36, which is the same value that would be obtained
  by doing an RMS calculation over the contiguous 20ms segment of audio.
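  The interval calculation above could be sketched as follows (the helper picks
  the first audio track stats found in the report, which is an assumption about
  the application):

      async function sampleAudioStats(pc) {
        const report = await pc.getStats();
        for (const stats of report.values()) {
          if (stats.type === 'track' && stats.kind === 'audio') {
            return { energy: stats.totalAudioEnergy,
                     duration: stats.totalSamplesDuration };
          }
        }
        return null;
      }

      async function averageAudioLevel(pc, intervalMs) {
        const a = await sampleAudioStats(pc);
        await new Promise(resolve => setTimeout(resolve, intervalMs));
        const b = await sampleAudioStats(pc);
        if (!a || !b || b.duration === a.duration) return null;
        // RMS over the interval, in the same units as audioLevel.
        return Math.sqrt((b.energy - a.energy) / (b.duration - a.duration));
      }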
- voiceActivityFlag of type boolean
  Only valid for audio. Whether the last RTP packet sent or played out by this
  track contained voice activity, based on the presence of the V bit in the
  extension header, as defined in [RFC6464]. This value indicates the voice
  activity in the latest RTP packet played out from a given SSRC, and is
  defined in the RTCRtpSynchronizationSource.voiceActivityFlag of [WEBRTC].
- echoReturnLoss of type double
  Only present on audio tracks sourced from a microphone where echo
  cancellation is applied. Calculated in decibels, as defined in [ECHO] (2012)
  section 3.14.
- echoReturnLossEnhancement of type double
  Only present on audio tracks sourced from a microphone where echo
  cancellation is applied. Calculated in decibels, as defined in [ECHO] (2012)
  section 3.15.
- totalSamplesSent of type unsigned long long
  Only present for outbound audio tracks. The total number of audio samples
  that have been sent for this track.
- totalSamplesReceived of type unsigned long long
  Only present for inbound audio tracks. The total number of audio samples that
  have been received for this track. This includes concealedSamples.
- totalSamplesDuration of type double
  Only present for audio tracks. Represents the total duration in seconds of
  all samples that have been sent or received (and thus counted by
  totalSamplesSent or totalSamplesReceived). Can be used with totalAudioEnergy
  to compute an average audio level over different intervals.
- concealedSamples of type unsigned long long
  Only present for inbound audio tracks. The total number of inbound audio
  samples that are concealed samples. A concealed sample is a sample that is
  based on data that was synthesized to conceal packet loss and does not
  represent incoming data.
- concealmentEvents of type unsigned long long
  Only present for inbound audio tracks. The number of concealment events. This
  counter increases every time a concealed sample is synthesized after a
  non-concealed sample. That is, multiple consecutive concealed samples will
  increase the concealedSamples count multiple times but count as a single
  concealment event.
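  Since concealedSamples is included in totalSamplesReceived, the two counters
  together support derived metrics such as the following sketch (the choice of
  metrics is illustrative, not part of this definition):

      async function concealmentMetrics(pc) {
        const report = await pc.getStats();
        for (const stats of report.values()) {
          if (stats.type === 'track' && stats.remoteSource &&
              stats.kind === 'audio' && stats.totalSamplesReceived > 0) {
            return {
              // Fraction of received samples that were synthesized.
              concealedRatio: stats.concealedSamples / stats.totalSamplesReceived,
              // Average run length of consecutive concealed samples.
              samplesPerEvent: stats.concealmentEvents > 0 ?
                  stats.concealedSamples / stats.concealmentEvents : 0
            };
          }
        }
        return null;
      }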
- jitterBufferDelay of type double
  It is the total time each audio sample or video frame takes from the time it
  is received to the time it is rendered. The delay is measured from the time
  the first packet belonging to an audio/video frame enters the jitter buffer
  to the time the complete frame is sent for rendering after decoding. The
  average jitter buffer delay can be calculated by dividing jitterBufferDelay
  by framesDecoded (for video) or totalSamplesReceived (for audio).
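  The averaging described above could be sketched as follows (the result is in
  the same units as jitterBufferDelay, per decoded frame or received sample):

      async function averageJitterBufferDelay(pc, kind) {
        const report = await pc.getStats();
        for (const stats of report.values()) {
          if (stats.type !== 'track' || !stats.remoteSource ||
              stats.kind !== kind) continue;
          const count = kind === 'video' ? stats.framesDecoded
                                         : stats.totalSamplesReceived;
          return count > 0 ? stats.jitterBufferDelay / count : null;
        }
        return null;
      }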
- priority of type RTCPriorityType
  Indicates the priority set for the track. It is specified in
  [RTCWEB-TRANSPORT], Section 4.