Guideline: Provide captions and associated metadata for all audio content within XR environments.
Note: WCAG 2.2 carries this guideline forward with the exception "except when the media is a media alternative for text and is clearly labeled as such."
Outcome 1: Captions are used to understand speech
Translates speech and non-speech audio into alternative formats (for example, captions) so media can be understood when sound is unavailable or limited. User agents and APIs support the display and control of captions.
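As a non-normative illustration of the shapes involved, the sketch below models a single caption cue that covers both speech and non-speech audio, plus a minimal control surface a user agent might expose. The type names (CaptionCue, CaptionController) are hypothetical, not drawn from any standard.

```typescript
/** One timed caption; non-speech audio (for example "[door creaks]") is captioned too. */
interface CaptionCue {
  id: string;
  kind: "speech" | "sound";
  text: string;
  startTime: number;  // seconds from the start of the media
  endTime: number;
}

/** A minimal control surface a user agent or API might expose. */
interface CaptionController {
  enabled: boolean;  // users can turn caption display on and off
  show(cue: CaptionCue): void;
  hide(cueId: string): void;
}
```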
Outcome 2: Caption metadata is used to convey further information
Conveys information about a sound in addition to its text (for example, the sound's source, duration, distance, and direction) so users have the context needed to relate the sound to the environment in which it occurs.
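The kinds of metadata named above could be modelled as a structure attached to each cue. This is a sketch under the same assumptions as the Outcome 1 example; the field names are illustrative, not a defined schema.

```typescript
/** Hypothetical metadata attached to a CaptionCue (see the Outcome 1 sketch). */
interface CaptionMetadata {
  /** What produced the sound, for example "radio" or "npc:guard". */
  source: string;
  /** How long the sound lasts, in seconds. */
  durationSeconds: number;
  /** Distance from the user to the sound source, in meters. */
  distanceMeters: number;
  /** Direction to the source relative to the user's gaze, in degrees. */
  direction: { azimuth: number; elevation: number };
}
```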
Outcome 3: Caption format allows for alternative devices
Provides captions and caption metadata in alternative formats (for example, a second screen or a braille display) so users can move them to the display that suits them best. This benefits, for example, users who have access to neither sound nor vision, users who need assistive technology to magnify portions of the view, and users who have limited reach.
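One way to read this outcome is as a routing problem: the in-headset view is just one of several possible caption sinks. The sketch below, reusing the hypothetical CaptionCue and CaptionMetadata types from the earlier examples, fans each cue out to whatever output devices the user has enabled.

```typescript
/** An output target for captions; the in-headset view is only one sink. */
interface CaptionSink {
  name: string;  // for example "headset", "second-screen", "braille-display"
  render(cue: CaptionCue, meta?: CaptionMetadata): void;
}

/** Fans each caption out to every sink the user has enabled. */
class CaptionRouter {
  private readonly sinks: CaptionSink[] = [];

  addSink(sink: CaptionSink): void {
    this.sinks.push(sink);
  }

  dispatch(cue: CaptionCue, meta?: CaptionMetadata): void {
    for (const sink of this.sinks) {
      sink.render(cue, meta);
    }
  }
}
```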
Outcome 4: Caption visual display can be customized
Provides customization of caption style and position to support people with limited vision or color perception. Customization options can benefit all users.
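The style and position options could surface as a preferences structure like the sketch below. All names are hypothetical; the point is that appearance and placement are user-settable rather than fixed by the content.

```typescript
/** Hypothetical user preferences for caption appearance (illustrative only). */
interface CaptionStyleSettings {
  fontFamily: string;
  /** Size relative to a readable default, for example 100 to 400 percent. */
  fontSizePercent: number;
  textColor: string;        // for example "#FFFF00" for high contrast
  backgroundColor: string;  // for example "rgba(0, 0, 0, 0.75)"
  /** Where captions sit: fixed to the user's view, or anchored near the sound source. */
  placement: "head-locked" | "world-anchored";
  /** Offset within the safe viewing area when head-locked. */
  position: { x: number; y: number };
}
```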
Outcome 5: Caption temporal display can be customized
Provides customization of caption timing to support people with limited manipulation, strength, or cognition.
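Timing customization might similarly be a small preferences structure. The sketch below is illustrative only; the fields are assumptions, not defined settings.

```typescript
/** Hypothetical user preferences for caption timing (illustrative only). */
interface CaptionTimingSettings {
  /** Multiplier on each cue's display duration; values above 1 keep captions visible longer. */
  durationScale: number;
  /** Floor on how long any caption stays on screen, in seconds. */
  minDisplaySeconds: number;
  /** If true, the experience pauses until the user dismisses the current caption. */
  waitForDismissal: boolean;
}
```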