Copyright © 2021-2025 World Wide Web Consortium. W3C® liability, trademark and document use rules apply.
W3C Accessibility Guidelines (WCAG) 3.0 will provide a wide range of recommendations for making web content more accessible to users with disabilities. Following these guidelines will address many of the needs of users with blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of any of these disabilities. These guidelines address the accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; audiovisual media; virtual and augmented reality; and alternative access presentation and control. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.
Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements and assertions to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement or assertion.
To keep pace with changing technology, this specification is expected to be updated regularly with revisions to existing methods, requirements, and guidelines, as well as new ones that address new needs as technologies evolve. For entities that make formal claims of conformance to these guidelines, several levels of conformance are available to address the diverse nature of digital content and the type of testing that is performed.
For an overview of WCAG 3 and links to WCAG technical and educational material, see WCAG 3.0 Introduction.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index.
This is an update to W3C Accessibility Guidelines (WCAG) 3.0. It includes all requirements that have reached the developing status.
To comment, file an issue in the wcag3 GitHub repository. Create separate GitHub issues for each comment, rather than commenting on multiple topics in a single issue. It is free to create a GitHub account to file issues. If filing issues in GitHub is not feasible, email public-agwg-comments@w3.org (comment archive).
In-progress updates to the guidelines can be viewed in the public Editor's Draft.
This document was published by the Accessibility Guidelines Working Group as a Working Draft using the Recommendation track.
Publication as a Working Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as other than a work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent that the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 18 August 2025 W3C Process Document.
This section (with its subsections) provides advice only and does not specify guidelines, meaning it is informative or non-normative.
Introduction
End of summary for Introduction
This draft includes an updated list of the potential guidelines, requirements, and assertions that have progressed to developing status.
Requirements and assertions at the Exploratory level are not listed in this Working Draft. If you would like to see the complete list, please review the Editor's Draft.
The list of requirements is longer than the list of success criteria in WCAG 2. This is because:
The final set of requirements in WCAG 3.0 will be different from what is in this draft. Requirements are likely to be added, combined, and removed. We also expect changes to the text of the requirements. Only some of the requirements will be used to meet the base level of conformance.
Please consider the following questions when reviewing this draft:
Additionally, the Working Group would welcome any research that supports requirements or assertions.
To provide feedback, please file a GitHub issue or email public-agwg-comments@w3.org (comment archive).
This specification presents a new model and guidelines to make web content and applications accessible to people with disabilities. W3C Accessibility Guidelines (WCAG) 3.0 supports a wide set of user needs, uses new approaches to testing, and allows frequent maintenance of guidelines and related content to keep pace with accelerating technology changes. WCAG 3.0 supports this evolution by focusing on the functional needs of users. These needs are then supported by guidelines that are written as outcome statements, requirements, assertions, and technology-specific methods to meet those needs.
WCAG 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [WCAG22] and previous versions, but does not deprecate WCAG 2. It will also incorporate some content from and partially extend User Agent Accessibility Guidelines 2.0 [UAAG20] and Authoring Tool Accessibility Guidelines 2.0 [ATAG20]. These earlier versions provided a flexible model that kept them relevant for over 15 years. However, changing technology and changing needs of people with disabilities have led to the need for a new model to address content accessibility more comprehensively and flexibly.
There are many differences between WCAG 2 and WCAG 3.0. The WCAG 3.0 guidelines address the accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control methods. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.
Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement.
Content that conforms to WCAG 2.2 Level A and Level AA is expected to meet most of the minimum conformance level of this new standard but, since WCAG 3.0 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance. Since the new standard will use a different conformance model, the Accessibility Guidelines Working Group expects that some organizations may wish to continue using WCAG 2, while others may wish to migrate to the new standard. For those that wish to migrate to WCAG 3, the Working Group will provide transition support materials, which may use mapping and other approaches to facilitate migration.
As part of the WCAG 3.0 drafting process, each normative section of this document is given a status. This status is used to indicate how far along in the development this section is, how ready it is for experimental adoption, and what kind of feedback the Accessibility Guidelines Working Group is looking for.
This section (with its subsections) provides requirements which must be followed to conform to the specification, meaning it is normative.
Guidelines
The following guidelines are being considered for WCAG 3.0. They are currently a list of topics which we expect to explore more thoroughly in future drafts. The list includes current WCAG 2 guidance and additional requirements. The list will change in future drafts.
Unless otherwise stated, requirements assume the content described is provided both visually and programmatically.
End of summary for Guidelines
The individuals and organizations that use WCAG vary widely and include web designers and developers, policy makers, purchasing agents, teachers, and students. To meet the varying needs of this audience, several layers of guidance will be provided including guidelines written as outcome statements, requirements that can be tested, assertions, a rich collection of methods, resource links, and code samples.
The following list is an initial set of potential guidelines and requirements that the Working Group will be exploring. The goal is to guide the next phase of work. They should be considered drafts, not final content of WCAG 3.0.
Ordinarily, exploratory content includes editor's notes listing concerns and questions for each item. Because this Guidelines section is very early in the process of working on WCAG 3.0, this editor's note covers most of the content in this section. Unless otherwise noted, all items in the list are exploratory at this point. It is a list of all possible topics for consideration. Not all items listed will be included in the final version of WCAG 3.0.
The guidelines and requirements listed below came from analysis of user needs that the Working Group has been studying, examining, and researching. They have not been refined and do not include essential exceptions or methods. Some requirements may be best addressed by authoring tools or at the platform level. Many requirements need additional work to better define the scope and to ensure they apply correctly to multiple languages, cultures, and writing systems. We will address these questions as we further explore each requirement.
Additional Research
One goal of publishing this list is to identify gaps in current research and request assistance filling those gaps.
Editor's notes indicate the requirements within this list where the Working Group has not found enough research to fully validate the guidance and create methods to support it, or where additional work is needed to evaluate existing research. If you know of existing research or if you are interested in conducting research in this area, please file a GitHub issue or send email to public-agwg-comments@w3.org (comment archive).
Users have equivalent alternatives for images.
For each image:
Decorative images are programmatically hidden.
Equivalent
The role and importance of images are programmatically indicated.
The image types (photo, icon, etc.) are indicated.
Needs additional research
Auto-generated text descriptions are editable by the content creator.
Content author(s) follow an organizational style guide for text alternatives.
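For illustration only, the following is a minimal HTML sketch of techniques that could satisfy several of the image requirements above; the file names, alternative text, and icon are placeholders, and the markup that ultimately satisfies this guideline will depend on the methods defined for it.

```html
<!-- Decorative image: an empty alt attribute removes it from the accessibility tree -->
<img src="divider.png" alt="">

<!-- Informative image: the text alternative is equivalent to the image's purpose -->
<img src="q3-sales-chart.png" alt="Bar chart: third-quarter sales rose 12% over the second quarter.">

<!-- Icon inside a control: the icon is hidden and the control carries the accessible name -->
<button type="button">
  <img src="print-icon.svg" alt="" aria-hidden="true">
  Print
</button>
```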
Users have equivalent alternatives for audio and video content.
A descriptive transcript is available for audio or video content.
Media alternative content is equivalent to audio and video content.
At least one mechanism is available to help users find media alternatives.
A mechanism to turn media alternatives on and off is available.
Speakers are identified in media alternatives.
When more than one language is spoken in audio content, the language spoken by each speaker is identified in media alternatives.
Media alternatives are provided in all spoken languages used in audio content.
Sounds needed to understand the media are identified or described in media alternatives.
Visual information needed to understand the media and not described in the audio content is included in the media alternatives.
This includes actions, charts or informative visuals, scene changes, and on-screen text.
Needs additional research
Nonverbal cues needed to understand the media are explained in media alternatives.
This includes tone of voice, facial expressions, body gestures, or music with emotional meaning.
Content author(s) follow a style guide that includes guidance on media alternatives.
Content author(s) conducted tests with users who need media alternatives and fixed issues based on the findings.
Content author(s) provide a video player that supports appropriate media alternatives. The video player includes the following features [list all that apply]:
Content author(s) have reviewed the media alternatives.
Users have alternatives available for non-text, non-image content that conveys context or meaning.
Needs additional research
Equivalent text alternatives are available for non-text, non-image content that conveys context or meaning.
Users have captions for the audio content.
Captions are available for all prerecorded audio content, except when the audio content is an alternative for text and clearly labelled as such.
Captions are placed on the screen so that they do not hide visual information needed to understand the video content.
Captions are presented consistently throughout the media, and across related productions, unless exceptions are essential. This includes consistent styling and placement of the captions text and consistent methods for identifying speakers, languages, and sounds.
The appearance of captions, including associated visual indicators, is adaptable, including font size, font weight, font style, font color, background color, background transparency, and placement.
In 360-degree digital environments, captions remain directly in front of the user.
In 360-degree digital environments, the direction of a sound or speech is indicated when audio is heard from outside the current view.
Needs additional research
Enhanced features that allow users to interact with captions are available.
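As an informative sketch, one way captions can be provided so that players let users turn them on and off is the HTML track element; the file names and languages below are placeholders, and adjustable styling, placement, and other enhanced caption features depend on player support.

```html
<video controls>
  <source src="lecture.mp4" type="video/mp4">
  <!-- Closed captions the user can enable or disable in the player -->
  <track kind="captions" src="lecture.en.vtt" srclang="en" label="English" default>
  <!-- An additional caption track in another language, if provided -->
  <track kind="captions" src="lecture.es.vtt" srclang="es" label="Español">
</video>
```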
Users have audio descriptions for video content.
Audio descriptions are available in prerecorded video for visual content needed to understand the media, except when the video content is an alternative for text and clearly labelled as such.
WCAG 3 needs to specify how to handle video content with audio that does not include gaps to insert audio descriptions. Two possible solutions are providing an exception that allows the content author(s) to use descriptive transcripts instead or requiring content authors to provide an extended audio description.
Audio description remains in sync with video content without overlapping dialogue and meaningful audio content.
Audio descriptions are available in live video for visual content needed to understand the media.
In cases where the existing pauses in a soundtrack are not long enough, the video pauses to extend the audio track and provides an extended audio description to describe visual information needed to understand the media.
A mechanism is available that allows users to control the audio description volume independently of the audio volume of the video and to change the language of the audio description, if multiple languages are provided.
A mechanism is available that allows users to change the audio description language if multiple languages are available.
Users can view figure captions even when the figure is not focused.
Needs additional research
Figure captions persist or a mechanism is available to make figure captions persist, even if the focus moves away.
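A minimal HTML sketch of a figure whose caption is always rendered and programmatically associated with the image, regardless of where focus is; the content is illustrative.

```html
<figure>
  <img src="rainfall-2024.png"
       alt="Line graph of monthly rainfall in 2024, peaking in July.">
  <!-- The figcaption is rendered persistently and associated with the image by the figure element -->
  <figcaption>Figure 3: Monthly rainfall, 2024.</figcaption>
</figure>
```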
Users have content that does not rely on a single sense or perception.
Needs additional research
Information conveyed by graphical elements does not rely on hue.
Needs additional research
Information conveyed with visual depth is also conveyed programmatically and/or through text.
Information conveyed with sound is also conveyed programmatically and/or through text.
Information conveyed with spatial audio is also conveyed programmatically and/or through text.
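For illustration, a sketch in which status information is conveyed by text and a symbol as well as color, so the meaning does not rely on hue alone; class names and wording are placeholders.

```html
<style>
  .status         { font-weight: bold; }
  .status.error   { color: #b30000; } /* color reinforces, but does not carry, the meaning */
  .status.success { color: #006400; }
</style>

<p class="status error">✖ Error: payment was declined.</p>
<p class="status success">✓ Payment accepted.</p>
```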
Users can read visually rendered text.
For each word of text:
The default/authored presentation of blocks of text meets the corresponding values for the content’s language (or, if that language is not listed, for the listed language with the most similar orthography).
Readable blocks of text (foundational) and Readable text style (foundational) are based on common usage, and their supplemental counterparts are based on readability research. We need more readability research in these languages.
The metrics in the following table are still to be determined; the current content is an example.
Characteristic | Arabic | Chinese | English | Hindi | Russian |
---|---|---|---|---|---|
Inline margin | |||||
Block Margin | ≥0.5em around paragraphs | ||||
Line length | 30-100 characters | ||||
Line height | 1.0 - paragraph separation height | ||||
Justification | Left aligned or Justified |
The default/authored presentation of text meets the corresponding values for the content’s language (or, if that language is not listed, for the listed language with the most similar orthography).
Readable blocks of text (foundational) and Readable text style (foundational) are based on common usage, and their supplemental counterparts are based on readability research. We need more readability research in these languages.
The metrics in the following table are still to be determined; the current content is an example.
Characteristic | Arabic | Chinese | English | Hindi | Russian |
---|---|---|---|---|---|
Font face | |||||
Font size | Vertical viewing angle of ≥0.2° (~10pt at typical desktop viewing distances) | ||||
Font width | |||||
Text decoration | Most text is not bold, italicized, and/or underlined | ||||
Letter spacing | |||||
Capitalization | |||||
Hyphenation |
The presentation of blocks of text can be adjusted to meet the corresponding values for the content’s language (or, if that language is not listed, for the listed language with the most similar orthography).
Information could be lost if the user overrides the appearance. See [other structural guideline] about ensuring the structure conveys the meaning when possible.
The metrics in the following table are still to be determined; the current content is an example.
Characteristic | Arabic | Chinese | English | Hindi | Russian |
---|---|---|---|---|---|
Inline margin | |||||
Block Margin | |||||
Line length | |||||
Line height | |||||
Justification | Not applicable | Left aligned |
The presentation of each of the following font features can be adjusted to meet the corresponding values for the content’s language (or, if that language is not listed, for the listed language with the most similar orthography).
Information could be lost if the user overrides the appearance. See [other structural guideline] about ensuring the structure conveys the meaning when possible.
The metrics in the following table are still to be determined; the current content is an example.
Characteristic | Arabic | Chinese | English | Hindi | Russian |
---|---|---|---|---|---|
Underlining | | | | | |
Italics | Disabled | | | | |
Bold | Disabled | | | | |
Font face | | | | | |
Font width | | | | | |
Letter spacing | | | | | |
Capitalization | | | | | |
Automatic hyphenation | Disabled | | | | |
Content and functionality are not lost when the content is adjusted according to Adjustable blocks of text and Adjustable text style.
The default/authored presentation of blocks of text meets the corresponding values for the content’s language (or, if that language is not listed, for the listed language with the most similar orthography).
Readable blocks of text (foundational) and Readable text style (foundational) are based on common usage, and their supplemental counterparts are based on readability research. We need more readability research in these languages.
The metrics in the following table are still to be determined; the current content is an example.
Characteristic | Arabic | Chinese | English | Hindi | Russian |
---|---|---|---|---|---|
Inline margin | |||||
Block Margin | |||||
Line length | |||||
Line height | |||||
Justification | Left aligned |
The default/authored presentation of text meets the corresponding values for the content’s language (or, if that language is not listed, for the listed language with the most similar orthography).
Readable blocks of text (foundational) and Readable text style (foundational) are based on common usage, and their supplemental counterparts are based on readability research. We need more readability research in these languages.
The metrics in the following table are still to be determined; the current content is an example.
Characteristic | Arabic | Chinese | English | Hindi | Russian |
---|---|---|---|---|---|
Font face | |||||
Font size | Vertical viewing angle of ≥0.2° (~10pt at typical desktop viewing distances) | ||||
Font width | |||||
Text decoration | Most text is not bold, italicized, and/or underlined | ||||
Letter spacing | |||||
Capitalization | |||||
Hyphenation |
Users can access text content and its meaning with text-to-speech tools.
Needs additional research
Text content can be converted into speech.
The human language of the view and content within the view is programmatically available.
Needs additional research
Meaning conveyed by text appearance is programmatically available.
Numerical information includes sufficient context to avoid confusion when presenting dates, temperatures, time, and Roman numerals.
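An informative HTML sketch showing a page language, an inline language change, and emphasis expressed in markup so that text-to-speech tools can convey it; the text content is illustrative.

```html
<html lang="en">
  <body>
    <!-- Inline language change lets text-to-speech switch pronunciation -->
    <p>The motto of Québec is <span lang="fr">Je me souviens</span>.</p>

    <!-- Emphasis carried in the markup, not only in visual styling -->
    <p>Submit the form <strong>before 5 p.m.</strong> to avoid a late fee.</p>
  </body>
</html>
```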
Users can understand the content without having to process complex or unclear language.
This guideline will include exceptions for poetic, scriptural, artistic, and other content whose main goal is expressive rather than informative.
See also: Structure as these guidelines are closely related.
To ensure this guideline works well across different languages, members of AG, COGA, and internationalization (i18n) agreed on an initial set of languages to pressure-test the guidance.
The five “guardrail” languages are:
We started with the six official languages of the United Nations (UN). Then we removed French and Spanish because they are similar to English. We added Hindi because it is the most commonly spoken language that is not on the UN list.
The group of five languages includes a wide variety of language features, such as:
This list doesn’t include every language, but it helps keep the work manageable while making the guidance more useful for a wide audience.
We will work with W3C’s Global Inclusion community group, the Internationalization (i18n) task force, and others to review and refine the testing and techniques for these requirements. We also plan to create guidance for translating the guidelines into more languages in the future.
Sentences do not include unnecessary words or phrases.
Sentences do not include nested clauses.
Common words are used, and definitions are available for uncommon words.
This requirement will include tests and techniques for identifying common words for the intended audience. The intended audience may be the public or a specialized group such as children or experts.
For example: In content intended for the public, one technique for determining what counts as a common word is to use a high-frequency corpus. These corpora exist for many languages including Arabic, Hindi, Mandarin, and Russian as well as American English, British English, and Canadian English. Exceptions will be made for any language that does not have a high-frequency corpus.
Abbreviations are explained or expanded when first used.
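For illustration, one possible technique is to expand the abbreviation in the text on first use and mark it up with the abbr element; whether this satisfies the requirement will depend on the methods eventually defined for it.

```html
<p>The World Wide Web Consortium (<abbr title="World Wide Web Consortium">W3C</abbr>)
publishes accessibility guidelines. The <abbr title="World Wide Web Consortium">W3C</abbr>
also hosts related working groups.</p>
```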
Explanations or unambiguous alternatives are available for non-literal language, such as idioms and metaphors.
Alternatives are provided for numerical information such as statistics.
Content author(s) have reviewed written content for complex ideas such as processes, workflows, relationships, or chronological information and added supplemental visual aids to assist readers with understanding them.
A summary is available for documents and articles that have more than a certain length.
More research is needed on the number of words that would trigger the need for a summary. The length may also depend on the language used for the content.
Letters or diacritics required for identifying the correct meaning of the word are available.
This most often applies to languages such as Arabic and Hebrew.
Content author(s) review content for clear language before publication.
If AI tools are used to generate or alter content, the content author(s) have a documented process for a human to review and attest that the content is clear and conveys the intended meaning.
Content author(s) follow a style guide that includes guidance on clear language and a policy that requires editors to follow the style guide.
The style guide includes guidance on clear words as well as clear numbers, such as avoiding or explaining Roman numerals, removing numerical information that is not essential for understanding the content, and providing explanations of essential numerical information to aid users with disabilities that impact cognitive accessibility.
Content author(s) provide training materials that include guidance on clear language and a policy requiring editors to complete the training regularly.
Content author(s) conduct plain language reviews to check against plain language guidance appropriate to the language used. This includes checking that:
Users can see which element has keyboard focus.
For each focusable item:
A custom focus indicator is used with sufficient size, change of contrast, adjacent contrast, distinct style and adjacency.
The focusable item uses the user agent's default focus indicator.
@@
Content author(s) follow a style guide that includes guidance on focus indicators.
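As an informative sketch, a custom focus indicator could be provided with CSS along the following lines; the specific color, thickness, and offset values are placeholders rather than tested thresholds.

```html
<style>
  /* A visible, offset indicator shown when focus is set from the keyboard */
  a:focus-visible,
  button:focus-visible {
    outline: 3px solid #005a9c;
    outline-offset: 2px;
  }
</style>
<button type="button">Save draft</button>
```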
Users can see the location of the pointer focus.
There is a visible indicator of pointer focus.
Users have interactive components that behave as expected.
Interactive components with the same functionality behave consistently.
Interactive components with the same functionality have consistent labels.
Interactive components that have similar function and behavior have a consistent visual design.
Needs additional research
Interactive components are visually and programmatically located in conventional locations.
Needs additional research
Interactive components follow established conventions.
Conventional interactive components are used.
Interactive components retain their position unless a user changes the viewport or moves the component.
Users have information about interactive components that is identifiable and usable visually and using assistive technology.
Needs additional research
Visual information required to identify user interface components and states meets a minimum contrast ratio test, except for inactive components or where the appearance of the component is determined by the user agent and not modified by the author.
Needs additional research
The importance of interactive components is indicated.
Interactive components have visible labels that identify the purpose of the component.
Changes to interactive components’ names, roles, values or states are visually and programmatically indicated.
Interactive components are visually distinguishable without interaction from static content and include visual cues on how to use them.
Field constraints and conditions (required line length, date format, password format, etc.) are available.
Inputs have visible labels that identify the purpose of the input.
The programmatic name includes the visual label.
Accurate names, roles, values, and states are available for interactive components.
Users can navigate and operate content using only the keyboard.
All elements that can be controlled or activated by pointer, audio (voice or other), gesture, camera input, or other means can be controlled or activated from the keyboard interface.
All content that can be accessed by other input modalities can be accessed using keyboard interface only.
"All content" includes content made available via hover, right-click, and similar interactions.
Other input modalities include pointing devices, voice and speech recognition, gesture, camera input, and any other means of input or control.
The All Elements Keyboard-Actionable requirement allows you to navigate to all actionable elements, but if the next element is 5 screens down, you also need to be able to access all the content. Also, if the content is in expanding sections, you need to not only open them but also access all of the content, not just its actionable elements.
Author-generated keyboard commands do not conflict with standard platform keyboard commands or they can be remapped.
It is always possible to navigate away from an element after navigating to, entering, or activating the element by using a common keyboard navigation technique, or by using a technique described on the page/view or on a page/view earlier in the process where it is used.
When the keyboard focus is moved, one of the following is true:
Except for skip links and other elements that are hidden but specifically added to aid keyboard navigation, tabbing does not move the keyboard focus into content that was not visible before the tab action.
Accordions, dropdown menus, and ARIA tab panels are examples of expandable content. According to this requirement, these would not expand simply because they contain an element that is in the tab order. They would either not expand or would not have any tab-order elements in them.
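For illustration, the sketch below contrasts a native control, which is keyboard-operable by default, with a custom control that needs an explicit role, tab stop, and key handling; the element names and the openMenu function are placeholders.

```html
<!-- Preferred: a native control is focusable and keyboard-activatable by default -->
<button type="button" onclick="openMenu()">Menu</button>

<!-- If a custom element must be used, it needs a role, a tab stop, and keyboard activation -->
<div id="custom-menu-button" role="button" tabindex="0">Menu</div>
<script>
  function openMenu() {
    /* placeholder: show the menu */
  }
  const customButton = document.getElementById('custom-menu-button');
  customButton.addEventListener('click', openMenu);
  customButton.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault(); // prevent the page from scrolling on Space
      openMenu();
    }
  });
</script>
```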
Users can use keyboard without unnecessary physical or cognitive effort.
The keyboard focus moves through content in an order and way that preserves meaning and operability.
When keyboard focus moves from one context to another within a page/view, whether automatically or by user request, the keyboard focus is preserved so that, when the user returns to the previous context, the keyboard focus is restored to its previous location unless that location no longer exists.
When the previous focus location no longer exists, best practice is to put focus on the focusable location just before the one that was removed. An example of this would be a list of subject-matter tags in a document, with each tag having a delete button. A user clicks on the delete button in a tag in the middle of the tag list. When the tag is deleted, focus is placed onto the tag that was before the now-deleted tag.
This is also useful when moving between pages, but that would usually have to be handled by the browser, unless the user is in a process where the focus location is stored in a cookie or on the server between pages so that the old location is still available when the person returns to the page.
Repetitive adjacent links that have the same destination are avoided.
Supplemental if applicable to all content, else best practice.
A common pattern is having a component that contains a linked image and some linked text, where both links go to the same content. Someone using screen reading software can be disoriented from the unnecessary chatter, and a keyboard user has to navigate through more tab stops than should be necessary. Combining adjacent links that go to the same content improves the user experience.
Content author(s) follow user interface design principles that include minimizing the difference between the number of input commands required when using the keyboard interface only and the number of commands when using other input modalities.
Other input modalities include pointing devices, voice and speech recognition, gesture, camera input, and any other means of input or control.
Pointer input is consistent, and all functionality can be operated with simple pointer input in a time- and pressure-insensitive way.
For functionality that can be activated using a simple pointer input, at least one of the following is true:
Any functionality that uses pointer input other than simple pointer input can also be operated by a simple pointer input or a sequence of simple pointer inputs that do not require timing.
Examples of pointer input that are not simple pointer input are double clicking, swipe gestures, multipoint gestures like pinching or split tap or two-finger rotor, variable pressure or timing, and dragging movements.
Complex pointer inputs are not banned, but they cannot be the only way to accomplish an action.
Simple pointer input is different than single pointer input and is more restrictive than simply using a single pointer.
The method of pointer cancellation is consistent for each type of interaction within a set of pages/views except where it is essential to be different.
Where it is essential to be different, it can be helpful to alert the user.
Specific pointer pressure is not the only way of achieving any functionality, except where specific pressure is essential to the functionality.
Specific pointer speed is not the only way of achieving any functionality, except where specific pointer speed is essential to the functionality.
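As a sketch of time- and pressure-insensitive activation, the control below acts on the click event, which fires on release, so a press can be cancelled by moving the pointer away before releasing; the element and action shown are placeholders.

```html
<button type="button" id="delete-item">Delete item</button>
<script>
  // Activation happens on the up-event; nothing fires on pointerdown alone,
  // and no double-click, drag, pressure, or speed is required.
  document.getElementById('delete-item').addEventListener('click', () => {
    console.log('Item deleted'); // placeholder for the real action
  });
</script>
```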
Provide alternatives to speech input and facilitate speech control.
Speech input is not the only way of achieving any functionality except where a speech input is essential to the functionality.
Wherever there is real-time bidirectional voice communication, a real-time text option is available.
Users have the option to use different input techniques and combinations and switch between them.
If content interferes with pointer or keyboard focus behavior of the user agent, then selecting anything on the view with a pointer moves the keyboard focus to that interactive element, even if the user drags off the element (so as to not activate it).
When receiving and then removing pointer hover or keyboard focus triggers additional content to become visible and then hidden, and the visual presentation of the additional content is controlled by the author and not by the user agent, all of the following are true:
Examples of additional content controlled by the user agent include browser tooltips created through use of the HTML title attribute.
This applies to content that appears in addition to the triggering of the interactive element itself. Since hidden interactive elements that are made visible on keyboard focus (such as links used to skip to another part of a page/view) do not present additional content, they are not covered by this requirement.
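For illustration, a simplified author-controlled tooltip that appears on both pointer hover and keyboard focus and can be dismissed with Escape; a complete implementation would also keep the content visible while the pointer is over it, and the names used here are placeholders.

```html
<button type="button" id="save-btn" aria-describedby="save-tip">Save</button>
<span role="tooltip" id="save-tip" hidden>Saves a draft without publishing.</span>
<script>
  const trigger = document.getElementById('save-btn');
  const tip = document.getElementById('save-tip');
  const show = () => { tip.hidden = false; };
  const hide = () => { tip.hidden = true; };

  // Visible on pointer hover and on keyboard focus
  trigger.addEventListener('mouseenter', show);
  trigger.addEventListener('focus', show);
  trigger.addEventListener('mouseleave', hide);
  trigger.addEventListener('blur', hide);

  // Dismissible without moving the pointer or focus
  document.addEventListener('keydown', (event) => {
    if (event.key === 'Escape') hide();
  });
</script>
```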
Gestures are not the only way of achieving any functionality, except where a gesture is essential to the functionality.
Where functionality, including input or navigation, is achievable using different input methods, users have the option to switch between those input methods at any time.
Full or gross body movement is not the only way of achieving any functionality, except where full or gross body movement is essential to the functionality.
This includes both detection of body movement and actions to the device, such as shaking, that require body movement.
Users have alternative authentication methods available to them.
Biometric identification is not the only way to identify or authenticate.
Voice identification is not the only way to identify or authenticate.
Users know about and can correct errors.
When an error is detected, users are notified visually and programmatically that an error has occurred.
Content in error is programmatically indicated.
Error messages clearly describe the problem.
Clear language guidance outlines requirements for writing understandable content.
When an error occurs due to a user interaction with an interactive element, the error message includes the human readable name of the element in error. If the interactive element is located in a different part of a process, then the page/view or step in the process is included in the error message.
Error messages include suggestions for correction that can be automatically determined, unless it would jeopardize the security or purpose of the content.
Error messages are visually identifiable including at least two of the following:
Symbols and colors signifying errors vary depending on cultural context and should be modified accordingly.
Error messages persist until the user dismisses them or the error is resolved.
Error messages are programmatically associated with the error source.
When an error notification is not adjacent to the item in error, a link to the error is provided.
Error messages are visually collocated with the error source.
When users are submitting information, at least one of the following is true:
On submission, users are notified of submitted information and submission status.
During data entry, ensure data validation occurs after the user enters data and before the form is submitted.
When completing a multi-step process, validation is completed before the user moves to the next step in the process.
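An informative sketch of an error message that is visually adjacent, persistent, identified by more than color alone, and programmatically associated with the field in error; the wording and styling are placeholders.

```html
<style>
  .error { color: #b30000; font-weight: bold; }
</style>

<label for="email">Email address</label>
<input id="email" name="email" type="email"
       aria-invalid="true" aria-describedby="email-error">
<!-- The message names the field, describes the problem, and suggests a correction -->
<p id="email-error" class="error">
  ✖ Error in Email address: the address is missing the part before the @ sign.
  Enter an address such as name@example.com.
</p>
```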
Users do not experience physical harm from content.
Content does not include audio shifting designed to create a perception of motion, or it can be paused or prevented.
Content does not include non-essential flashing or strobing beyond flashing thresholds.
When flashing is essential, a trigger warning is provided to inform users that flashing exists, and a mechanism is available to access the same information and avoid the flashing content.
Content does not include non-essential visual motion lasting longer than 5 seconds or pseudo-motion.
When visual motion lasting longer than 5 seconds or pseudo-motion is essential, a trigger warning is provided to inform users that such content exists, and users are provided a way to access the same information and avoid the visual motion or pseudo-motion.
Content does not include visual motion lasting longer than 5 seconds or pseudo-motion.
Content does not include non-essential visual motion and pseudo-motion triggered by interaction unless it can be paused or prevented.
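As a sketch of one way non-essential motion could be prevented for users who request it, the CSS below disables a decorative animation when the platform-level reduced-motion preference is set; a visible pause control is another possible mechanism. The class and animation names are placeholders.

```html
<style>
  .ticker { animation: scroll-by 8s linear infinite; }
  @keyframes scroll-by {
    from { transform: translateX(100%); }
    to   { transform: translateX(-100%); }
  }
  /* Honor the user's operating-system preference to reduce motion */
  @media (prefers-reduced-motion: reduce) {
    .ticker { animation: none; }
  }
</style>
<p class="ticker">Registration closes Friday.</p>
```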
Users can determine relationships between content both visually and using assistive technologies.
The relationships between parts of the content are clearly indicated.
The starting point or home is visually and programmatically labeled.
Needs additional research
Relationships that convey meaning between pieces of content are programmatically determinable. Note: Examples of relationships include items positioned next to each other, arranged in a hierarchy, or visually grouped.
Needs additional research
Sections are visually and programmatically distinguishable.
Users have consistent and recognizable layouts available.
The relative order of content and interactions remain consistent throughout a workflow. Note: Relative order means that content can be added or removed, but repeated items are in the same order relative to each other.
Conventional layouts are available.
Information required to understand options is visually and programmatically associated with the options.
Users can determine their location in content both visually and using assistive technologies.
Needs additional research
The current location within the view, multi-step process, and product is visually and programmatically indicated.
Context is provided to orient the user in a site or multi-step process.
Contextual information is provided to help the user orient within the product.
Users can understand and navigate through the content using structure.
See also: Clear Language as these guidelines are closely related.
Relationships between elements are conveyed programmatically.
Elements are programmatically grouped together within landmarks.
Groups of elements have a label that defines their purpose.
Groups of elements are organized with a logical and meaningful hierarchy of headings.
Lists are visually and programmatically identifiable as a collection of related items.
Steps in a multi-step process are numbered.
Content has a title or high-level description.
Related elements are visually grouped together.
Whitespace separates chunks of content.
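For illustration, a skeleton showing landmarks, a heading hierarchy, a labeled group, and a programmatically identifiable list; the page content is a placeholder.

```html
<body>
  <header>
    <nav aria-label="Primary">
      <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/orders">Orders</a></li>
      </ul>
    </nav>
  </header>
  <main>
    <h1>Order history</h1>
    <section aria-labelledby="recent-heading">
      <h2 id="recent-heading">Recent orders</h2>
      <ol>
        <li>Order 1042: shipped</li>
        <li>Order 1041: delivered</li>
      </ol>
    </section>
  </main>
</body>
```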
Users can perceive and operate user interface components and navigation without obstruction.
Content that is essential for a user’s task or understanding is not permanently covered by non-dismissible or non-movable elements.
When content temporarily overlays other content, it is clearly dismissible or movable via standard interaction methods, and its presence does not disrupt critical screen reader announcements or keyboard focus.
If a control is disabled, then information explaining why it is disabled and what actions are needed to enable it is provided visually and programmatically.
Content does not shift or reflow in a way that causes users to lose their place or makes previously visible content inaccessible without explicit user action.
Elements designed to be visually persistent have predictable positions and do not overlap with primary content in a way that makes it unreadable or unusable.
Design should avoid scenarios where disabling a control implicitly suggests a false pathway or intentionally hides the correct one.
Content does not include infinite scrolling.
Users have consistent and alternative methods for navigation.
The product provides at least two ways of navigating and finding information (Search, Scan, Site Map, Menu Structure, Breadcrumbs, contextual links, etc.).
Users can complete tasks without needing to memorize information or complete advanced cognitive tasks.
Automated input from user agents, third-party tools, or copy-and-paste is not prevented.
Processes, including authentication, can be completed without puzzles, calculations, or other cognitive tests, unless it would jeopardize the security or purpose of the content.
Needs additional research
Processes can be completed without memorizing and recalling information from previous stages of the process.
Users have enough time to read and use content.
For each process with a time limit, a mechanism exists to disable or extend the limit before the time limit starts.
For each process with a time limit, a mechanism exists to disable or extend the time limit at timeout.
For each process with a time limit, a mechanism exists to disable the limit.
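A simplified client-side sketch of warning users before a time limit is reached and letting them extend it; real session handling would also need a server-side extension, and the timing values and element names here are placeholders.

```html
<p id="timeout-warning" role="alert" hidden>
  Your session will end in 2 minutes.
  <button type="button" id="extend-session">Extend session</button>
</p>
<script>
  const WARN_AFTER_MS = 18 * 60 * 1000; // warn 2 minutes before a 20-minute limit

  function showWarning() {
    document.getElementById('timeout-warning').hidden = false;
  }

  let warningTimer = setTimeout(showWarning, WARN_AFTER_MS);

  document.getElementById('extend-session').addEventListener('click', () => {
    document.getElementById('timeout-warning').hidden = true;
    clearTimeout(warningTimer);
    warningTimer = setTimeout(showWarning, WARN_AFTER_MS); // restart the limit
  });
</script>
```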
Users can complete tasks without unnecessary steps.
Processes can be completed without being forced to read or understand unnecessary content.
Processes can be completed without entering unnecessary information.
Users do not encounter deception when completing tasks.
Changes in terms of agreement to a continuing process, service, or task are conveyed to users and an opportunity to consent is given.
Content does not include double negatives, false statements, or other misleading wording.
Process completion does not include artificial time limits unless this is essential to the task.
Implying to a user that they will lose a benefit if they don’t act immediately is an artificial time limit.
Needs additional research
A mechanism is available to alert users that they are exiting the site, so users are notified before they exit.
When completing a process, all financial-, privacy-, or safety-related information and choices are provided to the user.
Content does not threaten individuals or restate decisions in a degrading way.
Content is not designed to draw attention away from information that impacts finances, privacy, or safety by visually emphasizing other information.
Needs additional research
Once a user declines a request, the request is not repeated.
Content author(s) conducted tests with people with cognitive- and mental-health-related disabilities and fixed issues based on findings.
Users do not have to reenter information or redo work.
In a multi-step process, the interface supports stepping backwards in a process and returning to the current point without data loss.
Information previously entered by or provided to the user that is required to be entered again in the same process is either auto-populated, or available for the user to select.
Data entry and other task completion processes allow saving and resuming from the current step in the task.
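For illustration, autofill hints and a reuse option can reduce re-entry of information the user has already provided; the field names are placeholders, and saving and resuming a process would require additional server- or storage-side support not shown here.

```html
<label for="full-name">Full name</label>
<input id="full-name" name="full-name" autocomplete="name">

<label for="email-address">Email</label>
<input id="email-address" name="email-address" type="email" autocomplete="email">

<!-- Previously provided information offered for reuse rather than retyped -->
<label>
  <input type="checkbox" name="same-as-billing">
  Shipping address is the same as billing address
</label>
```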
Users understand how to complete tasks.
In a process, the interface indicates when user input or action is required to proceed to the next step.
Information needed to complete a multi-step process is provided at the start of the process, including:
The steps and instructions needed to complete a multi-step process are available.
Users can determine when content is provided by a third party.
Needs additional research
The author or source of the primary content is visually and programmatically indicated.
Needs additional research
Third-party content (AI, Advertising, etc.) is visually and programmatically indicated.
When providing private and sensitive information, users understand:
When the amount of information shared can be adjusted, the method of adjusting the amount of information causes a minimal cognitive burden.
When private or sensitive information is displayed, notify the user and provide a mechanism to hide the information.
Content author(s) programmatically (and visually?) indicate content that may be inappropriate or cause harm, as identified by an existing standard, policy, or regulation OR identified through user research, and a mechanism to avoid it is provided.
Content author(s) handle private and sensitive information according to [named security procedures], and reviews are conducted.
Needs additional research
The interface provides a mechanism to support decision-making while enabling user autonomy.
Users understand the benefits, risks, and consequences of options they select.
Legal-, financial-, privacy-, or security-related consequences are provided in content before a user enters a legal-, financial-, privacy-, or security-related agreement.
When people with disabilities are required to use alternative or additional processes or content not used by people without disabilities, use of the alternative does not expose them to additional risk.
Content that requires legal, financial, privacy, or security choices clearly states the benefits, risks, and potential consequences prior to the choice being confirmed.
Users are not disadvantaged or harmed by algorithms.
Content author(s) train AI models using representative and unbiased disability-related information that is proportional to the general population.
Content author(s) conduct usability testing and ethics reviews to minimize the possibility that algorithms disadvantage people with disabilities.
Users have help available.
Needs additional research
Help is labeled consistently and available in a consistent visual and programmatic location.
Contextual help is available.
Conversational support allowing both text and verbal modes is available.
Needs additional research
Help is available to understand and use data visualizations.
Needs additional research
When interfaces dramatically change (due to redesign), a mechanism to learn the new interface or revert to the older design is available.
Needs additional research
Help is adaptable and personalizable.
Instructions and help do not rely on sensory characteristics.
Needs additional research
Accessible support is available during data entry, task completion and search.
Users can provide feedback to content author(s).
A mechanism is available to provide feedback to authors.
Users can control text presentation.
Text and background colors can be customized.
Patterns, designs, or images placed behind text are avoided or can be removed by the user.
When font size conveys visual meaning (such as headings), the text maintains its meaning and purpose when text is resized.
Users can change the text style (like font and size) and the layout (such as spacing and single column) to fit their needs.
Users can transform size and orientation of content presentation to make it viewable and usable.
Content orientation allows the user to read the language presented without changing head or body position.
Content can be viewed in multiple viewport sizes, orientations, and zoom levels without loss of content, functionality, or meaningful relationships, and with scrolling occurring in only one direction.
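As an informative sketch, the CSS below uses relative units so user font-size settings take effect and collapses a two-column layout to a single column at narrow widths or high zoom, keeping scrolling in one direction; the specific breakpoint and measurements are placeholders.

```html
<style>
  /* Relative units let user font-size preferences and zoom take effect */
  body { font-size: 1rem; line-height: 1.5; }
  main { max-width: 70ch; margin-inline: auto; }

  /* A two-column layout that reflows to a single column at narrow widths or high zoom */
  .layout { display: grid; grid-template-columns: 1fr 18rem; gap: 1rem; }
  @media (max-width: 40rem) {
    .layout { grid-template-columns: 1fr; }
  }
</style>
```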
Users can transform content to make it understandable.
Needs additional research
Complex information or instructions for complex processes are available in multiple presentation formats.
Role and priority of content is programmatically determinable.
Access to a plain-language summary, abstract, or executive summary is available.
Needs additional research
Content can be transformed to make its purpose clearer.
Users can control media and media alternatives.
The position and formatting of captions can be changed.
Audio can be turned off, while still playing the video, and without affecting the system sound.
Needs additional research
Alternatives for audio include the ability to search and look up terms.
Captions and audio descriptions can be turned on and off.
Needs additional research
Media can be navigated by chapters.
Users can control interruptions.
The timing and positioning of notifications and other interruptions can be changed, suppressed or saved, except interruptions involving an emergency.
Users can control potential sources of harm.
Needs additional research
Warnings are available about content that may be emotionally disturbing, and the disturbing content can be hidden.
Haptic feedback can be reduced or turned off.
Needs additional research
Warnings are available about triggering content, and the warnings and triggering content can be hidden.
Needs additional research
Overwhelming wordiness can be reduced or turned off.
Needs additional research
Visual stimulation from combinations of density, color, movement, etc. can be reduced or turned off.
Users can control content settings from their user agents including assistive technology.
Content can be controlled using assistive and adaptive technology.
Needs additional research
Printing respects user’s content presentation preferences.
User settings are honored.
Assistive technologies can access content and interactions when using mechanisms that convey alternative points of regard or focus (i.e. virtual cursor).
This section (with its subsections) provides requirements which must be followed to conform to the specification, meaning it is normative.
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY and MUST in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
Conformance
You might want to make a claim that your content or product meets the WCAG 3.0 guidelines. If it does meet the guidelines, we call this “conformance”.
If you want to make a formal conformance claim, you must use the process described in this document. Conformance claims are not required and your content can conform to WCAG 3.0, even if you don’t want to make a claim.
There are two types of content in this document:
We are experimenting with different conformance approaches for WCAG 3.0. Once we have developed enough guidelines, we will test how well each works.
End of summary for Conformance
WCAG 3.0 will use a different conformance model than WCAG 2.2 in order to meet its requirements. Developing and vetting the conformance model is a large portion of the work AG needs to complete over the next few years.
AG is exploring a model based on Foundational Requirements, Supplemental Requirements, and Assertions.
The most basic level of conformance will require meeting all of the Foundational Requirements. This set will be somewhat comparable to WCAG 2.2 Level AA.
Higher levels of conformance will be defined and met using Supplemental Requirements and Assertions. AG will be exploring whether meeting the higher levels would work best based on points, percentages, or predefined sets of requirements (modules).
Other conformance concepts that AG continues to explore include conformance levels, issue severity, adjectival ratings, and pre-assessment checks.
See Explainer for W3C Accessibility Guidelines (WCAG) 3.0 for more information. The concept of "accessibility-supported" accounts for the variety of user agents and scenarios. How does an author know that a particular technique for meeting a guideline will work in practice with user agents that are used by real people?
The intent is for the responsibility of testing with user agents to vary depending on the level of conformance.
At the foundational level of conformance, assumptions can be made by authors that methods and techniques provided by WCAG 3.0 work. At higher levels of conformance the author may need to test that a technique works, or check that available user agents meet the requirement, or a combination of both.
This approach means the Working Group will ensure that methods and techniques included do have reasonably wide and international support from user agents, and there are sufficient techniques to meet each requirement.
The intent is that WCAG 3.0 will use a content management system to support tagging of methods/techniques with support information. There should also be a process where interested parties can provide information.
An "accessibility support set" is used at higher levels of conformance to define which user agents and assistive technologies you test with. It would be included in a conformance claim, and enables authors to use techniques that are not provided with WCAG 3.0.
An exception for long-present bugs in assistive technology is still under discussion.
When evaluating the accessibility of content, WCAG 3.0 requires the guidelines apply to a specific scope. While the scope can be all content within a digital product, it is usually one or more subsets of the whole. Reasons for this include:
WCAG 3.0 therefore defines two ways to scope content: views and processes. Evaluation is done on one or more complete views or processes, and conformance is determined on the basis of one or more complete views or processes.
Conformance is defined only for processes and views. However, a conformance claim may be made to cover one process and view, a series of processes and views, or multiple related processes and views. All unique steps in a process MUST be represented in the set of views. Views outside of the process MAY also be included in the scope.
We recognize that representative sampling is an important strategy that large and complex sites use to assess accessibility. While it is not addressed within this document at this time, our intent is to later address it within this document or in a separate document before the guidelines reach the Candidate Recommendation stage. We welcome your suggestions and feedback about the best way to incorporate representative sampling in WCAG 3.0.
This section (with its subsections) provides requirements which must be followed to conform to the specification, meaning it is normative.
Many of the terms defined here have common meanings. When terms appear with a link to the definition, the meaning is as formally defined here. When terms appear without a link to the definition, their meaning is not explicitly related to the formal definition here. These definitions are in progress and may evolve as the document evolves.
This glossary includes terms used by content that has reached a maturity level of Developing or higher. The definitions themselves include a maturity level and may mature at a different pace than the content that refers to them. The AGWG will work with other taskforces and groups to harmonize terminology across documents as much as is possible.
shortened form of a word, phrase, or name where the abbreviation has not become part of the language
This includes initialisms, acronyms, and numeronyms.
Some companies have adopted what used to be an initialism as their company name. In these cases, the new name of the company is the letters (for example, Ecma) and the word is no longer considered an abbreviation.
group of user agents and assistive technologies you test with
The AGWG is considering defining a default set of user agents and assistive technologies that they use when validating guidelines.
Accessibility support sets may vary based on language, region, or situation.
If you are not using the default accessibility support set, the conformance report should indicate what set is being used.
supported in at least 2 major free browsers on every operating system and/or available in assistive technologies used cumulatively by 80% of the AT users on each operating system for each type of AT used
available for the user to read and use any actionable items included
formal claim of fact, attributed to a person or organization, regarding procedures practiced in the development and maintenance of the content or product to improve accessibility
hardware and/or software that acts as a user agent, or along with a mainstream user agent, to provide functionality to meet the requirements of users with disabilities that go beyond those offered by mainstream user agents
Functionality provided by assistive technology includes alternative presentations (e.g., as synthesized speech or magnified content), alternative input methods (e.g., voice), additional navigation or orientation mechanisms, and content transformations (e.g., to make tables more accessible).
Assistive technologies often communicate data and messages with mainstream user agents by using and monitoring APIs.
The distinction between mainstream user agents and assistive technologies is not absolute. Many mainstream user agents provide some features to assist individuals with disabilities. The basic difference is that mainstream user agents target broad and diverse audiences that usually include people with and without disabilities. Assistive technologies target narrowly defined populations of users with specific disabilities. The assistance provided by an assistive technology is more specific and appropriate to the needs of its target users. The mainstream user agent may provide important functionality to assistive technologies like retrieving web content from program objects or parsing markup into identifiable bundles.
the technology of sound reproduction
Audio can be created synthetically (including speech synthesis), recorded from real world sounds, or both.
narration added to the soundtrack to describe important visual details that cannot be understood from the main soundtrack alone
For audiovisual media, audio description provides information about actions, characters, scene changes, on-screen text, and other visual content.
Audio description is also sometimes called “video description”, “described video”, “visual description”, or “descriptive narration”.
In standard audio description, narration is added during existing pauses in dialogue. See also extended audio description.
If all important visual information is already provided in the main audio track, no additional audio description track is necessary.
evaluation conducted using software tools, typically evaluating code-level features and applying heuristics for other tests
Automated testing is contrasted with other types of testing that involve human judgement or experience. Semi-automated evaluation allows machines to guide humans to areas that need inspection. The emerging field of testing conducted via machine learning is not included in this definition.
switching back and forth between two visual states in a way that is meant to draw attention
See also flash. It is possible for something to be large enough and blink brightly enough at the right frequency to also be classified as a flash.
more than one sentence of text
control using a camera as a motion sensor to detect gestures of any type, for example “in the air” gestures
This does not include, for example, a static QR code image on a web page.
synchronized visual and/or text alternative for both the speech and non-speech audio portion of a work of audiovisual content
Closed captions are equivalents that can be turned on and off with some players and can often be read using assistive technology.
Open captions are any captions that cannot be turned off in the player, for example captions that are visual equivalent images of text embedded in the video.
Audio descriptions can be, but do not need to be, captioned since they are descriptions of information that is already presented visually.
In some countries, captions are called subtitles. The term ‘subtitles’ is often also used to refer to captions that present a translated version of the audio content.
keyboard navigation technique that is the same across most or all applications and platforms and can therefore be relied upon by users who need to navigate by keyboard alone
A sufficient listing of common keyboard navigation techniques for use by authors can be found in the WCAG common keyboard navigation techniques list.
any pointer input other than a single pointer input
grouping of elements for a distinct function
satisfying all the requirements of the guidelines. Conformance is an important part of following the guidelines even when not making a formal Conformance Claim
See the Conformance section for more information.
information and sensory experience to be communicated to the user by an interface, including code or markup that defines the content’s structure, presentation, and interactions
To be defined.
serving only an aesthetic purpose, providing no information, and having no functionality
Text is only purely decorative if the words can be rearranged or substituted without changing their purpose.
declare something outdated and in the process of being phased out, usually in favor of a specified replacement
Deprecated documents are no longer recommended for use and may cease to exist in the future.
a text version of the speech and non-speech audio information and visual information needed to understand the content.
platform event that occurs when the trigger stimulus of a pointer is depressed
The down event may have different names on different platforms, such as “touchstart” or “mousedown”.
exception because there is no way to carry out the function without doing it this way or fundamentally changing the functionality
process of examining content for conformance to these guidelines
Different approaches to evaluation include automated evaluation, semi-automated evaluation, human evaluation, and usability testing.
audio description added to audiovisual media by pausing the video to allow additional time for the description
This technique is only used when the sense of the video would be lost without the additional audio description and the pauses between dialogue or narration are too short.
title, brief explanation, or comment that accompanies a work of visual media and is always visible on the page
a pair of opposing changes in relative luminance that can cause seizures in some people if it is large enough and in the right frequency range
See general flash and red flash thresholds for information about types of flash that are not allowed.
See also blinking.
statement that describes a specific gap in one’s ability, or a specific mismatch between ability and the designed environment or context
a flash or rapidly-changing image sequence is below the threshold (i.e., content passes) if any of the following are true:
1. there are no more than three general flashes and/or no more than three red flashes within any one-second period; or
2. the combined area of flashes occurring concurrently occupies no more than a total of 0.006 steradians within any 10 degree visual field on the screen (25% of any 10 degree visual field on the screen) at typical viewing distance
where:
a general flash is defined as a pair of opposing changes in relative luminance of 10% or more of the maximum relative luminance where the relative luminance of the darker image is below 0.80, and where “a pair of opposing changes” is an increase followed by a decrease, or a decrease followed by an increase; and
a red flash is defined as any pair of opposing transitions involving a saturated red.
Exception: Flashing that is a fine, balanced pattern such as white noise or an alternating checkerboard pattern with “squares” smaller than 0.1 degree (of visual field at typical viewing distance) on a side does not violate the thresholds.
For general software or web content, using a 341 x 256 pixel rectangle anywhere on the displayed screen area when the content is viewed at 1024 x 768 pixels will provide a good estimate of a 10 degree visual field for standard screen sizes and viewing distances (e.g., 15-17 inch screen at 22-26 inches). This resolution of 75 - 85 ppi is lower, and thus more conservative, than the nominal CSS pixel resolution of 96 ppi in CSS specifications. Higher resolution displays showing the same rendering of the content yield smaller and safer images, so the lower resolutions are used to define the thresholds.
A transition is the change in relative luminance (or relative luminance/color for red flashing) between adjacent peaks and valleys in a plot of relative luminance (or relative luminance/color for red flashing) measurement against time. A flash consists of two opposing transitions.
The new working definition in the field for “pair of opposing transitions involving a saturated red” (from WCAG 2.2) is a pair of opposing transitions where one transition is either to or from a state with a value R/(R + G + B) that is greater than or equal to 0.8, and the difference between states is more than 0.2 (unitless) in the CIE 1976 UCS chromaticity diagram. [ISO_9241-391]
Tools are available that will carry out analysis from video screen capture. However, no tool is necessary to evaluate for this condition if flashing is less than or equal to 3 flashes in any one second. Content automatically passes (see #1 and #2 above).
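As a non-normative aid to understanding, the saturated-red portion of the definition above can be computed directly from the color values of the two states. The following TypeScript sketch checks only the R/(R + G + B) ratio test for 8-bit sRGB colors; the type and function names are illustrative, and a full evaluation would also need the CIE 1976 UCS chromaticity difference and the per-second flash counting described above.

// Non-normative sketch: the "saturated red" ratio test for two states given
// as 8-bit sRGB colors. This is only one part of the red flash definition.
interface RGB8 { r: number; g: number; b: number; } // 0-255 per channel

// R / (R + G + B); returns 0 for pure black to avoid division by zero.
function redRatio(c: RGB8): number {
  const sum = c.r + c.g + c.b;
  return sum === 0 ? 0 : c.r / sum;
}

// A transition involves a saturated red if either endpoint has a red ratio
// of at least 0.8. (The full definition additionally requires the difference
// between the states to exceed 0.2 in the CIE 1976 UCS chromaticity diagram.)
function involvesSaturatedRed(from: RGB8, to: RGB8): boolean {
  return redRatio(from) >= 0.8 || redRatio(to) >= 0.8;
}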
motion made by the body or a body part used to communicate to technology
high-level, plain-language outcome statements used to organize requirements
Guidelines provide high-level, plain-language outcome statements for managers, policy makers, individuals who are new to accessibility, and other individuals who need to understand the concepts but not dive into the technical details. They provide an easy-to-understand way of organizing and presenting the requirements so that non-experts can learn about and understand the concepts.
Each guideline includes a unique, descriptive name along with a high-level plain-language summary. Guidelines address functional needs on specific topics, such as contrast, forms, readability, and more.
Guidelines group related requirements and are technology-independent.
evaluation conducted by a human, typically to apply human judgement to tests that cannot be fully automatically evaluated
Human evaluation is contrasted with automated evaluation which is done entirely by machine, though it includes semi-automated evaluation which allows machines to guide humans to areas that need inspection. Human evaluation involves inspection of content features, in contrast with usability testing which directly tests the experience of users with content.
To be defined.
To be defined.
To be defined.
content provided for information purposes and not required for conformance. Also referred to as non-normative
element that responds to user input and has a distinct programmatically determinable name
In contrast to non-interactive elements, such as headings or paragraphs.
smallest testable unit for testing scope
point in the content where any keyboard actions would take effect
API (Application Programming Interface) where software gets “keystrokes” from
“Keystrokes” that are passed to the software from the keyboard interface may come from a wide variety of sources, including but not limited to a scanning program, sip-and-puff Morse code software, speech recognition software, AI of all sorts, and other keyboard substitutes or special keyboards.
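Because all of these input sources deliver their “keystrokes” through the same keyboard interface, content that responds to standard key events works for them automatically. The following TypeScript sketch is a minimal, non-normative illustration using the DOM keydown event; the element id is hypothetical.

// Non-normative sketch: keystrokes arrive through the keyboard interface
// whether they originate from a physical keyboard, an on-screen keyboard,
// speech recognition, a scanning program, or another keyboard substitute.
const customButton = document.getElementById('toggle-details'); // hypothetical id

if (customButton) {
  customButton.addEventListener('keydown', (event) => {
    if (event.key === 'Enter' || event.key === ' ') {
      event.preventDefault(); // stop the Space key from scrolling the page
      // Trigger the same behavior as a pointer click, so users working
      // through the keyboard interface get identical functionality.
      customButton.click();
    }
  });
}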
process or technique for achieving a result
The mechanism may be explicitly provided in the content, or may be relied upon to be provided by either the platform or by user agents, including assistive technologies.
The mechanism needs to meet all requirements for the conformance level claimed.
alternative formats, usually text, for audio, video, and audio-video content including captions, audio descriptions, and descriptive transcripts
detailed information, either technology-specific or technology-agnostic, on ways to meet the requirement as well as tests and scoring information
element that does not respond to user input and does not include sub-parts
If a paragraph includes a link, the text on either side of the link would be considered a static element, but not the paragraph as a whole.
Letters within text do not constitute a “smaller part”.
words or phrases used in a way that goes beyond their standard or dictionary meaning to express deeper, more complex ideas
This is also called figurative language.
To understand the content, users have to interpret the implied meaning behind the words, rather than just their literal or direct meaning.
Examples include metaphors, similes, idioms, hyperbole, and sarcasm.
content whose instructions are required for conformance
captions that are visual equivalent images of text that are embedded in video
Open captions are also known as burned-in, baked-on, or hard-coded captions. Open captions cannot be turned off and cannot be read using assistive technology.
non-embedded resource obtained from a single URI using HTTP plus any other resources that are used in the rendering or intended to be rendered together
Where a URI is available and represents a unique set of content, that would be the preferred conformance unit.
gesture that depends on the path of the pointer input and not just its endpoints
Path-based gestures include both time-dependent and non-time-dependent path-based gestures.
software, or collection of layers of software, that lies below the subject software, provides services to the subject software, and allows the subject software to be isolated from the hardware, drivers, and other software below
Platform software both makes it easier for subject software to run on different hardware, and provides the subject software with many services (e.g. functions, utilities, libraries) that make the subject software easier to write, keep updated, and work more uniformly with other subject software.
A particular software component might play the role of a platform in some situations and a client in others. For example, a browser is a platform for the content of the page, but it also relies on the operating system below it.
The platform is the context in which the product exists.
position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard can vary
The point of regard is almost always within the viewport, but it can exceed the spatial or temporal dimensions of the viewport. See rendered content for more information about viewport dimensions.
The point of regard can also refer to a particular moment in time for content that changes over time. For example, an audio-only presentation.
User agents can determine the point of regard in a number of ways, including based on viewport position in content, keyboard focus, and selection.
To be defined.
private and sensitive information
series of views or pages associated with user actions, where actions required to complete an activity are performed, often in a certain order, regardless of the technologies used or whether it spans different sites or domains
testing scope that is a combination of all items, views, and task flows that make up the web site, set of web pages, web app, etc.
The context for the product would be the platform.
meaning of the content and all its important attributes can be determined by software functionality that is accessibility supported
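As a non-normative illustration, software typically determines such meaning through accessibility APIs and the accessible name computation rather than by inspecting markup directly. The following TypeScript sketch is a deliberately simplified approximation of how a tool might look for a programmatically determinable name in the DOM; it is not the full ARIA accessible name computation, and the function name is ours.

// Deliberately simplified, non-normative sketch: look for a programmatically
// determinable name. Real tools rely on the accessible name computation
// exposed through platform accessibility APIs.
function roughAccessibleName(el: Element): string | null {
  const ariaLabel = el.getAttribute('aria-label');
  if (ariaLabel?.trim()) return ariaLabel.trim();

  const labelledby = el.getAttribute('aria-labelledby');
  if (labelledby) {
    const text = labelledby
      .split(/\s+/)
      .map(id => document.getElementById(id)?.textContent ?? '')
      .join(' ')
      .trim();
    if (text) return text;
  }

  const alt = el.getAttribute('alt'); // images and image inputs
  if (alt?.trim()) return alt.trim();

  const text = el.textContent?.trim(); // visible text content
  return text ? text : null; // null: no programmatically determinable name found
}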
static content on the page that gives the user the perception or feeling of motion
the relative brightness of any point in a colorspace, normalized to 0 for darkest black and 1 for lightest white
For the sRGB colorspace, the relative luminance of a color is defined as L = 0.2126 * R + 0.7152 * G + 0.0722 * B where R, G and B are defined as:
if RsRGB <= 0.04045 then R = RsRGB / 12.92, else R = ((RsRGB + 0.055) / 1.055) ^ 2.4
if GsRGB <= 0.04045 then G = GsRGB / 12.92, else G = ((GsRGB + 0.055) / 1.055) ^ 2.4
if BsRGB <= 0.04045 then B = BsRGB / 12.92, else B = ((BsRGB + 0.055) / 1.055) ^ 2.4
and RsRGB, GsRGB, and BsRGB are defined as:
RsRGB = R8bit / 255
GsRGB = G8bit / 255
BsRGB = B8bit / 255
The “^” character is the exponentiation operator. (Formula taken from [SRGB].)
Before May 2021 the value of 0.04045 in the definition was different (0.03928). It was taken from an older version of the specification and has been updated. It has no practical effect on the calculations in the context of these guidelines.
Almost all systems used today to view web content assume sRGB encoding. Unless it is known that another color space will be used to process and display the content, authors should evaluate using sRGB colorspace.
If dithering occurs after delivery, then the source color value is used. For colors that are dithered at the source, the average values of the colors that are dithered should be used (average R, average G, and average B).
Tools are available that automatically do the calculations when testing contrast and flash.
WCAG 2.2 contains a separate page giving the relative luminance definition using MathML to display the formulas. This will need to be addressed for inclusion in WCAG 3.
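As a non-normative convenience, the formula above translates directly into code. The following TypeScript sketch computes relative luminance from 8-bit sRGB channel values; the function names are illustrative.

// Non-normative sketch of the relative luminance formula for the sRGB colorspace.
// Input channels are 8-bit values (0-255); the result is in the range [0, 1].
function linearize(channel8bit: number): number {
  const c = channel8bit / 255; // RsRGB, GsRGB or BsRGB
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(r: number, g: number, b: number): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Example: relativeLuminance(255, 255, 255) is 1 (white);
// relativeLuminance(0, 0, 0) is 0 (black).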
result of practices that reduce or eliminate barriers that people with disabilities experience
self-contained portion of content that deals with one or more related topics or thoughts
A section may consist of one or more paragraphs and include graphics, tables, lists and sub-sections.
evaluation conducted using machines to guide humans to areas that need inspection
Semi-automated evaluation involves components of automated evaluation and human evaluation.
input event that involves only a single ‘click’ event or a ‘button down’ and ‘button up’ pair of events with no movement between
Examples of things that are not simple pointer actions include double clicks, dragging motions, any use of multipoint input or gestures, and the simultaneous use of a mouse and keyboard.
input modality that only targets a single point on the page/screen at a time – such as a mouse, single finger on a touch screen, or stylus
Single pointer interactions include clicks, double clicks, taps, dragging motions, and single-finger swipe gestures. In contrast, multipoint interactions involve the use of two or more pointers at the same time, such as two-finger interactions on a touchscreen, or the simultaneous use of a mouse and stylus.
input modality that only targets a single point on the view at a time – such as a mouse, single finger on a touch screen, or stylus
Single pointer interactions include clicks, double clicks, taps, dragging motions, and single-finger swipe gestures. In contrast, multipoint interactions involve the use of two or more pointers at the same time, such as two-finger interactions on a touchscreen, or the simultaneous use of a mouse and stylus.
Single pointer input is in contrast to multipoint input such as two, three or more fingers or pointers touching the surface, or gesturing in the air, at the same time.
Activation is usually by click or tap but can also be by programmatic simulation of a click or tap or other similar simple activation.
keyboard commands that are the same across most or all platforms and are relied upon by users who need to navigate by keyboard alone
A sufficient listing of common keyboard navigation techniques for use by authors can be found in the WCAG standard keyboard navigation techniques list.
testing scope that includes a series of views that support a specified user activity
mechanism to evaluate implementation of a method
sequence of characters that can be programmatically determined, where the sequence is expressing something in human language
text that is programmatically associated with non-text content or referred to from text that is programmatically associated with non-text content
platform event that occurs when the trigger stimulus of a pointer is released
The up event may have different names on different platforms, such as “touchend” or “mouseup”.
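The down event and the up event together make up a simple pointer activation. Completing the action on the up event lets users cancel an accidental press by moving the pointer off the control before releasing. The following TypeScript sketch is a minimal, non-normative DOM illustration of that pattern; the element id and the save() function are hypothetical.

// Non-normative sketch: the action completes on the up event ("pointerup"),
// not the down event ("pointerdown"), so an accidental press can be
// cancelled by moving the pointer off the control before releasing.
declare function save(): void; // hypothetical application function

const saveButton = document.getElementById('save'); // hypothetical element id
let pressed = false;

saveButton?.addEventListener('pointerdown', () => {
  pressed = true; // down event: register the press, but do not act yet
});

saveButton?.addEventListener('pointerleave', () => {
  pressed = false; // moving away before release cancels the press
});

saveButton?.addEventListener('pointerup', () => {
  if (pressed) {
    pressed = false;
    save(); // the action runs only on the up event
  }
});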
evaluation of the experience of users using a product or process by observation and feedback
software that retrieves and presents external content for users
end goal a user has when starting a process through digital means
text which the user can adjust
This could include, but is not limited to, changing:
the technology of moving or sequenced pictures or images
Video can be made up of animated or photographic images, or both.
content that is actively available in a viewport including that which can be scrolled or panned to, and any additional content that is included by expansion while leaving the rest of the content in the viewport actively available
A modal dialog box would constitute a new view because the other content in the viewport is no longer actively available.
object in which the platform presents content
The author has no control over the viewport and almost always does not know what is presented in a viewport (e.g., what is on screen) because it is provided by the platform. In browsers, the hardware platform is isolated from the content.
Content can be presented through one or more viewports. Viewports include windows, frames, loudspeakers, and virtual magnifying glasses. A viewport may contain another viewport. For example, nested frames. Interface components created by the user agent such as prompts, menus, and alerts are not viewports.
The content of this document has not matured enough to identify privacy considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact privacy.
The content of this document has not matured enough to identify security considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact security.
This section shows substantive changes made in WCAG 3.0 since the First Public Working Draft was published on 21 January 2021.
The full commit history to WCAG 3.0 and commit history to Silver is available.
Additional information about participation in the Accessibility Guidelines Working Group (AG WG) can be found on the Working Group home page.
Abi James, Abi Roper, Alastair Campbell, Alice Boxhall, Alina Vayntrub, Alistair Garrison, Amani Ali, Andrew Kirkpatrick, Andrew Somers, Andy Heath, Angela Hooker, Aparna Pasi, Ashley Firth, Avneesh Singh, Avon Kuo, Azlan Cuttilan, Ben Tillyer, Betsy Furler, Brooks Newton, Bruce Bailey, Bryan Trogdon, Caryn Pagel, Charles Hall, Charles Nevile, Chris Loiselle, Chris McMeeking, Christian Perera, Christy Owens, Chuck Adams, Cybele Sack, Daniel Bjorge, Daniel Henderson-Ede, Darryl Lehmann, David Fazio, David MacDonald, David Sloan, David Swallow, Dean Hamack, Detlev Fischer, DJ Chase, E.A. Draffan, Eleanor Loiacono, Filippo Zorzi, Francis Storr, Frankie Wolf, Frederick Boland, Garenne Bigby, Gez Lemon, Giacomo Petri, Glenda Sims, Graham Ritchie, Greg Lowney, Gregg Vanderheiden, Gundula Niemann, Hidde de Vries, Imelda Llanos, Jaeil Song, JaEun Jemma Ku, Jake Abma, Jan Jaap de Groot, Jan McSorley, Janina Sajka, Jaunita George, Jeanne Spellman, Jeff Kline, Jennifer Chadwick, Jennifer Delisi, Jennifer Strickland, Jennison Asuncion, Jill Power, Jim Allan, Joe Cronin, John Foliot, John Kirkwood, John McNabb, John Northup, John Rochford, John Toles, Jon Avila, Joshue O’Connor, Judy Brewer, Julie Rawe, Justine Pascalides, Karen Schriver, Katharina Herzog, Kathleen Wahlbin, Katie Haritos-Shea, Katy Brickley, Kelsey Collister, Kim Dirks, Kimberly McGee, Kimberly Patch, Laura Carlson, Laura Miller, Len Beasley, Léonie Watson, Lisa Seeman-Kestenbaum, Lori Oakley, Lori Samuels, Lucy Greco, Luis Garcia, Lyn Muldrow, Makoto Ueki, Marc Johlic, Marie Bergeron, Mark Tanner, Mary Ann Jawili, Mary Jo Mueller, Matt Garrish, Matthew King, Melanie Philipp, Melina Maria Möhnle, Michael Cooper, Michael Crabb, Michael Elledge, Michael Weiss, Michellanne Li, Michelle Lana, Mike Beganyi, Mike Crabb, Mike Gower, Nicaise Dogbo, Nicholas Trefonides, Nina Krauß, Omar Bonilla, Patrick H. Lauke, Paul Adam, Peter Korn, Peter McNally, Pietro Cirrincione, Poornima Badhan Subramanian, Rachael Bradley Montgomery, Rain Breaw Michaels, Ralph de Rooij, Rashmi Katakwar, Rebecca Monteleone, Rick Boardman, Roberto Scano, Ruoxi Ran, Ruth Spina, Ryan Hemphill, Sarah Horton, Sarah Pulis, Scott Hollier, Scott O’Hara, Shadi Abou-Zahra, Shannon Urban, Shari Butler, Shawn Henry, Shawn Lauriat, Shawn Thompson, Sheri Byrne-Haber, Shrirang Sahasrabudhe, Shwetank Dixit, Stacey Lumley, Stein Erik Skotkjerra, Stephen Repsher, Steve Faulkner, Steve Lee, Sukriti Chadha, Susi Pallero, Suzanne Taylor, sweta wakodkar, Takayuki Watanabe, Tananda Darling, Theo Hale, Thomas Logan, Thomas Westin, Tiffany Burtin, Tim Boland, Todd Libby, Todd Marquis Boutin, Victoria Clark, Wayne Dick, Wendy Chisholm, Wendy Reid, Wilco Fiers.
These researchers selected a Silver research question, did the research, and graciously allowed us to use the results.
WCAG Success Criteria Usability Study
Internet of Things (IoT) Education: Implications for Students with Disabilities
WCAG Use by UX Professionals
Web Accessibility Perceptions (Student project from Worcester Polytechnic Institute)
This publication has been funded in part with U.S. Federal funds from the U.S. Department of Health and Human Services, National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), initially under contract number ED-OSE-10-C-0067, then under contract number HHSP23301500054C, and now under contract number HHS75P00120P00168. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Health and Human Services or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.