W3C Accessibility Guidelines (WCAG) 3.0 will provide a wide range of recommendations for making web content more accessible to users with disabilities. Following these guidelines will address many of the needs of users with blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. These guidelines address accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.
Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements and assertions to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement or assertion.
This specification is expected to be updated regularly to keep pace with changing technology by updating and adding methods, requirements, and guidelines to address new needs as technologies evolve. For entities that make formal claims of conformance to these guidelines, several levels of conformance are available to address the diverse nature of digital content and the type of testing that is performed.
See WCAG 3.0 Introduction for an introduction and links to WCAG technical and educational material.
This is an update to the W3C Accessibility Guidelines (WCAG) 3.0. It includes a restructuring of the guidelines and first draft decision trees for three Guidelines: Clear meaning, Image alternatives, and Keyboard focus appearance.
To comment, file an issue in the W3C wcag3 GitHub repository. The Working Group requests that public comments be filed as new issues, one issue per discrete comment. It is free to create a GitHub account to file issues. If filing issues in GitHub is not feasible, email public-agwg-comments@w3.org (comment archive). In-progress updates to the guidelines can be viewed in the public editors’ draft.
What’s new in this version of WCAG 3.0?
This draft includes an updated list of the potential Guidelines and Requirements that we are exploring. The list of Requirements is longer than the list of Success Criteria in WCAG 2.2. This is because:
The Requirements are grouped into the following sections:
The purpose of this update is to demonstrate a potential structure for guidelines and to indicate the current direction of the WCAG 3.0 conformance model. Please consider the following questions when reviewing this draft:
To provide feedback, please file a GitHub issue or email public-agwg-comments@w3.org (comment archive).
This specification presents a new model and guidelines to make web content and applications accessible to people with disabilities. The W3C Accessibility Guidelines (WCAG) 3.0 support a wide set of user needs, use new approaches to testing, and allow frequent maintenance of guidelines and related content to keep pace with accelerating technology change. WCAG 3.0 supports this evolution by focusing on the functional needs of users. These needs are then supported by guidelines written as outcome statements, requirements, assertions, and technology-specific methods to meet those needs.
WCAG 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [[WCAG22]] and previous versions, but does not deprecate WCAG 2. It will also incorporate some content from and partially extend User Agent Accessibility Guidelines 2.0 [[UAAG20]] and Authoring Tool Accessibility Guidelines 2.0 [[ATAG20]]. These earlier versions provided a flexible model that kept them relevant for over 15 years. However, changing technology and changing needs of people with disabilities have led to the need for a new model to address content accessibility more comprehensively and flexibly.
There are many differences between WCAG 2 and WCAG 3.0. The WCAG 3.0 guidelines address accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.
Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement.
Content that conforms to WCAG 2.2 levels A and AA is expected to meet most of the minimum conformance level of this new standard but, since WCAG 3.0 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance. Since the new standard will use a different conformance model, the Accessibility Guidelines Working Group expects that some organizations may wish to continue using WCAG 2, while others may wish to migrate to the new standard. For those that wish to migrate to the new standard, the Working Group will provide transition support materials, which may use mapping and other approaches to facilitate migration.
As part of the WCAG 3.0 drafting process, each normative section of this document is given a status. This status indicates how far along in development the section is, how ready it is for experimental adoption, and what kind of feedback the Accessibility Guidelines Working Group is looking for.
The following guidelines are being considered for WCAG 3.0. They are currently a list of topics which we expect to explore more thoroughly in future drafts. The list includes current WCAG 2 guidance and additional requirements. The list will change in future drafts.
Unless otherwise stated, requirements assume the content described is provided both visually and programmatically.
The individuals and organizations that use WCAG vary widely and include web designers and developers, policy makers, purchasing agents, teachers, and students. To meet the varying needs of this audience, several layers of guidance will be provided including guidelines written as outcome statements, requirements that can be tested, assertions, a rich collection of methods, resource links, and code samples.
The following list is an initial set of potential guidelines and requirements that the Working Group will be exploring. The goal is to guide the next phase of work. They should be considered drafts, not final content of WCAG 3.0.
Ordinarily, exploratory content includes editor's notes listing concerns and questions for each item. Because this Guidelines section is very early in the process of working on WCAG 3.0, this editor's note covers most of the content in this section. Unless otherwise noted, all items in the list are exploratory at this point. It is a list of all possible topics for consideration. Not all items listed will be included in the final version of WCAG 3.0.
The guidelines and requirements listed below came from analysis of user needs that the Working Group has been studying, examining, and researching. They have not been refined and do not include essential exceptions or methods. Some requirements may be best addressed by authoring tools or at the platform level. Many requirements need additional work to better define the scope and to ensure they apply correctly to multiple languages, cultures, and writing systems. We will address these questions as we further explore each requirement.
Additional Research
One goal of publishing this list is to identify gaps in current research and request assistance filling those gaps.
Editor's notes indicate the requirements within this list where the Working Group has not found enough research to fully validate the guidance and create methods to support it, or where additional work is needed to evaluate existing research. If you know of existing research or if you are interested in conducting research in this area, please file a GitHub issue or send email to public-agwg-comments@w3.org (comment archive).
Users have equivalent alternatives for images.
For each image:
Decorative image is programmatically hidden.
The role and importance of the image are programmatically indicated.
The image type (photo, icon, etc.) is indicated.
Auto-generated text descriptions are editable by the content creator.
Needs additional research
Text alternatives follow an organizational style guide.
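As an informal illustration of several of the image-related requirements above (decorative images programmatically hidden, informative images given text alternatives), the TypeScript sketch below shows one possible approach in HTML. The `img.decorative` class convention is a hypothetical authoring choice, not a WCAG 3.0 method.

```ts
// Illustrative sketch only; the "decorative" class is a hypothetical convention.

// Decorative images: hide them from assistive technology.
document.querySelectorAll<HTMLImageElement>("img.decorative").forEach((img) => {
  img.setAttribute("alt", "");               // empty alt marks the image as decorative
  img.setAttribute("role", "presentation");  // reinforces the decorative role
});

// Informative images: flag any that lack a text alternative so the content
// creator (or authoring tool) can supply or edit one.
Array.from(document.querySelectorAll<HTMLImageElement>("img"))
  .filter((img) => !img.classList.contains("decorative") && !img.hasAttribute("alt"))
  .forEach((img) => console.warn("Image lacks a text alternative:", img.src));
```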
Users have equivalent alternatives for media content.
Where there is visual content in media, there is an equivalent synchronized audio track.
Where there is audio content in media, there are equivalent synchronized captions.
A transcript is available whenever audio or visual alternatives are used.
Media that has the desired media alternatives (captions, audio descriptions, and descriptive transcripts) can be found.
Needs additional research
Equivalent audio alternatives are available in the preferred language.
Needs additional research
Media alternatives explain nonverbal cues, such as tone of voice, facial expressions, body gestures, or music with emotional meaning.
Needs additional research
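As a rough sketch of the synchronized-captions requirement above, the following TypeScript adds a WebVTT captions track to an existing HTML video element. The element selector, file name, and language are hypothetical placeholders.

```ts
// Illustrative only: attach synchronized captions to an existing <video> element.
// The selector and the "captions/lecture.en.vtt" file are hypothetical.
const video = document.querySelector<HTMLVideoElement>("video#lecture");
if (video) {
  const track = document.createElement("track");
  track.kind = "captions";
  track.src = "captions/lecture.en.vtt";  // WebVTT captions synchronized with the audio
  track.srclang = "en";
  track.label = "English captions";
  track.default = true;
  video.appendChild(track);
}
```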
Users have alternatives available for non-text, non-image content that conveys context or meaning.
Equivalent text alternatives are available for non-text, non-image content that conveys context or meaning.
Needs additional research
Users can view figure captions even when not focused on the figure.
Figure captions persist or can be made to persist even if the focus moves away.
Needs additional research
Users have content that does not rely on a single sense or perception.
Information conveyed by graphical elements does not rely on hue.
Needs additional research
Information conveyed with visual depth is also conveyed programmatically and/or through text.
Needs additional research
Information conveyed with sound is also conveyed programmatically and/or through text.
Information that is conveyed with spatial audio is also conveyed programmatically and/or through text.
Users can read visually rendered text.
The rendered text against its background meets a maximum contrast ratio test for its text appearance.
Needs additional research
The rendered text against its background meets a minimum contrast ratio test for its text appearance.
Needs additional research
The rendered text meets a minimum font size and weight.
Needs additional research
The rendered text does not use a decorative or cursive font face.
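WCAG 3.0 has not yet settled on the contrast metric behind the minimum and maximum contrast ratio tests above. Purely as a familiar reference point, the sketch below computes the WCAG 2.x contrast ratio from relative luminance; it is not a WCAG 3.0 test.

```ts
// WCAG 2.x contrast ratio (relative-luminance based), shown only as a reference;
// WCAG 3.0 may adopt a different contrast metric.
function channel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance(r: number, g: number, b: number): number {
  return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b);
}

function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const l1 = relativeLuminance(...fg);
  const l2 = relativeLuminance(...bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: dark gray text (#333333) on a white background is roughly 12.6:1.
console.log(contrastRatio([51, 51, 51], [255, 255, 255]).toFixed(1));
```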
Users can access text content and its meaning with text-to-speech tools.
Text content can be converted into speech.
Needs additional research
The human language of the view and content within the view is programmatically available.
Meaning conveyed by text appearance is programmatically available.
Needs additional research
For each item of ambiguous text, such as non-literal text, abbreviations and acronyms, ambiguous numbers, or text missing letters or diacritics:
Exception
Text is programmatically determinable.
Explain ambiguous text or provide an unambiguous alternative.
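One requirement above is that the human language of the view and of content within it be programmatically available. A minimal sketch of exposing this in HTML with lang attributes follows; the helper function and the French phrase are hypothetical examples.

```ts
// Illustrative: expose the human language of the view and of inline passages via
// the lang attribute, which text-to-speech tools can use to pick a voice.
document.documentElement.lang = "en"; // language of the whole view

// Hypothetical helper that wraps a foreign-language phrase so its language is
// programmatically determinable.
function foreignPhrase(text: string, lang: string): HTMLElement {
  const span = document.createElement("span");
  span.lang = lang;
  span.textContent = text;
  return span;
}

document.body.appendChild(foreignPhrase("Bonjour tout le monde", "fr"));
```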
Users are not required to navigate complex words or sentence structures in order to understand content.
The language and tone used is appropriate to the topic or subject matter.
Needs additional research
Content does not include double negatives to express a positive unless it is standard usage for that language or dialect.
The voice used is easiest to understand in context.
Needs additional research
Definitions for uncommon or new words are available.
Needs additional research
Sentences are concise, without unnecessary filler words and phrases.
The verb tense used is easiest to understand in context.
Needs additional research
Users can see which element has keyboard focus.
For each focusable item:
A custom focus indicator is used with sufficient size, change of contrast, adjacent contrast, distinct style and adjacency.
Focusable item uses the user agent default indicator.
@@
Focus indicators follow an organizational style guide.
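One possible way to provide a custom focus indicator along the lines described above is sketched below, which injects a :focus-visible style from TypeScript. The specific color, width, and offset are illustrative choices, not thresholds defined by WCAG 3.0.

```ts
// Illustrative custom focus indicator; the color, width, and offset are examples,
// not WCAG 3.0 thresholds.
const style = document.createElement("style");
style.textContent = `
  :focus-visible {
    outline: 3px solid #1a3e8c;  /* large, clearly contrasting outline */
    outline-offset: 2px;         /* separates the indicator from the component */
  }
`;
document.head.appendChild(style);
```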
Users can see the location of the pointer focus.
There is a visible indicator of pointer focus.
Users can determine where they are and move through content (including interactive elements) in a systematic and meaningful way regardless of input or movement method.
The focus does not move to a position outside the current viewport, unless a mechanism is available to return to the previous focus point.
A user can focus on a content “area,” such as a modal or pop-up, then resume their view of all content using a limited number of steps.
The keyboard focus moves sequentially through content in an order and way that preserves meaning and operability.
When the focus is moved by the content into a temporary change of view (e.g. a modal), the focus is restored to its previous location when returned from the temporary change of view.
The focus order does not include repetitive, hidden, or static elements.
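The focus-restoration requirement above is commonly met with a pattern like the following sketch, which remembers the previously focused element before a temporary change of view (such as a modal) and restores it afterwards. The function names and markup assumptions are hypothetical.

```ts
// Illustrative focus-restoration pattern for a temporary change of view (e.g. a modal).
// openModal/closeModal and the focusable-element selector are hypothetical.
let previouslyFocused: HTMLElement | null = null;

function openModal(modal: HTMLElement): void {
  previouslyFocused = document.activeElement as HTMLElement | null;
  modal.hidden = false;
  // Move keyboard focus into the modal so the change of view is perceivable.
  const first = modal.querySelector<HTMLElement>(
    "button, [href], input, select, textarea, [tabindex]"
  );
  first?.focus();
}

function closeModal(modal: HTMLElement): void {
  modal.hidden = true;
  // Restore focus to where the user was before the temporary change of view.
  previouslyFocused?.focus();
}
```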
Users have interactive components that behave as expected.
Interactive components with the same functionality behave consistently.
Interactive components with the same functionality have consistent labels.
Interactive components that have similar function and behavior have a consistent visual design.
Interactive components are visually and programmatically located in conventional locations.
Needs additional research
Interactive components follow established conventions.
Needs additional research
Conventional interactive components are used.
Interactive components retain their position unless a user changes the viewport or moves the component.
Users have information about interactive components that is identifiable and usable visually and using assistive technology.
Visual information required to identify user interface components and states meet a minimum contrast ratio test, except for inactive components or where the appearance of the component is determined by the user agent and not modified by the author.
Needs additional research
The importance of interactive components is indicated.
Needs additional research
Interactive components have visible labels that identify the purpose of the component.
Changes to interactive components’ names, roles, values or states are visually and programmatically indicated.
Interactive components are visually distinguishable without interaction from static content and include visual cues on how to use them.
Field constraints and conditions (required line length, date format, password format, etc.) are available.
Inputs have visible labels that identify the purpose of the input.
The programmatic name includes the visual label.
Accurate names, roles, values, and states are available for interactive components.
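A much-simplified sketch of checking that a control's programmatic name includes its visible label appears below; a full accessible-name computation (labels, aria-labelledby, and so on) is considerably more involved, so treat this as an approximation rather than a test.

```ts
// Simplified check that the programmatic name of a button includes its visible label.
// Real accessible-name computation is more involved than aria-label vs. text content.
function nameIncludesVisibleLabel(control: HTMLElement): boolean {
  const visibleLabel = (control.textContent ?? "").trim().toLowerCase();
  const programmaticName = (control.getAttribute("aria-label") ?? visibleLabel)
    .trim()
    .toLowerCase();
  return visibleLabel === "" || programmaticName.includes(visibleLabel);
}

document.querySelectorAll<HTMLElement>("button, [role='button']").forEach((control) => {
  if (!nameIncludesVisibleLabel(control)) {
    console.warn("Programmatic name does not include the visible label:", control);
  }
});
```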
Users can use different input techniques and combinations and switch between them.
Any input modality available on a platform can be used concurrently.
Users can dismiss additional content (triggered by hover) without moving the pointer, unless the additional content communicates an input error or does not obscure or replace other content.
Interactive components are available to all navigation and input methods.
Users are aware of changes to content or context.
Users are notified of changes and updates to content, regardless of the update speed.
Notification is provided when previously viewed content has changed.
Interactive components that can alter the order of content convey their purpose prior to activation, and convey their impact on content order when activated.
Components that trigger a 'change of context' are indicated, or the change of context can be reversed.
Users are not required to accurately position a pointer in order to view or operate content.
The combined target size and spacing to adjacent targets is at least 24x24 pixels.
The combined target size and spacing to adjacent targets is at least 48x48 pixels.
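A rough sketch of flagging rendered targets smaller than 24 by 24 CSS pixels follows. It ignores the spacing-to-adjacent-targets allowance in the requirement above, so it is stricter than the requirement as written and is only an approximation.

```ts
// Rough check for targets rendered smaller than 24x24 CSS pixels. It does not account
// for spacing to adjacent targets, so it is only an approximation.
const MIN_TARGET = 24;

document
  .querySelectorAll<HTMLElement>("a[href], button, input, [role='button']")
  .forEach((target) => {
    const rect = target.getBoundingClientRect();
    const visible = rect.width > 0 && rect.height > 0;
    if (visible && (rect.width < MIN_TARGET || rect.height < MIN_TARGET)) {
      console.warn(`Target smaller than ${MIN_TARGET}x${MIN_TARGET}px:`, target);
    }
  });
```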
Users can navigate and operate content using only the keyboard focus.
The number of input commands required to complete a task using the keyboard is similar to the number of input commands when using other input modalities.
Needs additional research
Authored keyboard commands do not conflict with platform commands or they can be remapped.
Keyboard interface interactions are consistent.
If the keyboard is non-hardware (such as a virtual keyboard), the keyboard input mode is indicated.
All functionality must be accessible through the keyboard, except when a task requires input based on the user's specific input action.
If keyboard focus can be moved to an interactive component, then the keyboard focus can be moved away from that component, or the component can be dismissed, with focus returning to the previous point.
The user is informed of non-standard authored keyboard commands.
Users are not required to use gestures or dragging to view or operate content.
Selecting an interactive component with a pointer sets the focus to that element.
Every function that can be operated by a pointer, can be operated by a single pointer input or a sequence of single pointer inputs without requiring certain timing.
Functionality which supports pointers is available to any pointer device supported by the platform.
The method of pointer cancellation is consistent.
Where specific pressures are used, they can be adjusted and/or disabled without loss of function.
Needs additional research
Where specific speeds are used, they can be adjusted and/or disabled without loss of function.
Needs additional research
Users are not required to move their bodies or devices to operate functionality.
All functionality that requires full or gross body movement may also be accomplished with a standard input device.
All functionality can be completed without reorienting or repositioning hardware devices.
Users know about and can correct mistakes.
Error notifications are programmatically associated with the error source so that users can access the error information while focused on the source of the error.
Errors are visually identifiable without relying on only text, only color, or only symbols.
Errors that can be automatically detected are identified and described to the user.
Error notifications persist until the user dismisses them or the error is resolved.
Error notifications are visually collocated with the source of the error within the viewport, or provide a link to the source of the error which, when activated, moves the viewport to the error.
Needs additional research
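One common way to satisfy the programmatic-association requirement above is sketched below using aria-invalid and aria-describedby; the element ids and message text are hypothetical, and other techniques are possible.

```ts
// Illustrative: associate an error message with the input that caused it so assistive
// technology can announce the error while focus is on the input. Ids are assumed to exist.
function showFieldError(input: HTMLInputElement, message: string): void {
  const errorId = `${input.id}-error`;
  let error = document.getElementById(errorId);
  if (!error) {
    error = document.createElement("p");
    error.id = errorId;
    input.insertAdjacentElement("afterend", error); // visually collocated with the source
  }
  error.textContent = `Error: ${message}`;          // identified by text, not color alone
  input.setAttribute("aria-invalid", "true");
  input.setAttribute("aria-describedby", errorId);  // programmatic association
}
```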
Users do not experience physical harm from content.
Audio shifting designed to create a perception of motion is avoided; or can be paused or prevented.
Needs additional research
Flashing or strobing beyond thresholds defined by safety standards are avoided; or can be paused or prevented.
Visual motion and pseudo-motion that lasts longer than 5 seconds is avoided; or can be paused or prevented.
Needs additional research
Visual motion and pseudo-motion triggered by interaction is avoided; or can be prevented, unless the animation is essential to the functionality or the information being conveyed.
Needs additional research
Users can determine relationships between content both visually and using assistive technologies.
The relationships between parts of the content are clearly indicated.
The starting point or home is visually and programmatically labeled.
Relationships that convey meaning between pieces of content are programmatically determinable. Note: Examples of relationships include items positioned next to each other, arranged in a hierarchy, or visually grouped.
Needs additional research
Sections are visually and programmatically distinguishable.
Needs additional research
Users have consistent and recognizable layouts available.
The relative order of content and interactions remain consistent throughout a workflow. Note: Relative order means that content can be added or removed, but repeated items are in the same order relative to each other.
Conventional layouts are available.
Information required to understand options is visually and programmatically associated with the options.
Related information is grouped together within a visual and programmatic structure.
Users can determine their location in content both visually and using assistive technologies.
The current location within the view, multi-step process, and product is visually and programmatically indicated.
Needs additional research
Context is provided that orients the user in a site or multi-step process.
Contextual information is provided to help the user orient within the product.
Users can understand and navigate through the content using structure.
Major sections of content contain well-structured, understandable visual and programmatic headings.
Content is organized into short sections of related content.
Needs additional research
The purpose of each section of the content is clearly indicated.
The number of concepts within a segment of text is minimized.
For text intended to inform the user, each paragraph of text begins with a topic sentence stating the aim or purpose.
Whitespace separates chunks of content.
Content has a title or high-level description.
Three or more items of related data are presented as bulleted or numbered lists.
Steps in a multi-step process are numbered.
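As an informal heuristic related to the heading requirements above, the sketch below flags skipped heading levels, which often indicate a poorly structured visual and programmatic outline. It is not a WCAG 3.0 test.

```ts
// Informal heuristic: flag skipped heading levels (e.g. an h2 followed by an h4),
// which often signal a poorly structured document outline.
const headings = Array.from(
  document.querySelectorAll<HTMLHeadingElement>("h1, h2, h3, h4, h5, h6")
);
let previousLevel = 0;

for (const heading of headings) {
  const level = Number(heading.tagName.charAt(1));
  if (previousLevel !== 0 && level > previousLevel + 1) {
    console.warn(`Heading level jumps from h${previousLevel} to h${level}:`, heading.textContent);
  }
  previousLevel = level;
}
```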
Users have consistent and alternative methods for navigation.
Navigation elements remain consistent across views within the product.
The product provides at least two ways of navigating and finding information (Search, Scan, Site Map, Menu Structure, Breadcrumbs, contextual links, etc.).
Navigation features are available regardless of screen size and magnification (responsive design).
Users can complete tasks without needing to memorize information or complete advanced cognitive tasks.
Automated input from user agents, third-party tools, or copy-and-paste is not prevented.
Processes, including authentication, can be completed without puzzles, calculations, or other cognitive tests (essential exceptions would apply).
Processes can be completed without memorizing and recalling information from previous stages of the process.
Needs additional research
Users have enough time to read and use content.
For each process with a time-limit, a mechanism exists to disable or extend the limit before the time-limit starts.
For each process with a time-limit, a mechanism exists to disable or extend the time-limit at timeout.
For each process with a time-limit, a mechanism exists to disable the limit.
Users can complete tasks without unnecessary steps.
Processes can be completed without being forced to read or understand unnecessary content.
Processes can be completed without entering unnecessary information.
Users do not encounter deception when completing tasks, unless essential to the task.
Interactive components are not deceptively designed.
Needs additional research
Process completion does not include exploitive behaviors.
Needs additional research
Processes can be completed without navigating misinformation or redirections.
Needs additional research
Preselected options are visible by default during process completion without additional interactions.
A mechanism is available to prevent fraudulent redirection or alert users they are exiting the site.
Needs additional research
Users do not have to reenter information or redo work.
In a multi-step process, the interface supports stepping backward and returning to the current point without data loss.
Information previously entered by or provided to the user that is required to be entered again in the same process is either auto-populated, or available for the user to select.
Data entry and other task completion processes allow saving and resuming from the current step in the task.
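A minimal sketch of the save-and-resume idea above, using sessionStorage so users do not have to re-enter information after an interruption, follows. The form id and storage key are hypothetical, and a real process would avoid persisting sensitive fields such as passwords or payment data.

```ts
// Minimal save-and-resume sketch using sessionStorage. The form id and storage key are
// hypothetical; sensitive fields (passwords, payment data) should not be persisted.
const form = document.querySelector<HTMLFormElement>("form#application");
const STORAGE_KEY = "application-draft";

function saveDraft(): void {
  if (!form) return;
  const data = Object.fromEntries(new FormData(form).entries());
  sessionStorage.setItem(STORAGE_KEY, JSON.stringify(data));
}

function restoreDraft(): void {
  if (!form) return;
  const saved = sessionStorage.getItem(STORAGE_KEY);
  if (!saved) return;
  for (const [name, value] of Object.entries(JSON.parse(saved) as Record<string, string>)) {
    const field = form.elements.namedItem(name);
    if (field instanceof HTMLInputElement || field instanceof HTMLTextAreaElement) {
      field.value = value;
    }
  }
}

form?.addEventListener("input", saveDraft); // save as the user types
restoreDraft();                             // restore any earlier draft on load
```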
Users understand how to complete tasks.
In a process, the interface indicates when user input or action is required to proceed to the next step.
Information needed to complete a multi-step process is provided at the start of the process, including:
The steps and instructions needed to complete a multi-step process are available.
Users can determine when content is provided by a third party.
The author or source of the primary content is visually and programmatically indicated.
Needs additional research
Third party content (AI, Advertising, etc.) is visually and programmatically indicated.
Needs additional research
Advertising and other third-party content that obscures the primary content can be moved or removed without interacting with the advertising or third-party content.
Needs additional research
Users’ safety, security or privacy are not decreased by accessibility measures.
The interface indicates when a user is entering an agreement or submitting data.
Needs additional research
Disability information is not disclosed to or used by third parties and algorithms (including AI).
Needs additional research
Prompts to hide and remove sensitive information from observers are available.
Needs additional research
Clear explanations of the risks and consequences of choices, including use, are stated.
Needs additional research
Users are not disadvantaged by algorithms.
Algorithms (including AI) used are not biased against people with disabilities.
Needs additional research
A mechanism is available to understand and control social media algorithms.
Needs additional research
Users have help available.
Help is labeled consistently and available in a consistent visual and programmatic location.
Needs additional research
Contextual help is available.
Conversational support allowing both text and verbal modes is available.
Help is available to understand and use data visualizations.
Needs additional research
When interfaces dramatically change (due to redesign), a mechanism to learn the new interface or revert to the older design is available.
Needs additional research
Help is adaptable and personalizable.
Needs additional research
Instructions and help do not rely on sensory characteristics.
Accessible support is available during data entry, task completion and search.
Needs additional research
Users have supplemental content available.
Text or visual alternatives are available for numerical concepts.
Visual illustrations, pictures, and images are available to help explain complex ideas, events, and processes.
Needs additional research
Users can provide feedback to authors.
A mechanism is available to provide feedback to authors.
Users can control text presentation.
Text and background colors can be customized.
Patterns, designs or images placed behind text are avoided or can be removed by the user.
When font size conveys visual meaning (such as headings), the text maintains its meaning and purpose when text is resized.
Users can change the text style (like font and size) and the layout (such as spacing and single column) to fit their needs.
Users can transform size and orientation of content presentation to make it viewable and usable.
Content orientation allows the user to read the language presented without changing head or body position.
Content can be viewed in multiple viewport sizes, orientations, and zoom levels -- without loss of content, functionality, meaningful relationships, and with scrolling only occurring in one direction.
Users can transform content to make it understandable.
Complex information or instructions for complex processes are available in multiple presentation formats.
Needs additional research
Role and priority of content is programmatically determinable.
Access to a plain-language summary, abstract, or executive summary is available.
Content can be transformed to make its purpose clearer.
Needs additional research
Users can control media and media alternatives.
The position and formatting of captions can be changed.
Audio can be turned off, while still playing the video, and without affecting the system sound.
Alternatives for audio include the ability to search and look up terms.
Needs additional research
Captions and audio descriptions can be turned on and off.
Media can be navigated by chapters.
Needs additional research
Users can control interruptions.
The timing and positioning of notifications and other interruptions can be changed, suppressed or saved, except interruptions involving an emergency.
Users can control potential sources of harm.
Warnings are available about content that may be emotionally disturbing, and the disturbing content can be hidden.
Needs additional research
Haptic feedback can be reduced or turned off.
Needs additional research
Warnings are available about triggering content, and the warnings and triggering content can be hidden.
Needs additional research
Overwhelming wordiness can be reduced or turned off.
Needs additional research
Visual stimulation from combinations of density, color, movement, etc. can be reduced or turned off.
Needs additional research
Users can control content settings from their user agents, including assistive technology.
Content can be controlled using assistive and adaptive technology.
Printing respects the user's content presentation preferences.
Needs additional research
User settings are honored.
Assistive technologies can access content and interactions when using mechanisms that convey alternative points of regard or focus (i.e. virtual cursor).
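As one example of honoring user settings, the sketch below consults the user's reduced-motion preference before starting a purely decorative animation; startDecorativeAnimation is a hypothetical function standing in for any non-essential animation.

```ts
// Illustrative: honor the user's reduced-motion setting before starting any
// non-essential animation. startDecorativeAnimation is a hypothetical placeholder.
const prefersReducedMotion = window.matchMedia("(prefers-reduced-motion: reduce)").matches;

function startDecorativeAnimation(): void {
  // ...a purely decorative animation would run here...
}

if (!prefersReducedMotion) {
  startDecorativeAnimation();
}
```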
You might want to make a claim that your content or product meets the WCAG 3.0 guidelines. If it does meet the guidelines, we call this “conformance”.
If you want to make a formal conformance claim, you must use the process described in this document. Conformance claims are not required and your content can conform to WCAG 3.0, even if you don’t want to make a claim.
There are two types of content in this document:
We are experimenting with different conformance approaches for WCAG 3.0. Once we have developed enough guidelines, we will test how well each works.
WCAG 3.0 will use a different conformance model than WCAG 2.2 in order to meet its requirements. Developing and vetting the conformance model is a large portion of the work AG needs to complete over the next few years.
AG is exploring a model based on Foundational Requirements, Supplemental Requirements, and Assertions.
The most basic level of conformance will require meeting all of the Foundational Requirements. This set will be somewhat comparable to WCAG 2.2 Level AA.
Higher levels of conformance will be defined and met using Supplemental Requirements and Assertions. AG will be exploring whether meeting the higher levels would work best based on points, percentages, or predefined sets of requirements (modules).
AG continues to explore other conformance concepts, including conformance levels, issue severity, adjectival ratings, and pre-assessment checks.
See Explainer for W3C Accessibility Guidelines (WCAG) 3.0 for more information.
The concept of "accessibility-supported" is to account for the variety of user agents and scenarios. How does an author know that a particular technique for meeting a guideline will work in practice with user agents that are used by real people?
The intent is for the responsibility of testing with user agents to vary depending on the level of conformance.
At the foundational level of conformance, authors can assume that the methods and techniques provided by WCAG 3.0 work. At higher levels of conformance, the author may need to test that a technique works, check that available user agents meet the requirement, or a combination of both.
This approach means the Working Group will ensure that the methods and techniques included have reasonably wide and international support from user agents, and that there are sufficient techniques to meet each requirement.
The intent is that WCAG 3.0 will use a content management system to support tagging of methods and techniques with support information. There should also be a process through which interested parties can provide information.
An "accessibility support set" is used at higher levels of conformance to define which user agents and assistive technologies you test with. It would be included in a conformance claim, and enables authors to use techniques that are not provided with WCAG 3.0.
An exception for long-present bugs in assistive technology is still under discussion.
When evaluating the accessibility of content, WCAG 3.0 requires that the guidelines apply to a specific scope. While the scope can be all content within a digital product, it is usually one or more sub-sets of the whole. Reasons for this include:
WCAG 3.0 therefore defines two ways to scope content: views and processes. Evaluation is done on one or more complete views or processes, and conformance is determined on the basis of one or more complete views or processes.
Conformance is defined only for processes and views. However, a conformance claim may be made to cover one process and view, a series of processes and views, or multiple related processes and views. All unique steps in a process MUST be represented in the set of views. Views outside of the process MAY also be included in the scope.
We recognize that representative sampling is an important strategy that large and complex sites use to assess accessibility. While it is not addressed within this document at this time, our intent is to later address it within this document or in a separate document before the guidelines reach the Candidate Recommendation stage. We welcome your suggestions and feedback about the best way to incorporate representative sampling in WCAG 3.0.
Many of the terms defined here have common meanings. When terms appear with a link to the definition, the meaning is as formally defined here. When terms appear without a link to the definition, their meaning is not explicitly related to the formal definition here. These definitions are in progress and may evolve as the document evolves.
This glossary includes terms used by content that has reached a maturity level of Developing or higher. The definitions themselves include a maturity level and may mature at a different pace than the content that refers to them. The AGWG will work with other task forces and groups to harmonize terminology across documents as much as possible.
To be defined.
The group of user agents and assistive technologies you test with.
The AG is considering defining a default set of user agents and assistive technologies to use when validating guidelines. Accessibility support sets may vary based on language, region, or situation. If you are not using the default accessibility support set, the conformance report should indicate which set is being used.
A formal claim of fact, attributed to a person or organization. An attributable and documented statement of fact regarding procedures practiced in the development and maintenance of the content or product to improve accessibility.
Evaluation conducted using software tools, typically evaluating code-level features and applying heuristics for other tests.
Automated testing is contrasted with other types of testing that involve human judgement or experience. Semi-automated evaluation allows machines to guide humans to areas that need inspection. The emerging field of testing conducted via machine learning is not included in this definition.
Satisfying all the requirements of the guidelines. Conformance is an important part of following the guidelines even when not making a formal Conformance Claim.
See Conformance.
To be defined.
To be defined.
To be defined.
To declare something outdated and in the process of being phased out, usually in favor of a specified replacement.
Deprecated documents are no longer recommended for use and may cease to exist in the future.
To be defined.
The process of examining content for conformance to these guidelines.
Different approaches to evaluation include automated evaluation, semi-automated evaluation, human evaluation, and user testing.
A statement that describes a specific gap in one’s ability, or a specific mismatch between ability and the designed environment or context.
High-level, plain-language outcome statements used to organize requirements.
Guidelines provide high-level, plain-language outcome statements for managers, policy makers, individuals who are new to accessibility, and other individuals who need to understand the concepts but not dive into the technical details. They provide an easy-to-understand way of organizing and presenting the requirements so that non-experts can learn about and understand the concepts. Each guideline includes a unique, descriptive name along with a high-level plain-language summary. Guidelines address functional needs on specific topics, such as contrast, forms, readability, and more. Guidelines group related requirements and are technology-independent.
Evaluation conducted by a human, typically to apply human judgement to tests that cannot be fully automatically evaluated.
Human evaluation is contrasted with automated evaluation which is done entirely by machine, though it includes semi-automated evaluation which allows machines to guide humans to areas that need inspection. Human evaluation involves inspection of content features, by contrast with user testing which directly tests the experience of users with content.
To be defined.
To be defined.
To be defined.
Content provided for information purposes and not required for conformance. Also referred to as non-normative.
To be defined.
The smallest testable unit for testing scope. Items could be interactive components such as a drop-down menu, a link, or a media player. They could also be units of content such as a phrase, a paragraph, a label or error message, an icon, or an image.
Detailed information, either technology-specific or technology-agnostic, on ways to meet the requirement as well as tests and scoring information.
Non-literal text uses words or phrases in a way that goes beyond their standard or dictionary meaning to express deeper, more complex ideas. This is also called figurative language. To understand it, users have to interpret the implied meaning behind the words, rather than just their literal or direct meaning.
Examples: Allusions, hyperbole, idioms, irony, jokes, litotes, metaphors, metonymies, onomatopoeias, oxymorons, personification, puns, sarcasm, and similes. More detailed examples are available in the Methods section.
Content whose instructions are required for conformance.
To be defined.
The position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard can vary. For example, it can be a point (e.g. a moment during an audio rendering or a cursor position in a graphical rendering), or a range of text (e.g. focused text), or a two-dimensional area (e.g. content rendered through a two-dimensional graphical viewport). The point of regard is almost always within the viewport, but it can exceed the spatial or temporal dimensions of the viewport (see the definition of rendered content for more information about viewport dimensions). The point of regard can also refer to a particular moment in time for content that changes over time (e.g. an audio-only presentation). User agents can determine the point of regard in a number of ways, including based on viewport position in content, keyboard focus, and selection.
A sequence of steps that need to be completed to accomplish an activity / task from end-to-end.
Testing scope that is a combination of all items, views, and task flows that comprise the web site, set of web pages, web app, etc.
To be defined.
Result of practices that reduce or eliminate barriers that people with disabilities experience.
Evaluation conducted using machines to guide humans to areas that need inspection.
Semi-automated evaluation involves components of automated evaluation and human evaluation.
To be defined.
Testing scope that includes a series of views that support a specified user activity. A task flow may include a subset of items in a view or a group of views. Only the parts of the views that support the user activity are included in a test of the task flow.
Mechanism to evaluate implementation of a method.
To be defined.
To be defined.
The end goal a user has when starting a process through digital means.
Evaluation of content by observation of how users with specific functional needs are able to complete a process and how the content meets the relevant requirements.
Testing scope that includes all content visually and programmatically available without a significant change. Conceptually, views correspond to the definition of a web page as used in WCAG 2, but are not restricted to content meeting that definition. For example, a view could be considered a “screen” in a mobile app or a layer of web content, such as a modal dialog.
The content of this document has not matured enough to identify privacy considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact privacy.
The content of this document has not matured enough to identify security considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact security.
This section shows substantive changes made to WCAG 3.0 since the First Public Working Draft was published on 21 January 2021.
The full commit history to WCAG 3.0 and commit history to Silver is available.
Additional information about participation in the Accessibility Guidelines Working Group (AG WG) can be found on the Working Group home page.