Web Technology Accessibility Guidelines advises creators of technical specifications on how to ensure their technology meets the needs of users with disabilities. It addresses primarily web content technologies but also relates to any technology that affects web content sent to users, including client-side APIs, transmission protocols, and interchange formats. Specifications that implement these guidelines make it possible for content authors and user agents to render content in a manner accessible to people with a wide range of abilities.
Status of This Document
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
Numerous guidelines exist for creating and supporting content that is accessible to people with disabilities, on and off the Web. When these guidelines are supported in the entire web ecosystem, content creators can author accessible content, and expect the accessibility features to be made available by user agents, including assistive technologies when needed. Authoring tools support creation of accessible content, and accessibility features survive transmission to different systems or conversion of content to different formats.
Nearly all of these accessibility features depend on support in some form from the technology in which content is encoded, transmitted, and sometimes transformed. But there is not yet a well-documented set of guidance for such technologies; instead, requirements are inferred from authoring and user agent guidelines. This makes it difficult for technology creators to ensure they have met the full set of needs. Review from accessibility specialists is limited by bandwidth and expertise, so it does not fully address the problem. As a result, different technologies provide different levels of support, with varying compatibility with one another. These issues at the core layers of Web technology limit the progress that can be made through support of higher-level guidelines.
Web Technology Accessibility Guidelines aims to fill this gap. It is intended to be a single, well-considered set of guidelines addressing specifically the features technologies need to provide to support accessibility. The goal of WTAG is to provide a single source of guidelines for Web technology accessibility. These guidelines relate to other guidelines and documentation, which provide additional information and rationale for the requirements, but they should not be confused with them: WTAG is intended to be a self-sufficient set of guidelines that technology creators can follow.
The primary audience of WTAG is creators of Web technologies. Most of the guidelines relate to content and presentation technologies like HTML, CSS, SVG, PDF, audio/video formats, etc. Some guidelines also address data formats, interchange formats, transmission protocols, etc., usually aimed at ensuring these technologies preserve the accessibility features of content impacted by these technologies. Because of this broad set of relevant technologies, all Web technology creators are considered part of the audience for WTAG.
Secondary audiences include creators of higher-level accessibility guidelines and other advocates for web accessibility features. Because WTAG has a strong grounding in user needs, researchers and advocates who identify the accessibility requirements of web users with disabilities are also an important audience.
Web Technology Accessibility Guidelines is a product of the World Wide Web Consortium and as such targets only the accessibility requirements of web technologies. Many of the user needs are the same for web use and non-web use, so WTAG will necessarily overlap, and hopefully be compatible, with similar guidelines addressing the non-web space. Nonetheless, WTAG is not designed to be used for non-web technologies, and there could be key differences. Furthermore, there are user needs that exist outside the web and do not impact it, and those needs are completely unaddressed by WTAG.
In spite of these caveats about the scope of WTAG, this scope will evolve as the Web does. More and more technologies are becoming part of the Web, and bringing user needs to the Web along with them. For instance, strictly hardware accessibility issues may be non-web requirements, but the Web of Things brings many of these issues closer to the Web than in the past. WTAG will need to reflect this evolution, and future versions may be required to address user needs that are new to the Web.
The goal of WTAG is to help ensure that web technologies meet the needs of users with disabilities. To do this, the work involves three stages:
Inventory user needs;
Identify ways to meet needs;
Develop technology guidelines.
1.4.1 Inventory user needs
The first step in the development of these guidelines is to inventory known user needs. Many user needs affecting web content accessibility are well known and documented in multiple places. These needs are collected and related to each other in order to arrive at a single set of known needs. Sources examined in the development of these guidelines include:
Note that the goal of this exercise is not to supplant other good work in this field. The aim is to assemble disparate sources of knowledge about user needs in one place, to facilitate analysis. This work is likely to spin off from the core work of developing WTAG. If another organization creates a sufficiently rich collection of documented user needs, it will be possible to use that resource rather than reinvent the work in W3C.
1.4.2 Identify ways to meet needs
The second stage in development of the guidelines is to identify ways these needs can be met. There are three high-level ways user needs can be met:
technology features;
author implementation;
user agent support.
These are not mutually exclusive categories. A given user need could be met by more than one of these categories, but the ability of a given category to meet a user need implies the need for guidelines targeting that category. In policy setting and evaluation there may be a preference hierarchy for how best to meet needs, e.g., user agent support of standard features is preferred, but author technical override is needed if user agent support is lacking.
Some needs can be met with present technology only via one of these routes. Other needs can be met by more than one route, and for content to be accessible it is only required that one of the available routes be implemented. Many needs, however, require more than one route to be implemented together for the need to be met. The most common example is that a technology provides a feature, the author uses that feature in the content, and the user agent makes the result available to the user.
All of these ways of meeting user needs are identified, along with their relationships to each other. Once these approaches are identified, the result is separate lists of requirements for content technologies, authors, and user agents. The relationship among the routes may play a role in prioritization of guidelines, since needs that can only be met by one route may be more important to meet by that route than needs that could also be met by other means.
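The relationships among routes described above can be pictured as a small model: a user need is met when at least one of its routes is fully implemented, where a route is a chain of approaches (a technology feature, the author using it, the user agent exposing it) that must all be in place together. The sketch below is purely illustrative; the route names are hypothetical and not part of the guidelines.

```python
# Illustrative sketch: a need is met when every approach in at least
# one of its routes is implemented. Route and approach names here are
# hypothetical examples, not identifiers from WTAG.

def need_is_met(routes, implemented):
    """True if every approach in at least one route is implemented."""
    return any(all(step in implemented for step in route) for route in routes)

# Example: text alternatives require the full technology/author/user-agent
# chain; an automated route needs only rich encoding plus UA generation.
text_alternative_routes = [
    ("tech:alt-feature", "author:writes-alt", "ua:exposes-alt"),
    ("tech:rich-encoding", "ua:generates-alt"),
]

implemented = {"tech:alt-feature", "author:writes-alt", "ua:exposes-alt"}
print(need_is_met(text_alternative_routes, implemented))  # True: first chain complete

implemented = {"tech:alt-feature", "author:writes-alt"}  # user agent support missing
print(need_is_met(text_alternative_routes, implemented))  # False: no chain complete
```

This mirrors the point above: a route that is the only way to meet a need makes every approach in that chain critical, while needs with several routes tolerate a gap in any one of them.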
1.4.3 Develop technology guidelines
From the above analyses, it should also be easy to see where content technology features are required to make it possible to meet user needs. For example:
If the author must implement something, the technology must provide a feature for the author to implement.
If the user need is met by design, the technology must provide suitably rich design capabilities.
If the user need is met by user agents, the technology must provide a sufficiently rich definition of the object for user agents to implement.
Not all technologies will address all ways of meeting user needs. For instance, CSS is primarily design-oriented, and HTML is somewhat semantics-oriented. The technology requirements may need conformance profiles or some other way of guiding technology developers seeking to follow them. It may not be easy to state in a general prescriptive way whether a given technology should, for instance, provide a richer design capability to meet a user need or should instead rely on better semantics for assistive technology-oriented content alternatives. A good structure of the technology requirements should help make it clear that some method of meeting a given user need is important. Horizontal review may continue to be important in guiding technology developers through the possibilities.
The set of approaches to meeting user needs that affects technology features becomes the base information for the Web Technology Accessibility Guidelines. (The other two routes, while important to the analysis, are not directly relevant to WTAG but may inform other work.) These approaches are prioritized, organized, and translated into guidelines-type language to become the Web Technology Accessibility Guidelines.
With the above analyses done, it should be easy to see which user needs current guidelines address. In turn, it should be easy to see where current guidelines do not meet user needs that in theory could be met by activities within the remit of that set of guidelines. This should be important input into WAI 3.0 / WAI 2020 planning.
1.5 Explanation of User Needs
Identify user needs that we plan to provide guidance on meeting. These should describe the needs of humans as they currently exist (i.e., without significant evolution or cyborgization from early 21st century norms), and are therefore as era-independent as possible with current knowledge. The focus is on the needs of people with disabilities, but because that is sometimes a relative / contextual condition, a significant proportion of mainstream needs will also be identified.
At least two levels of needs may be identified. The first is truly generic needs: requirements users have in order to access and use content, such as to perceive and understand it, which should be stable over time. The second layer is needs specific to the technologies of the day, such as the ability to understand and operate controls. This layer may be understood as an implementation of the generic needs, so it may not be classed as user needs in the end. Regardless of its classification, it will be an important component of understanding the space. This level of needs evolves as technology and design patterns do, so it needs to be maintainable separately from the generic needs.
1.5.1 Evolution of user needs
Where user needs are suspected but not known, related work may expand the inventory through research when feasible. Therefore the set of documented user needs will evolve over time. A given set of guidelines, including WTAG, however, can only address needs that were known at the time of development of the guidelines.
The user is accessing Web content on a hardware / OS / AT combination that supports their needs. This may not always be true in shared device / public kiosk situations, but that issue is out of scope.
1.5.3 Applicability of Needs
Controls (buttons, fields, etc.)
Input indicators (mouse, keyboard cursors)
Signals (non-actionable state indicators)
Alerts (dialogs, alarms, etc.)
2. Collected User Needs
This is a draft user needs collection. It may or may not prove desirable to prioritize needs in order to yield manageable sets. These needs will be cross-referenced to authoritative sources that also express the same requirement.
In the list below, there may be overlap between user needs, and known ways to meet user needs. These will need to be teased out over the course of the project.
Content perceivable in form other than produced by author
Alternate content can be provided
Content encoded in manner that permits machine transformation
Different alternate content for different signals
Alternate content findable
In same time and location as control
Enable direct perception of output for people with wide range of perception disabilities
Visual presentation perceivable across a range of vision impairments
Luminosity contrast sufficient
Hue contrast sufficient for certain color perception disorders
Pointers and cursors can be perceived
On-request location indicator
Auditory content perceivable via audio cues; sufficient quality (e.g., volume, direction, clarity, frequency) for audio feedback
Tactile indications perceivable (not web? But probably will be)
Distinct enough to accommodate reduced sensitivity
Adjustability of output
Visual presentation can be adjusted to meet needs of user
Text can be resized
Content format allows resize
Author provides resize feature
Resize of text does not break presentation
Layout resizes with text (zoom)
Layout self-adjusts to text resizes (reflow) (not fixed units)
Color scheme can be changed
Content format defines color style scheme that user or user agent can override
Author provides color switch options
Luminosity contrast can be changed
User agent exaggerates luminosity differences
Author provides color switch options
Typeface, font weight, font style can be changed
Line / word / letter spacing can be changed
Margins can be changed
Line length can be changed
Justification can be changed
Auditory content can be adjusted to meet needs of user
User can adjust volume level
Contrast between foreground and background audio is sufficient
Unnecessary background audio can be muted separately from the foreground audio
User can control time-based media
User can pause, stop, replay media
Send output to an alternate device, for better accessibility, or to manage security / privacy in public spaces (e.g., to prevent overhearing, or over-the-shoulder viewing of large print)
User can navigate content effectively
Useful navigation order [also relates to efficient usage]
Navigation supports different thinking styles
Meaning conveyed by style is perceivable: information presented through color is also presented in another way that does not rely on color
Author provides multiple redundant style cues
Meaning supported by programmatically determined semantics
Input is device independent
Input does not require pointing device
Support keyboard input with only visual feedback
Allow speech input
Input does not require speech input
Enable input for users with wide range of abilities
Do not require simultaneous actions
Do not require precision in movement
Allow input with different parameters for speed, acceleration, etc.
User can manage distractions
Autoplay of any moving content can be prevented
Blinking or movement can be easily stopped
Layout helps user find important content
Cursor blinking can be turned off
User can avoid personal risk
Flash can be prevented
Users can be reliably warned in advance about content with flash, and can avoid it
Unexpected flash can be immediately stopped
@@ triggering condition can be prevented
Users can be reliably warned in advance about content with the @@ triggering condition, and can avoid it
Unexpected @@ triggering condition can be immediately stopped
Important alerts can reach the user quickly
Users can discover content on the page
Headings identify regions of content
Regions of content are programmatically determined
User can skip to main content
Controls and their functions are programmatically determined
Controls have contrast with their surroundings
Location of controls is predictable
Support users with instructions (@@overlaps with understand, avoid confusion)
Instructions for accessible interaction
Users can understand content, navigation, and available interactions
Description of layout available (WCAG @@ Table Summary)
Description does not depend on specific sensory characteristics
Simple wording in instructions
Use standard icon / design / description conventions
Don’t require hierarchical thinking
Clear instructions for interaction
Users are not confused
Style does not imply perceived affordance of non-actionable features
Not confused with non-indicating tactile objects
Predictability / consistency of design
Objects don’t change their function
Minimize reliance on user short-term memory
Provide guidance for multi-step processes
Simplified interface available
Do not require specific physical characteristics (non-web? Handedness, having hands, etc.)
Alternative biometric identification
Avoid accidental activation of controls
Due to tremors etc.
Due to overly small and proximate controls
Due to design confusion
Recover from errors
Notification when error detected
Instructions on error recovery
Ability to correct error
Ability to redo task
Complete time-sensitive tasks
Ability to complete task in allotted time
Support ability to plan task execution strategy
Avoid distractions (see separate topic)
Ability to extend time
More time to read
More time to complete task
Support efficient usage
Change characteristics (speed, voice, pitch, accent) of audio alternatives
Provide accessibility preferences that take effect immediately
Preferences return to default state when switching users [not web?]
3. Meeting User Needs
This is a preliminary draft to document how user needs are met in various ways.
For each user need, ways to meet it are proposed for:
technology features;
author implementation;
user agent support.
Other categories may be included later. Many user needs can be met in more than one way. The mechanism to meet user needs in one of the above areas may require support from one or more of the other areas.
3.1 Ways to Meet User Needs
User needs need to be analyzed for how they can be met. The following ways of meeting needs are currently understood:
Author technical implementation
User agent accessibility support of standard features
User agent support of author-implemented accessibility features
Assistive technology support (including accessibility API mediation)
3.1.1 Author Implementation
3.1.2 User Agent Features
3.1.2.1 Accessibility Support in Mainstream User Agents
3.1.2.2 Assistive Technology
3.1.2.3 Accessibility APIs
3.2 Meeting User Needs Table
This version of the resource is primarily to show the structure, not yet a comprehensive documentation of how user needs can be met.
The table below shows how two of the user needs identified above might be met by technology features, author implementation, and user agent support. Each row of the table shows a related set of approaches, in which the approach in each column depends on successful implementation of the approaches in the other columns of that row. For instance, many author features depend on support from the technology as well as exposure by the user agent. Some approaches to meeting user needs do not require support from others, which is reflected by rows with blank columns. For instance, it is possible for a user agent to meet certain needs with no particular support provided by the technology or author. This layout is preliminary and a more expressive layout is sought.
User need: text alternatives for non-text content
Technology: Provide a mechanism for the author to create text alternatives and associate them with content
Author: Create text alternative content and associate it with primary content using features of the content technology
User agent: Expose text alternatives provided by the author

Technology: Define a parseable and semantically rich content encoding that supports automated creation of text alternatives
Author: Encode content using a content technology that is sufficiently rich that machines can create useful automated text alternatives
User agent: Create automated text alternative content based on the semantics of the primary content

User need: sufficient luminosity contrast
Technology: Provide color definition features that allow authors to set colors to meet requirements
Author: Use only colors that meet luminosity contrast guidelines

Technology: Provide color definition features that allow users to override author-set colors
User agent: Provide a feature for users to override author colors

Technology: Provide color definition semantics that allow colors of common object types to be globally remapped easily
Author: Use semantically defined color mappings to allow user global preferences to be easily applied
User agent: Support semantically defined color mappings to allow users to define global preferences that are easily applied across a range of content

User agent: Provide a feature to allow users to define their own color preferences

User agent: Provide a feature to allow users to request "high contrast" mode

User agent: Provide a "high contrast" mode that overrides author colors
4. Web Technology Accessibility Guidelines
The content below is merely initial draft content intended to show how guidelines aimed at web technology developers might look. It has not yet been related to the user needs and ways of meeting them outlined above. It serves as initial brainstorming to help demonstrate viability of this set of guidelines.
4.1 Alternative Content
Provide a way to explicitly mark content as not needing alternative content because it does not perform an important role.
Provide a way to explicitly indicate when author declined to provide alternative content.
Provide a way to explicitly indicate that authoring tool is unable to generate or obtain alternative content.
Provide a way to explicitly associate alternative content with the primary content.
Allow multiple types and instances of alternative content to be associated with primary content.
4.1.1 Text Alternatives
Provide a way to define short text alternatives / labels for non-text content.
Provide a way to define long text alternatives for non-text content.
Allow text alternatives to be semantically "rich" e.g., with page structure, text style, hyperlinks, etc.
4.1.2 Rich Alternatives
Provide a way to define non-text alternatives for text content.
Provide a way to define non-text alternatives for non-text content.
Allow users to override colors of text and user interface components.
Provide a feature for authors to define semantically available "color classes" that users can easily map to custom colors, and give preference to this vs. coloring objects individually.
Provide a feature for users to choose color schemata that work for them.
Ensure that the foreground and background color of an object can be reported to the user via AT.
Provide ways to set foreground and background colors separately for all objects.
Define well-specified compositing rules for foreground and background colors.
@@similar for patterns, gradients, etc.
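To make "meet luminosity contrast guidelines" concrete, the sketch below computes the contrast ratio between two sRGB colors. The relative-luminance and contrast-ratio formulas are the ones defined in WCAG 2.x; the code itself is only an illustration, not a normative part of these guidelines.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 ints."""
    def channel(c):
        c = c / 255.0
        # Linearize the gamma-encoded sRGB channel (WCAG 2.x formula).
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    """Contrast ratio between two colors, ranging from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0 (black on white)
```

A technology that exposes colors in a machine-readable way lets user agents and evaluation tools run exactly this kind of check, or remap colors that fail it.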
4.3 Device Independence
Define a conformance model sufficiently robust that implementations of all types can process content without variations due to different handling of conformance violations.
4.3.1 Universal meaning support
Provide declarative mechanisms (that have accessibility semantics pre-defined in the spec) to implement technology features whenever possible.
Define unambiguous ways to express relationships between units of content, such as object nesting, ID referencing, etc.
Prefer structural semantics to presentational semantics.
When providing presentational semantics, define ways they can be easily mapped to structural semantics, e.g., to support restyling or meaningful exposure to AAPIs.
Minimize the need for alternative content by supporting a comprehensive set of authoring use cases. (e.g., don't make authors resort to text in images to get the style they want)
4.3.2 Compatibility with AAPIs
For every user interface object type, define the "type" of object as a role to AAPIs.
For every user interface object type, define how authors provide or user agent determines the "accessible name" for AAPIs.
For user interface objects that can have states, properties, or values, define how authors can set these and how these are exposed to AAPIs.
When providing imperative mechanisms to implement technology features (e.g., scripts), provide a way for authors to expose accessibility information to AAPIs.
Provide a way to title Web pages and sections of content.
Provide a way to clearly indicate the target of a hyperlink and function of a control.
Provide a way to indicate content language, for the page as a whole and for blocks of content.
Provide a way for authors to support understanding of abbreviations / acronyms / initialisms, idioms, jargon, etc.
Provide a mechanism to support correct machine pronunciation of ambiguously spelled terms (e.g., in the phrase "I am content with this content" there are different correct pronunciations of the lexeme "content").
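The role / name / state / value requirements in this section can be pictured as a minimal data model: each user interface object carries the properties a user agent would hand to a platform accessibility API. The field and function names below are hypothetical illustrations, not taken from any particular AAPI.

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal sketch of the information a technology must define per UI
# object so a user agent can expose it to an accessibility API.
# Field names are illustrative, not from any specific AAPI.
@dataclass
class AccessibleObject:
    role: str                                  # the "type" of object, e.g. "checkbox"
    name: str                                  # the accessible name (label)
    states: set = field(default_factory=set)   # e.g. {"checked", "focused"}
    value: Optional[str] = None                # current value, where applicable

def expose(obj):
    """What a user agent might report to the platform accessibility API."""
    return {"role": obj.role, "name": obj.name,
            "states": sorted(obj.states), "value": obj.value}

checkbox = AccessibleObject(role="checkbox", name="Subscribe to updates",
                            states={"checked"})
print(expose(checkbox))
# {'role': 'checkbox', 'name': 'Subscribe to updates', 'states': ['checked'], 'value': None}
```

The guideline above about imperative mechanisms amounts to this: when authors build objects out of scripts rather than declarative features, the technology must still give them a way to populate these same fields.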
4.3.3 Hardware Interfaces
Abstract hardware interfaces so various device types can simulate each others' functions.
Always provide a keyboard interface to interact with content.
4.4 User Customization
Provide ways for display of content to be resized without loss of effective layout.
Define well-specified adaptive layout mechanisms that optimize for different display conditions (screen size, font size, lighting, noise, etc.).
Separate layout semantics from content so users or tools can easily create alternate layouts.
Provide ways for users to easily select font preferences (size, weight, family, specific typeface, etc.).
Provide mechanisms for users to easily apply custom display requirements that preserve the presentation of meaning, without knowledge of the idiosyncratic implementation of the content.
4.5 User Control
4.5.1 Blinking / Flashing
Define a mechanism to warn users of flashing content.
Define a mechanism to identify potentially flashing content to user agents so they can suppress it on user preference.
Define declarative mechanisms for features that could cause blinking or flashing so their parameters can be more easily controlled by user agents than imperative mechanisms.
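A user agent that can identify potentially flashing content could apply a simple threshold check before rendering it. The three-flashes-in-one-second threshold below echoes the general flash limit in WCAG 2.x; the sliding-window detection itself is an illustrative sketch, not a specified algorithm.

```python
def exceeds_flash_threshold(flash_times, max_flashes=3, window=1.0):
    """True if any window of `window` seconds contains more than
    `max_flashes` flashes. The three-per-second limit echoes WCAG 2.x;
    the detection approach here is an illustrative sketch."""
    times = sorted(flash_times)
    for i, start in enumerate(times):
        # Count flashes falling within [start, start + window].
        count = sum(1 for t in times[i:] if t - start <= window)
        if count > max_flashes:
            return True
    return False

print(exceeds_flash_threshold([0.0, 0.3, 0.6, 0.9]))  # True: 4 flashes within 1s
print(exceeds_flash_threshold([0.0, 0.5, 1.2, 1.8]))  # False: never more than 3/s
```

Declarative flash parameters, as the guideline suggests, are what make this kind of check possible without executing the content.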
4.5.2 Automatic actions
Provide a way for users to prevent time-based content from playing automatically, e.g., as a user agent preference that implementations must provide.
Provide a way for users to request no interruptions. Note, bona fide emergency interruptions should still be allowed.
Define that there will be no automatic action leading to a change of context when focus lands on a given element.
4.5.3 Time-based content
Provide a way for users to pause and stop time-based content.
Provide a way for users to start or restart time-based content.
Any object that can respond to user interaction must be capable of receiving keyboard focus and actuating from keyboard input.
Common objects defined by the technology should be specified to be focusable by default.
Provide a means for authors to make objects focusable explicitly.
Define a complete focus movement model, including cycling past the end, recovering from unexpectedly lost focus, etc., so the keyboard focus is always somewhere meaningful and findable.
Provide sensory and programmatic indication of focus location at all times.
If allowing authors to restyle focus indicators, provide a mechanism for users to override, particularly to "unhide" focus indicators.
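A complete focus movement model of the kind described above can be sketched in a few lines: focus wraps past either end of the focus order, and if focus is unexpectedly lost (for example, the focused node is removed), it recovers to a meaningful location rather than disappearing. The function and element names are hypothetical.

```python
def next_focus(order, current, direction=1):
    """Move keyboard focus through `order`, wrapping past either end.
    If focus was lost (current not in order), recover to the first
    focusable object rather than leaving focus nowhere. Illustrative sketch."""
    if current not in order:
        return order[0]            # recover from unexpectedly lost focus
    i = order.index(current)
    return order[(i + direction) % len(order)]

focusables = ["skip-link", "nav", "main", "submit"]
print(next_focus(focusables, "submit"))         # wraps forward to 'skip-link'
print(next_focus(focusables, "skip-link", -1))  # wraps backward to 'submit'
print(next_focus(focusables, "removed-node"))   # recovers to 'skip-link'
```

The point of specifying such a model in the technology itself is that every implementation handles wrapping and lost focus the same way, so keyboard users are never stranded.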
4.7 Audiovisual Content
Provide a mechanism to associate synchronized text tracks with audio or visual tracks (to support captions, subtitles).
Provide support for at least two audio tracks so volume of foreground and background audio can be set separately.
When providing extension mechanisms, ensure the tools exist to allow developers to provide the same level of accessibility in extensions that is possible in the root technology. This may involve exposing API methods or other properties that would otherwise not be needed.
Allow extensions to intersect so accessibility features in one extension can automatically work in others.
4.9 Alternate view modalities
Provide ways for linear (1-dimensional) presentations of content to be presented in a different subjective order than graphical (2-dimensional) presentations.
Provide ways for focus order to adapt to presentation order in order to remain meaningful.
@@support for haptic and other modalities?
Do not create time limitations on user interaction (e.g., actuate a default behavior if the user does not respond within a set window) unless such limitations are intrinsic to the functionality.
When creating time limitations on user interaction, provide mechanisms for users to be informed in advance of those limitations.
When creating time limitations on user interaction, provide a way for users to request additional time.
Provide an interface between content and servers so users can be informed of and request changes to time limitations of the server.
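The time-limit guidelines above combine three behaviors: a deadline, a warning before it expires, and a user-requested extension. The sketch below models them together; the class name, the 20-second warning threshold, and the extension policy are all hypothetical choices for illustration.

```python
import time

class ExtendableTimeLimit:
    """Sketch of a time limit users can extend before it expires.
    Names, the warning threshold, and the extension policy are illustrative."""
    def __init__(self, seconds, now=time.monotonic):
        self.now = now
        self.deadline = now() + seconds

    def remaining(self):
        return max(0.0, self.deadline - self.now())

    def warn_soon(self, threshold=20.0):
        """True when the user should be warned and offered more time."""
        return self.remaining() <= threshold

    def extend(self, seconds):
        """Grant additional time on user request."""
        self.deadline += seconds

# A simulated clock keeps the example deterministic.
clock = [0.0]
limit = ExtendableTimeLimit(30.0, now=lambda: clock[0])
clock[0] = 15.0
print(limit.warn_soon())   # True: 15s remain, under the 20s warning threshold
limit.extend(60.0)
print(limit.warn_soon())   # False: 75s remain after the extension
```

An interface between content and servers, as the last guideline suggests, would let the `extend` request propagate to any server-side session timeout as well.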
4.11 User input
Provide ways to indicate whether user input controls must have input ("required") or are not accepting input ("disabled").
Provide a way to associate information about input validation errors with the specific control(s) with input problems.
Provide a way for authors to provide context-sensitive help to users, and for users to access this via standard user agent features.
The following people contributed to the development of this document.
A.1 Participants in the PFWG at the time of publication
David Bolter (Mozilla)
Sally Cain (Royal National Institute of Blind People)
Michael Cooper (W3C/MIT)
James Craig (Apple Inc.)
Steve Faulkner (Invited Expert, The Paciello Group)
Geoff Freed (Invited Expert, NCAM)
Jon Gunderson (Invited Expert, UIUC)
Markus Gylling (DAISY Consortium)
Sean Hayes (Microsoft Corporation)
Kenny Johar (Vision Australia)
Matthew King (IBM Corporation)
Gez Lemon (International Webmasters Association / HTML Writers Guild (IWA-HWG))
Thomas Logan (HiSoftware Inc.)
William Loughborough (Invited Expert)
Shane McCarron (Invited Expert, Aptest)
Charles McCathieNevile (Opera Software)
Mary Jo Mueller (IBM Corporation)
James Nurthen (Oracle Corporation)
Joshue O'Connor (Invited Expert)
Artur Ortega (Yahoo!, Inc.)
Sarah Pulis (Media Access Australia)
Gregory Rosmaita (Invited Expert)
Janina Sajka (Invited Expert, The Linux Foundation)
Joseph Scheuhammer (Invited Expert, Inclusive Design Research Centre, OCAD University)
Stefan Schnabel (SAP AG)
Richard Schwerdtfeger (IBM Corporation)
Lisa Seeman (Invited Expert, Aqueous)
Cynthia Shelly (Microsoft Corporation)
Andi Snow-Weaver (IBM Corporation)
Gregg Vanderheiden (Invited Expert, Trace)
Léonie Watson (Invited Expert, Nomensa)
Gottfried Zimmermann (Invited Expert, Access Technologies Group)
A.2 Enabling funders
This publication has been funded in part with Federal funds from the U.S. Department of Education, National Institute on Disability and Rehabilitation Research (NIDRR) under contract number ED-OSE-10-C-0067. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.