This document describes features that quality assurance and web authoring tools may incorporate to support the evaluation of accessibility requirements, such as those defined by the Web Content Accessibility Guidelines (WCAG) 2.0. The main purpose of this document is to promote awareness of such features and to provide introductory guidance for tool developers on the kinds of features they could provide in future implementations of their tools. This list of features could also be used to help compare different types of evaluation tools, for example, during the procurement of such tools.
The features in scope of this document include capabilities to specify, manage, carry out and report the results from web accessibility evaluations. For example, some of the described features relate to crawling web sites, interacting with tool users to carry out semiautomated evaluation, or providing evaluation results in a machine-readable format. This document does not describe the evaluation of web content features, which is addressed by WCAG 2.0 and its supporting documents.
This document encourages the incorporation of accessibility evaluation features in all web authoring and quality assurance tools, and the continued development and creation of different types of web accessibility evaluation tools. The document neither prescribes nor prioritizes any particular accessibility evaluation feature or specific type of evaluation tools. It describes features that can be provided by tools that support fully-automated, semiautomated and manual web accessibility evaluation. Following this document can help tool developers to meet accessibility checking requirements defined by the Authoring Tool Accessibility Guidelines (ATAG).
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
This 24 July 2014 First Public Working Draft of Developers' Guide to Features of Web Accessibility Evaluation Tools is intended to be published and maintained as a W3C Working Group Note after review and refinement. It provides an initial outline of the format and approach taken in this document, in order to gather early feedback. Some features may be missing in this first iteration of the document. Suggestions for additional features to be listed in this document are welcome.
The Evaluation and Repair Tools Working Group (ERT WG) invites discussion and feedback on this document by web accessibility evaluation tool developers, web authoring and quality assurance tool developers, evaluators, researchers, and others with interest in web accessibility evaluation tools. In particular, ERT WG is looking for feedback on:
Publication as a First Public Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document has been produced by the Evaluation and Repair Tools Working Group (ERT WG), as part of the Web Accessibility Initiative (WAI) Technical Activity.
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
Designing, developing, monitoring, and managing a website typically involves a variety of tasks and people who use different types of tools. For example, a web developer might use an integrated development environment (IDE) to create templates for a content management system (CMS), while a web content author will typically use the content-editing facility provided by the CMS to create and edit the web pages. Ideally all these tools should provide features to support everyone involved throughout the process of evaluating accessibility. For example, an IDE could provide functionality to check document fragments so that the developer can test individual web page components during their development, and a CMS could provide functionality to customize the accessibility checks that are automatically carried out to help monitor the quality of the website. This document lists and describes the types of features that can be provided by tools to support accessibility evaluation in a variety of situations and contexts.
In the context of this document, an evaluation tool is a (web-based or non-web-based) software application that enables its users to evaluate web content according to specific quality criteria, such as web accessibility requirements. This includes but is not limited to the following (non-mutually-exclusive) types of tools:
Note that these terms are not mutually exclusive. A web accessibility evaluation tool is a particular type of web quality assurance tool. In other cases an evaluation tool could be considered to be a web authoring tool, for example, if it provides repair functionality that modifies the content. Also, a web quality assurance tool might not check for accessibility criteria but might provide other functionality, such as managing quality assurance processes and reporting evaluation results, which may be useful for web accessibility evaluation. This document refers to any of these tools collectively as evaluation tools.
The W3C Web Accessibility Initiative (WAI) provides a list of web accessibility evaluation tools that can be searched according to different criteria, such as the features listed in this document.
The features of an accessibility evaluation tool are presented in this section from different perspectives: the resource to be evaluated (i.e., web content and its linked resources, which enable its rendering in the user agent), the evaluation requirements, the reporting capabilities of the tool, and other tool usage characteristics such as integration into the user's development and editing workflow.
The list of accessibility evaluation features described below is not exhaustive. It may be neither possible nor desirable for a single tool to implement all of the listed features. For example, tools that are specifically designed to assist designers in creating web page layouts would likely not incorporate features for evaluating the code of web applications. As mentioned in the abstract, the features presented in this section are provided as a reference; this document does not prescribe any of them to developers of accessibility evaluation tools. Developers can use this list to identify features that are relevant to their tools and to plan their implementation. Others interested in acquiring and using evaluation tools can also use this document to learn about relevant features to look for.
This category includes features that help to retrieve and render different types of web content. Some tools retrieve the content to be analyzed from the file system or from a database; however, the majority do so over a network connection through the HTTP(S) protocol. This section focuses mostly on this latter scenario.
Due to the characteristics of the HTTP(S) protocol, rendering a web resource involves manipulating and storing many other components associated with it, such as request and response headers, session information, cookies, and authentication information. These associated components are also considered in the sections below.
The most common resource formats include:
This feature identifies which character encodings are supported by the evaluation tool. Web content can be transmitted using different character encodings, and the correct rendering of web resources for their evaluation depends upon the correct interpretation of these encodings. With HTML5 [HTML5] there is a push to use UTF-8 as the default encoding and to abandon legacy encodings. However, it is recommended that evaluation tools continue to support legacy encodings.
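As a minimal sketch (the function and parameter names are illustrative), a tool might decode a fetched resource using the charset declared in the Content-Type header, falling back to the HTML5 default of UTF-8:

```python
# Sketch: decode a response body using the charset declared in the
# Content-Type header, falling back to UTF-8 (names are illustrative).
def decode_body(body: bytes, content_type: str) -> str:
    charset = "utf-8"  # HTML5 default encoding
    for part in content_type.split(";"):
        part = part.strip()
        if part.lower().startswith("charset="):
            charset = part.split("=", 1)[1].strip().strip('"')
    try:
        return body.decode(charset)
    except (LookupError, UnicodeDecodeError):
        # Unknown or wrongly declared encoding: decode with replacement
        # characters rather than failing the whole evaluation.
        return body.decode("utf-8", errors="replace")
```

A real tool would also consider the encoding declared in a meta element or byte order mark, which can disagree with the HTTP header.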
More information about this topic can be found in the W3C Internationalization Activity [W3Ci18n].
Because the web is a multilingual and multicultural space in which information can be presented in different languages, evaluation tools should be able to handle them. From an accessibility standpoint, the language of the content is relevant to some accessibility criteria, such as readability. Therefore, tools should explicitly declare which content languages they support for such criteria.
Many web sites are generated dynamically by combining code templates with HTML snippets that are created by website editors. Evaluation tools may be integrated into Content Management Systems (CMS) and Integrated Development Environments (IDE) to test these snippets as developers and/or editors create them.
Usually this is implemented in evaluation tools by creating DOM document fragments [DOM] from these snippets. Evaluation tools may also filter the accessibility tests according to their relevance to the document fragment.
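A minimal sketch of such a fragment check, using Python's standard html.parser (the rule and the class names are illustrative, not a real tool's API):

```python
from html.parser import HTMLParser

# Sketch: run a single check (images need an alt attribute) against an
# HTML snippet, as a CMS or IDE plug-in might do for a document fragment.
class ImgAltCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.problems.append("img element without alt attribute")

def check_fragment(snippet: str) -> list:
    checker = ImgAltCheck()
    checker.feed(snippet)
    return checker.problems
```

Note that tests requiring document-wide context (e.g., heading order) would be filtered out for a fragment, as described above.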
In this scenario, these resources are often not transmitted via HTTP. For such cases, it is recommended to take into account the considerations of section 2.1.5.
As mentioned earlier (see section 2.1.1), from the accessibility standpoint it is very important to consider not only the static HTML source code of a web page, but how the web page is rendered with its associated resources (i.e., media, scripts, styles) and presented to the end user. Therefore, it is recommended that the evaluation process occurs on the rendered DOM, otherwise the tool will miss the complexity of the interface presented to the end-user. This is especially relevant for web and cloud applications, which are common on the web.
The rendering process should not be underestimated; in most cases it requires integrating the evaluation tool with a web browser engine. This integration can be achieved in different ways (see section 2.4.6).
Content negotiation is a characteristic of the HTTP(S) protocol that enables web servers to customize the representation of requested resources according to the demands of the client user agent. Because of this, identifying a resource on the web by a Uniform Resource Identifier (URI) alone may not be sufficient. From the accessibility perspective, this implies that one representation of a resource may present accessibility problems while another representation served under the same URI is fully accessible (for instance, when the page is requested in another language).
To support content negotiation, the testing tool customizes and stores the HTTP headers according to different criteria:
Content negotiation is supported by other mechanisms described below, such as cookies, authentication, and session information.
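As a sketch of how a tool might drive content negotiation, the following sets negotiation-related request headers with Python's standard urllib (the URL and header values are made up):

```python
import urllib.request

# Sketch: configure request headers so the server negotiates a specific
# representation (language, media type, user agent) of the same URI.
def negotiated_request(url: str, language: str, user_agent: str):
    return urllib.request.Request(url, headers={
        "Accept": "text/html,application/xhtml+xml",
        "Accept-Language": language,   # e.g. request the German variant
        "User-Agent": user_agent,      # e.g. emulate a particular browser
    })

req = negotiated_request("http://example.org/", "de", "EvaluationTool/1.0")
```

A tool with full content negotiation support would expose these values to the user and record them with the evaluation results, since they define which representation was actually tested.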
A cookie is a name-value pair that is stored by the user agent [HTTPCOOKIES]. Cookies contain information relevant to the website being rendered and often include authentication and session information exchanged between the client and the server which, as seen above, may be relevant for content negotiation. A tool that supports cookies may store the cookie information provided by the server in an HTTP response and reuse it in subsequent requests. It may also allow the user to manually set cookie information to be used with the HTTP requests.
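A minimal sketch, using Python's standard http.cookies module, of storing a Set-Cookie value from a response and serializing it for a follow-up request (the cookie data is made up):

```python
from http.cookies import SimpleCookie

# Sketch: store cookie data received in a Set-Cookie response header and
# reuse it when building the Cookie header of a subsequent request.
def cookie_header_for_next_request(set_cookie_value: str) -> str:
    jar = SimpleCookie()
    jar.load(set_cookie_value)  # parse name=value pairs and attributes
    # Serialize only the stored name=value pairs for the next request
    return "; ".join(f"{m.key}={m.value}" for m in jar.values())

header = cookie_header_for_next_request("sessionid=abc123; Path=/; HttpOnly")
```

A full implementation would also honor the Path, Domain, and expiry attributes when deciding which cookies to send; Python's http.cookiejar module provides that behavior.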
Websites may require authentication (e.g., HTTP authentication, OpenID, etc.) to control access to given parts of the website or to present customized content to authenticated users. A tool that supports authentication either allows users to provide their credentials beforehand, so that they are used when accessing protected resources, or prompts users to enter their credentials upon the server's request. The tool may also support the use of different credentials for different parts of a website.
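A sketch of the "credentials provided beforehand" approach for HTTP authentication, using Python's standard urllib handlers (the URL and credentials are made up):

```python
import urllib.request

# Sketch: register credentials up front so protected resources can be
# fetched without interactive prompts (URL and credentials are made up).
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "http://example.org/private/",
                          "evaluator", "s3cret")
opener = urllib.request.build_opener(
    urllib.request.HTTPBasicAuthHandler(password_mgr))
# opener.open("http://example.org/private/report.html") would now answer
# a 401 challenge automatically with the stored credentials.
```

Registering several password managers or URL prefixes is one way to support different credentials for different parts of a website, as mentioned above.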
Within HTTP, session information can be used for different purposes, such as implementing security mechanisms (e.g., storing login information, or logging a user out after a long period of inactivity) or tracking users' interaction paths. Session information can be stored in the user agent's local storage, in a session ID in the URL, or in a cookie, for example. An evaluation tool that supports session tracking should be able to handle these different scenarios.
Some evaluation tools incorporate a web crawler [WEBCRAWLER] that extracts hyperlinks from web resources. Many types of resources on the web contain hyperlinks; the misconception that only HTML documents contain links may lead to wrong results in the evaluation process.
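A minimal sketch of the link-extraction step of a crawler for HTML resources, using Python's standard library (a real crawler would also handle other resource formats, as noted above):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# Sketch: extract hyperlinks from an HTML document, resolving relative
# references against the base URL, as the first step of a crawl.
class LinkExtractor(HTMLParser):
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(urljoin(self.base_url, attrs["href"]))

def extract_links(base_url: str, html: str) -> list:
    extractor = LinkExtractor(base_url)
    extractor.feed(html)
    return extractor.links
```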
A web crawler is configured with a starting point and a set of options. The most common features (configuration capabilities) of a web crawler are:
This feature refers to the capability of an evaluation tool to select a subset of web pages within a website according to different criteria. These criteria correspond to different parameters: random selection, number of user visits, modification dates, type of content, pages with frequent user interaction (such as search forms or feedback forms), manual selection by the evaluation tool user, etc.
This feature is important for manual tests on large websites, where it is practically impossible to carry out manual accessibility tests on all web resources.
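A sketch of the random-selection criterion, with a fixed seed so that repeated evaluation rounds sample the same pages (names and sizes are illustrative):

```python
import random

# Sketch: draw a reproducible random sample of pages for manual review.
def sample_pages(pages: list, size: int, seed: int = 42) -> list:
    rng = random.Random(seed)  # fixed seed keeps evaluation rounds comparable
    size = min(size, len(pages))
    return sorted(rng.sample(pages, size))
```

Other criteria listed above (visits, modification dates) could be implemented as weighted or filtered variants of the same selection step.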
This category includes features targeted to the configuration of the tests to be performed.
Accessibility evaluation tools may offer the possibility to select a subset of evaluation tests, or even a single one. A typical example is testing against the different conformance levels (A, AA, or AAA) of the Web Content Accessibility Guidelines 2.0, or selecting individual tests for a single technique or common failure.
This feature should not be confused with tools that focus on testing a single characteristic of a web page, for example, a color contrast checker.
Accessibility evaluation tools carry out testing in different modes:
For example, a tool can automatically check for the presence of a language declaration (lang attribute in an html element), and it can check the presence of a text alternative (alt attribute) for an image (img element), but it cannot judge the adequacy of the text to describe the image.
(See: Evaluation and Report Language 1.0 [EARL10] and Authoring Tool Accessibility Guidelines 2.0 [ATAG20].)
Support for automatic testing varies significantly between tools. Evaluation tools may support their users when performing semiautomatic or manual tests. This support could be provided, for example, by highlighting, in the source code or in the rendered document, the areas that present accessibility problems or where human intervention is needed to evaluate the outcome of a test.
Tools may keep provenance information (i.e., which part of the report was automatically generated by the tool and which was manually modified). Few accessibility requirements can be tested fully automatically; thus, full accessibility conformance can only be ensured by supporting evaluation tool users in carrying out tests in manual and semiautomatic modes.
It is recommended that accessibility evaluation tools document which accessibility criteria (for instance, at the level of failures and techniques) are implemented, so that aggregation of results (see section 2.3.6) and conformance statements (see section 2.3.7) can be better justified.
This information could also indicate which of the implementations are fully automatic, semiautomatic or require a manual evaluation (see section 2.2.2).
Developers and quality assurance engineers sometimes need to implement their own tests. For that purpose, some tools define an API that helps developers create their own tests, addressing internal requirements within their organization.
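A sketch of what such an API could look like, using a registry of user-defined test functions (the API shape and test IDs are illustrative, not any real tool's interface):

```python
# Sketch of a plug-in style API for user-defined tests: each test is a
# function taking the page source and returning a list of problems.
# Names and test IDs are illustrative, not a real tool's API.
CUSTOM_TESTS = {}

def register_test(test_id):
    def wrap(fn):
        CUSTOM_TESTS[test_id] = fn
        return fn
    return wrap

@register_test("org.example.title-required")
def check_title(source: str):
    # An organization-specific rule: every page must have a title element.
    return [] if "<title>" in source.lower() else ["document has no title element"]

def run_custom_tests(source: str) -> dict:
    return {test_id: fn(source) for test_id, fn in CUSTOM_TESTS.items()}
```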
When evaluating the accessibility of websites and applications, it is sometimes convenient to create scripts that emulate user interaction (e.g., activating interface components by clicking with the mouse, swiping a touch screen, or using the keyboard), modifying the state of the current page or loading new resources.
There are tools that enable developers to write scripts that automate the emulation of the application's and the end users' behavior. There is an effort to standardize a common API for such tools; one of these APIs is the W3C WebDriver API [WebDriver].
More an awareness-raising component than a testing feature, this capability lets tool users experience how people with different disabilities experience the web. For instance, the tool could linearize the web page to recreate how a screen reader would present its content, or modify the colors of the page and its components to emulate certain color vision deficiencies.
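As a simple sketch of one such emulation, a tool might recompute each color of the page as its ITU-R BT.601 luma to approximate the absence of color perception; real color-deficiency simulation uses more elaborate models than this:

```python
# Sketch: approximate how a page's colors look without color perception
# (achromatopsia) by converting each RGB color to its ITU-R BT.601 luma.
def simulate_grayscale(rgb):
    r, g, b = rgb
    luma = round(0.299 * r + 0.587 * g + 0.114 * b)
    return (luma, luma, luma)
```

Applying this to every color on the page quickly reveals content that relies on color alone to convey information.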
This category includes features related to the ability of the tool to present, store, import, export, and compare testing results in different ways. In this document the term report must be interpreted in its widest sense: it could be a set of screens presenting different tables and graphics, a set of icons superimposed on the content displayed to the user indicating different types of errors and warnings, an HTML document or word processor document summarizing the evaluation results, etc.
These formats are normally not adequate for human consumption. They are used for storage in a database (see section 2.3.3) or exported so that other evaluation tools can parse and interpret their results. The most common reporting languages used are:
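One such language is the Evaluation and Report Language (EARL) [EARL10], which is RDF-based. As a simplified illustration (not a complete, conformant report), a single assertion might be serialized as JSON-LD:

```python
import json

# Sketch: serialize one test result in an EARL-flavored JSON-LD
# structure. The vocabulary terms follow EARL 1.0, but the shape here
# is a simplified illustration, not a complete conformant report.
def earl_assertion(subject_url, test_id, outcome):
    return {
        "@context": {"earl": "http://www.w3.org/ns/earl#"},
        "@type": "earl:Assertion",
        "earl:subject": {"@id": subject_url},
        "earl:test": {"@id": test_id},
        "earl:result": {
            "@type": "earl:TestResult",
            "earl:outcome": {"@id": f"earl:{outcome}"},
        },
    }

report = json.dumps(earl_assertion(
    "http://example.org/page.html",
    "http://www.w3.org/TR/WCAG20/#text-equiv", "failed"), indent=2)
```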
These are reports targeted to the tool users. They may be HTML or word processor documents with the test results (with tables, graphics, etc.) to be read out of the tool context, or they may be a set of application windows, which guide the tool user through the different evaluation results, presenting aggregated views when necessary (see section 2.3.6).
The implementation of monitoring features requires that the tool has a persistence layer (a database, for example) where results could be stored and retrieved at a later stage to compare different evaluation rounds.
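A minimal sketch of such a persistence layer using SQLite, storing results per evaluation round and comparing two rounds (the schema and data are illustrative):

```python
import sqlite3

# Sketch: a minimal persistence layer so results of different
# evaluation rounds can be stored and compared later.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE results
              (round INTEGER, page TEXT, test TEXT, outcome TEXT)""")
db.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", [
    (1, "/index.html", "img-alt", "failed"),
    (2, "/index.html", "img-alt", "passed"),
])
# Monitoring query: which outcomes changed between round 1 and round 2?
changed = db.execute("""
    SELECT a.page, a.test, a.outcome, b.outcome
    FROM results a JOIN results b
      ON a.page = b.page AND a.test = b.test
    WHERE a.round = 1 AND b.round = 2 AND a.outcome <> b.outcome
""").fetchall()
```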
There are cases where tool users want to filter, combine, or compare evaluation results with those of other tools (for instance, when tool A does not test a given problem but tool B does). Support for a common reporting language (see section 2.3.1) facilitates those tasks by allowing information to be imported. That functionality also permits the integration of the evaluation tool into other development and testing environments.
This feature allows the customization of the resulting report according to different criteria, such as the target audience, the type of results, the part of the site being analyzed, the type of content, etc. This feature may also allow the tool user to add additional comments in the report.
The presentation of evaluation results and their aggregation is influenced by different aspects:
Conformance statements are demanded by some users to quickly assess the status of their website. When issuing such conformance statements it is therefore necessary to address the different types of accessibility techniques (i.e., common failures, sufficient techniques, etc.) and aggregate results as described in the previous section.
As described in section 2.2.2, full accessibility conformance can only be verified when manual testing and/or semiautomatic checking has been carried out.
The majority of web developers have little or no knowledge of web accessibility. Together with their reporting capabilities, tools may provide additional information to support the correction of the accessibility problems detected. This information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. This feature may include, for example, a step-by-step wizard that guides the evaluator in correcting the problems found (some user interface mockups can be found in the document Implementing ATAG 2.0, A guide to understanding and implementing Authoring Tool Accessibility Guidelines 2.0). Automatic repair of accessibility problems is discouraged, as it may cause undesirable side effects.
If the evaluation tool is part of an authoring tool as described in the Authoring Tool Accessibility Guidelines 2.0 [ATAG20], then it can support the authoring tool to meet success criterion B.3.2.1.
This section includes characteristics that describe the integration of the tool into the development and editing workflow of the user, or that are targeted to the customization of different aspects of the tool depending on its audience, such as user interface language, user interface functionality, user interface accessibility, etc.
Accessibility evaluation tools present different interfaces that allow their integration into the standard development workflow of the user. Typical examples include:
Localization and internationalization are important to address worldwide markets. Tool users may not speak English, and it may be necessary to present the user interface (e.g., icons, text directionality, UI layout, units, etc.) and the reports customized to other languages and cultures. As pointed out earlier, more information about this topic can be found in the W3C Internationalization Activity [W3Ci18n] and in [I18N]. From the accessibility standpoint, it is recommended to use the authorized translations of the Web Content Accessibility Guidelines.
Typically, evaluation tools are targeted to web accessibility experts with a deep knowledge of the topic. However, there are also tools that allow the customization of the evaluation results or even the user interface functionality to other audiences like, for instance:
The availability of such characteristics should be declared explicitly and presented in a way suited to these target user groups.
Although there is an international effort to harmonize web accessibility standards, there are still minor differences in accessibility requirements in different countries. The tool should specify in its documentation which policy environments are supported. Most of the tools are focused on the implementation of the Web Content Accessibility Guidelines 2.0 [WCAG20], because it is the accessibility standard most commonly referenced in policies worldwide.
Accessibility evaluation teams may include people with disabilities. Therefore, it is important that the tool itself can be used with different assistive technologies and that it integrates with the accessibility APIs of the underlying operating system. In such cases, compliance with Part A of the Authoring Tool Accessibility Guidelines 2.0 becomes an important feature to support, both from the perspective of the user interface of the tool and of access to its results.
When producing evaluation reports to be read outside the tool itself (for instance, an HTML report to be read in a browser), it is important to ensure that they follow the accessibility recommendations of the Web Content Accessibility Guidelines 2.0 [WCAG20].
Accessibility evaluation tools present different architectures and run on different platforms. Typical platform examples are: desktop applications, browser add-ons, distributed enterprise applications (with client- and server-side components), etc. Additionally, some of them include a persistence layer in the form of a database to enable monitoring of results.
This section presents three examples of accessibility evaluation tools. They are provided for illustration purposes and do not represent existing products. Each subsection highlights some of the key features of the tool, and the table at the end of the chapter summarizes and complements these textual descriptions.
Tool A is a browser plug-in, which can perform a quick automatic accessibility evaluation on a rendered HTML page. The main features of the tool are:
Table 1 presents an overview of the matching features as described in section 2.
Tool B is a large-scale accessibility evaluation tool used to analyze web sites with large volumes of content. The main features of the tool are:
Table 1 presents an overview of the matching features as described in section 2.
Tool C is an accessibility evaluation tool for web-based mobile applications. The tool does not support native applications, but it provides a simulation environment based upon a virtual machine environment that emulates the accessibility API of some devices. The main features of the tool are:
Table 1 presents an overview of the matching features as described in section 2.
This section presents a tabular comparison of the tool features described previously. The tools are provided for illustration purposes and do not represent existing products.
| Category | Feature | Tool A | Tool B | Tool C |
| --- | --- | --- | --- | --- |
| Web content retrieval and rendering | Character encodings | ISO-8859-1, UTF-8, UTF-16 | ISO-8859-1, UTF-8 | ISO-8859-1, UTF-8 |
| | Content language | any language supported by these encodings: ISO-8859-1, UTF-8, UTF-16 | any language supported by these encodings: ISO-8859-1, UTF-8 | any language supported by these encodings: ISO-8859-1, UTF-8 |
| | DOM document fragments | no | no | no |
| | Static code evaluation vs. rendered DOM evaluation | rendered DOM (relies on browser capabilities) | rendered DOM (rendering engine) | rendered DOM (rendering engine) |
| | Content negotiation | relies on browser capabilities; not configurable | full support; configurable | relies on browser capabilities; not configurable |
| | Cookies | relies on browser capabilities; not configurable | full support; configurable | relies on browser capabilities; not configurable |
| | Authentication | relies on browser capabilities; not configurable | full support; configurable | relies on browser capabilities; not configurable |
| | Session tracking | relies on browser capabilities; not configurable | full support; configurable | relies on browser capabilities; not configurable |
| Testing functionality | Selection of evaluation tests | no | yes | no |
| | Test modes | only automatic | all | all |
| | Documenting implementation of accessibility requirements | no | yes | no |
| | Development of own tests and test extensions | no | no | no |
| | Emulating how people with disabilities experience the web | no | no | yes |
| Reporting and monitoring | Machine-readable reporting formats | EARL | EARL | none |
| | Human-readable reports | via UI icons | dashboard; HTML report | dashboard |
| | Persistence of results | no | yes | no |
| | Importing evaluation results | EARL | EARL, CSV | no |
| | Report customization | no | comments/results added by evaluator | no |
| | Error repair guidance | inline hints | in report | yes |
| Tool usage | Workflow integration | browser plug-in | stand-alone client+server application | stand-alone desktop application |
| | Localization and internationalization | en | en, de, fr, es, jp | en |
| | Functionality customization to different audiences | developers | developers, commissioners | developers |
| | Policy environments | WCAG 2.0 | WCAG 2.0, Section 508 (USA), BITV 2.0 (Germany) | WCAG 2.0 |
| | Tool accessibility | not accessible | accessible under Microsoft Windows | not accessible |
| | Platform | browser add-on | distributed enterprise application with an external database | desktop application |
The editors would like to thank the contributions from the Evaluation and Repair Tools Working Group (ERT WG), and especially from Yod Samuel Martín, Philip Ackermann, Evangelos Vlachogiannis, Christophe Strobbe, Emmanuelle Gutiérrez y Restrepo and Konstantinos Votis.
This publication was developed with support from the WAI-ACT project, co-funded by the ICT initiative under the European Commission's Seventh Framework Programme.