Abstract

This document describes features that quality assurance and web authoring tools may incorporate to support the evaluation of accessibility requirements, such as those defined by the Web Content Accessibility Guidelines (WCAG) 2.0. The main purpose of this document is to promote awareness of such features and to provide introductory guidance for tool developers on the kinds of features they could provide in future implementations of their tools. This list of features could also be used to help compare different types of evaluation tools, for example, during the procurement of such tools.

The features in scope of this document include capabilities to specify, manage, carry out, and report the results of web accessibility evaluations. For example, some of the described features relate to crawling websites, interacting with tool users to carry out semiautomated evaluation, or providing evaluation results in a machine-readable format. This document does not describe the evaluation of web content features, which is addressed by WCAG 2.0 and its supporting documents.

This document encourages the incorporation of accessibility evaluation features in all web authoring and quality assurance tools, and the continued development and creation of different types of web accessibility evaluation tools. The document neither prescribes nor prioritizes any particular accessibility evaluation feature or specific type of evaluation tool. It describes features that can be provided by tools that support fully automated, semiautomated, and manual web accessibility evaluation. Following this document can help tool developers meet the accessibility checking requirements defined by the Authoring Tool Accessibility Guidelines (ATAG).

Status of this document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This 24 July 2014 First Public Working Draft of Developers' Guide to Features of Web Accessibility Evaluation Tools is intended to be published and maintained as a W3C Working Group Note after review and refinement. It provides an initial outline of the format and approach taken in this document, in order to gather early feedback. Some features may be missing from this first iteration of the document. Suggestions for additional features to be listed in this document are welcome.

The Evaluation and Repair Tools Working Group (ERT WG) invites discussion and feedback on this document by web accessibility evaluation tool developers, web authoring and quality assurance tool developers, evaluators, researchers, and others with interest in web accessibility evaluation tools. In particular, ERT WG is looking for feedback on:

Please send comments on this Developers' Guide to Features of Web Accessibility Evaluation Tools document by 15 August 2014 to public-wai-ert-tools@w3.org (publicly visible mailing list archive).

Publication as a First Public Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document has been produced by the Evaluation and Repair Tools Working Group (ERT WG), as part of the Web Accessibility Initiative (WAI) Technical Activity.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.


1 Introduction

Designing, developing, monitoring, and managing a website typically involves a variety of tasks and people who use different types of tools. For example, a web developer might use an integrated development environment (IDE) to create templates for a content management system (CMS), while a web content author will typically use the content-editing facility provided by the CMS to create and edit the web pages. Ideally all these tools should provide features to support everyone involved throughout the process of evaluating accessibility. For example, an IDE could provide functionality to check document fragments so that the developer can test individual web page components during their development, and a CMS could provide functionality to customize the accessibility checks that are automatically carried out to help monitor the quality of the website. This document lists and describes the types of features that can be provided by tools to support accessibility evaluation in a variety of situations and contexts.

1.1 Evaluation Tools

In the context of this document, an evaluation tool is a (web-based or non-web-based) software application that enables its users to evaluate web content according to specific quality criteria, such as web accessibility requirements. This includes but is not limited to the following (non-mutually-exclusive) types of tools:

Note that these terms are not mutually exclusive. A web accessibility evaluation tool is a particular type of web quality assurance tool. In some cases an evaluation tool could also be considered a web authoring tool, for example, if it provides repair functionality that modifies the content. Conversely, a web quality assurance tool might not check for accessibility criteria but might provide other functionality, such as managing quality assurance processes and reporting evaluation results, which may be useful for web accessibility evaluation. This document refers to any of these tools collectively as evaluation tools.

The W3C Web Accessibility Initiative (WAI) provides a list of web accessibility evaluation tools that can be searched according to different criteria, such as the features listed in this document.

2 Features of an accessibility evaluation tool

The features of an accessibility evaluation tool are presented in this section from different perspectives: the resource to be evaluated (i.e., web content and the linked resources that enable its rendering in the user agent), the evaluation requirements, the reporting customization capabilities of the tool, and other usage characteristics, such as integration into the development and editing workflow of the user.

The list of accessibility evaluation features described below is not exhaustive. It may be neither possible nor desirable for a single tool to implement all of the listed features. For example, tools that are specifically designed to assist designers in creating web page layouts would likely not incorporate features for evaluating the code of web applications. As mentioned in the abstract, the features presented in this section are provided as a reference; this document does not prescribe any of them to developers of accessibility evaluation tools. Developers can use this list to identify features that are relevant to their tools and to plan their implementation. Others interested in acquiring and using evaluation tools can also use this document to learn about relevant features to look for.

2.1 Retrieving and rendering web content

This category includes features that help to retrieve and render different types of web content. Some tools retrieve the content to be analyzed from the file system or from a database; however, most retrieve it over a network connection via the HTTP(S) protocol. This section focuses mostly on the latter scenario.

Due to the characteristics of the HTTP(S) protocol, rendering a web resource implies manipulating and storing many other components associated with it, such as request and response headers, session information, cookies, and authentication information. These associated components are also considered in the sections below.

2.1.1 Resource formats

Although the majority of web resources are HTML documents, there are many other types of resources that need to be considered when analyzing web accessibility. For example, resources like CSS style sheets or JavaScript scripts can modify markup documents in the user agent when they are loaded or via user interaction. Many accessibility tests result from the interpretation of those resources and their combinations, which are therefore important for an accessibility evaluation. Accessibility evaluation tools should state which resource formats they support.

The most common resource formats include:

2.1.2 Character encodings

This feature identifies which character encodings are supported by the evaluation tool. Web content can be transmitted using different character encodings and character sets, and the correct rendering of web resources for their evaluation depends upon the correct interpretation of these encodings. With HTML5 [HTML5] there is a push to use UTF-8 as the default encoding and to abandon legacy encodings. Nevertheless, it is recommended that evaluation tools continue to support legacy encodings.
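
For illustration, the following Python sketch shows one way a tool might resolve the encoding of a fetched resource, preferring the charset declared in the HTTP Content-Type header and falling back to content-based detection for legacy pages. It uses the third-party requests library; the helper name decode_response and the URL are hypothetical.

    import requests

    def decode_response(url):
        # Fetch a resource and decode it to text. resp.encoding is taken
        # from the Content-Type charset parameter (if any), while
        # resp.apparent_encoding is guessed from the raw bytes, which
        # helps with legacy pages that omit the declaration.
        resp = requests.get(url, timeout=10)
        encoding = resp.encoding or resp.apparent_encoding or "utf-8"
        return resp.content.decode(encoding, errors="replace")

    html = decode_response("http://example.org/legacy-page.html")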

More information about this topic can be found in the W3C Internationalization Activity [W3Ci18n].

2.1.3 Content language

Because the web is a multilingual and multicultural space in which information can be presented in different languages, evaluation tools should be able to handle them. From an accessibility standpoint, the language of the content is relevant to some accessibility criteria, such as readability. Tools should therefore explicitly declare which content languages they support for such criteria.

2.1.4 DOM document fragments

Many websites are generated dynamically by combining code templates with HTML snippets that are created by website editors. Evaluation tools may be integrated into Content Management Systems (CMS) and Integrated Development Environments (IDE) to test these snippets as developers and editors create them.

Usually this is implemented in the evaluation tools by creating DOM document fragments [DOM] from these snippets. Evaluation tools may also filter the accessibility tests according to their relevance to the document fragment.

In this scenario, the resources are frequently not transmitted via HTTP. For such cases, it is recommended to take into account the considerations of section 2.1.5.
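
As a minimal sketch of this feature, the following Python code parses an HTML snippet into a document fragment with the html5lib parser and runs a single fragment-relevant check (images lacking an alt attribute); the function name and the example snippet are hypothetical.

    import html5lib

    def check_fragment_images(snippet):
        # Parse the snippet as a fragment, not as a full document.
        fragment = html5lib.parseFragment(snippet, treebuilder="etree",
                                          namespaceHTMLElements=False)
        problems = []
        for img in fragment.iter("img"):
            if "alt" not in img.attrib:
                problems.append("img without alt attribute: "
                                + img.get("src", "(no src)"))
        return problems

    print(check_fragment_images('<p><img src="logo.png"></p>'))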

2.1.5 Static code evaluation vs. rendered DOM evaluation

As mentioned earlier (see section 2.1.1), from the accessibility standpoint it is very important to consider not only the static HTML source code of a web page, but also how the web page is rendered with its associated resources (i.e., media, scripts, styles) and presented to the end user. It is therefore recommended that the evaluation be carried out on the rendered DOM; otherwise the tool will miss the complexity of the interface presented to the end user. This is especially relevant for web and cloud applications, which are common on the web.

The rendering process should not be underestimated; most of the time it requires integrating the evaluation tool with a web browser engine. This integration can be achieved in different ways (see section 2.4.6).
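
The sketch below illustrates one such integration, assuming the Selenium Python bindings for the W3C WebDriver API (see section 2.2.5) and a locally installed headless Firefox; the URL is hypothetical. The check runs against the live, rendered DOM rather than the static source.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    options = webdriver.FirefoxOptions()
    options.add_argument("-headless")
    driver = webdriver.Firefox(options=options)
    try:
        driver.get("http://example.org/app/")
        # The rendered DOM includes markup injected by scripts at load
        # time, which a static source analysis would miss entirely.
        rendered = driver.execute_script(
            "return document.documentElement.outerHTML;")
        # Example check against the live DOM: images without alt text.
        missing_alt = driver.find_elements(By.CSS_SELECTOR, "img:not([alt])")
        print(len(missing_alt), "rendered img elements lack alt text")
    finally:
        driver.quit()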

2.1.6 Content negotiation

Content negotiation is a characteristic of the HTTP(S) protocol that enables web servers to customize the representation of a requested resource according to the demands of the client user agent. Because of this, identifying a resource on the web by its Uniform Resource Identifier (URI) alone may not be sufficient. From the accessibility perspective, this implies that one representation of a resource may present accessibility problems while another representation served under the same URI may be fully accessible (for instance, when the page is requested in another language).

To support content negotiation, an evaluation tool customizes and stores the HTTP headers according to different criteria:

Content negotiation interacts with other elements described below, such as cookies, authentication, and session information.
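
For example, a tool might request several representations of the same URI by varying the Accept-Language header, evaluating each one separately. A minimal sketch using the Python requests library follows; the URL is hypothetical.

    import requests

    URL = "http://example.org/"

    # Each negotiated representation may need its own evaluation:
    # the German page could be accessible while the English one is not.
    for lang in ("en", "de"):
        resp = requests.get(URL, headers={"Accept-Language": lang})
        print(lang, resp.headers.get("Content-Language"), len(resp.text))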

2.1.7 Cookies

A cookie is a name-value pair that is stored by the user agent [HTTPCOOKIES]. Cookies contain information relevant to the website being rendered and often include authentication and session information exchanged between the client and the server, which, as noted above, may be relevant for content negotiation. A tool that supports cookies may store the cookie information provided by the server in an HTTP response and reuse it in subsequent requests. It may also allow the user to manually set cookie information to be used with the HTTP requests.
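
A minimal sketch of both behaviors, using a requests session whose cookie jar stores server-set cookies and replays them on subsequent requests (the URLs and the cookie name are hypothetical):

    import requests

    session = requests.Session()
    # Cookies set by the server (e.g., a session ID) are kept in the
    # session's cookie jar and sent automatically with later requests.
    session.get("http://example.org/start")
    # The tool user may also set cookie values manually beforehand.
    session.cookies.set("locale", "en")
    page = session.get("http://example.org/members/welcome")

The same mechanism keeps a server-side session alive across an evaluation run, which is also the basis for the session tracking described in section 2.1.9.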

2.1.8 Authentication

Websites may require authentication (e.g., HTTP authentication, OpenID, etc.) to control access to given parts of the website or to present customized content to authenticated users. A tool that supports authentication either allows users to provide their credentials beforehand, so that they are used when accessing protected resources, or prompts users to enter their credentials upon the server's request. The tool may also support the use of different credentials for different parts of a website.
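
The following sketch shows credentials supplied beforehand and mapped to different parts of a website, here using HTTP Basic authentication via the requests library; the URL prefixes and credentials are hypothetical.

    import requests

    session = requests.Session()
    # Different credentials for different protected areas of the site.
    CREDENTIALS = {
        "http://example.org/admin/": ("admin-user", "admin-secret"),
        "http://example.org/intranet/": ("staff-user", "staff-secret"),
    }

    def fetch(url):
        for prefix, auth in CREDENTIALS.items():
            if url.startswith(prefix):
                return session.get(url, auth=auth)
        return session.get(url)  # unprotected resource

    page = fetch("http://example.org/intranet/reports.html")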

2.1.9 Session tracking

Within HTTP, session information can be used for different purposes, such as implementing security mechanisms (for example, logging a user out after a long period of inactivity) or tracking the interaction paths of users. Session information can be stored in the user agent's local storage, in a session ID in the URL, or in a cookie, for example. An evaluation tool that supports session tracking should be able to handle these different scenarios.

2.1.10 Crawling

Some evaluation tools incorporate a web crawler [WEBCRAWLER] that extracts hyperlinks from web resources. Many types of resources on the web contain hyperlinks; the misconception that only HTML documents contain links may lead to wrong results in the evaluation process.

A web crawler is given a starting point and a set of options. The most common configuration capabilities of a web crawler are:
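
A deliberately minimal breadth-first crawler is sketched below, using requests and the BeautifulSoup HTML parser; it only follows a elements within the starting host and ignores robots.txt, rate limiting, and links in non-HTML formats, all of which a production crawler would need to handle.

    import urllib.parse

    import requests
    from bs4 import BeautifulSoup

    def crawl(start_url, max_pages=50):
        host = urllib.parse.urlparse(start_url).netloc
        queue, seen = [start_url], {start_url}
        while queue and len(seen) <= max_pages:
            url = queue.pop(0)
            resp = requests.get(url, timeout=10)
            if "html" not in resp.headers.get("Content-Type", ""):
                continue  # hyperlinks in other formats are skipped here
            soup = BeautifulSoup(resp.text, "html.parser")
            for a in soup.find_all("a", href=True):
                link = urllib.parse.urljoin(url, a["href"]).split("#")[0]
                if urllib.parse.urlparse(link).netloc == host \
                        and link not in seen:
                    seen.add(link)
                    queue.append(link)
        return seen

    pages = crawl("http://example.org/")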

2.1.11 Sampling

This feature refers to the capability of an evaluation tool to select a subset of web pages within a website according to different criteria, such as random selection, page visit statistics, modification dates, type of content, pages with frequent user interaction (such as search forms or feedback forms), or manual selection by the evaluation tool user.

This feature is important for manual tests on large websites, where it is practically impossible to carry out manual accessibility tests on all of their web resources.
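
A simple sketch of one sampling strategy, combining random selection with pages the evaluator always wants included (all names are hypothetical):

    import random

    def sample_pages(urls, size=20, always_include=()):
        # Force manually selected pages (e.g., search and feedback
        # forms) into the sample, then fill up at random.
        pool = [u for u in urls if u not in always_include]
        k = max(0, min(size - len(always_include), len(pool)))
        return list(always_include) + random.sample(pool, k)

    site = ["http://example.org/page%d.html" % i for i in range(500)]
    subset = sample_pages(site, size=20,
                          always_include=["http://example.org/search.html"])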

2.2 Testing functionality

This category includes features targeted to the configuration of the tests to be performed.

2.2.1 Selection of evaluation tests

Accessibility evaluation tools may offer the possibility to select a given subset of the evaluation tests, or even a single one. Typical examples are performing the tests for a given conformance level (A, AA, or AAA) of the Web Content Accessibility Guidelines 2.0, or selecting the individual tests for a single technique or common failure.

This feature should not be confused with the fact that some tools focus on testing a single characteristic of a web page, such as color contrast.
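
A sketch of how a tool might filter its test catalogue by WCAG 2.0 conformance level; the catalogue entries are hypothetical, with identifiers borrowed from WCAG 2.0 techniques.

    # Each test records the conformance level of the success criterion
    # it belongs to.
    TESTS = [
        {"id": "H37", "description": "img elements have an alt attribute", "level": "A"},
        {"id": "G18", "description": "text contrast ratio of at least 4.5:1", "level": "AA"},
        {"id": "G17", "description": "text contrast ratio of at least 7:1", "level": "AAA"},
    ]

    def select_tests(target_level="AA"):
        order = {"A": 1, "AA": 2, "AAA": 3}
        return [t for t in TESTS if order[t["level"]] <= order[target_level]]

    for test in select_tests("AA"):  # excludes the AAA-only test
        print(test["id"], test["description"])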

2.2.2 Test modes

Accessibility evaluation tools carry out testing in different modes:

Automatic
where the test was carried out automatically by the software tool without any human intervention. For example, the tool could detect the absence of a lang attribute on an html element.
Manual
where the test was carried out by human evaluators. This includes the case where the evaluators are aided by instructions or guidance provided by software tools, but where the evaluators carry out the actual test procedure. For example, a tool may be unable to detect natural language changes and therefore asks the user to perform that test on the page.
Semiautomatic
where the test was partially carried out by software tools, but human input or judgment was still required to decide or help decide the outcome of the test. For example, the tool could detect the presence of an alternative text (alt attribute) for an image (img element), but it cannot judge whether the text adequately describes the image.

(See: Evaluation and Report Language 1.0 [EARL10] and Authoring Tool Accessibility Guidelines 2.0 [ATAG20].)

Support for automatic testing varies significantly between tools. Evaluation tools may support their users in performing semiautomatic or manual tests, for example, by highlighting in the source code or in the rendered document the areas that create accessibility problems or where human intervention is needed to evaluate the outcome of a test.

Tools may keep provenance information (i.e., which part of a report was automatically generated by the tool and which was manually modified). Only a few accessibility requirements can be tested automatically; full accessibility conformance can therefore only be ensured by supporting evaluation tool users in carrying out tests in manual and semiautomatic mode.
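
One way to keep such provenance information is to record the test mode alongside each result, using the mode identifiers defined by EARL 1.0 [EARL10]; the record structure below is a hypothetical sketch.

    import enum

    class TestMode(enum.Enum):
        # Test modes as identified in the EARL 1.0 vocabulary.
        AUTOMATIC = "earl:automatic"
        SEMIAUTOMATIC = "earl:semiAuto"
        MANUAL = "earl:manual"

    def record(test_id, outcome, mode, note=""):
        # Keeping the mode makes it possible to state later which
        # results were produced or confirmed by a human evaluator.
        return {"test": test_id, "outcome": outcome,
                "mode": mode.value, "note": note}

    results = [
        record("html-lang", "failed", TestMode.AUTOMATIC),
        record("img-alt-suitable", "passed", TestMode.SEMIAUTOMATIC,
               note="alt text judged adequate by evaluator"),
    ]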

2.2.3 Documenting implementation of accessibility requirements

It is recommended that accessibility evaluation tools document which accessibility criteria (for instance, at the level of failures and techniques) they implement, so that aggregated results (see section 2.3.6) and conformance statements (see section 2.3.7) can be better justified.

This information could also indicate which of the implemented tests are fully automatic, which are semiautomatic, and which require manual evaluation (see section 2.2.2).

2.2.4 Development of own tests and test extensions

Developers and quality assurance engineers sometimes need to implement their own tests, for example, to respond to internal demands within their organization. For that purpose, some tools define an API that helps developers create their own tests.
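
Such an API could, for instance, take the form of a registration hook through which additional checks are plugged in. The following Python sketch is purely hypothetical and shows one possible shape of such an extension point.

    CUSTOM_TESTS = {}

    def accessibility_test(test_id):
        # Register a callable that receives the document source and
        # returns a list of human-readable problem descriptions.
        def decorator(func):
            CUSTOM_TESTS[test_id] = func
            return func
        return decorator

    @accessibility_test("acme-no-layout-tables")
    def check_layout_tables(document):
        # Organization-internal rule: tables must not be used for layout.
        return (["table used for layout"]
                if 'class="layout"' in document else [])

    sample = '<table class="layout"><tr><td>menu</td></tr></table>'
    problems = [msg for test in CUSTOM_TESTS.values() for msg in test(sample)]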

2.2.5 User interaction and test automation

When evaluating the accessibility of websites and applications, it is sometimes convenient to create scripts that emulate user interaction (e.g., activating interface components by clicking with the mouse, swiping with the fingers on a touch screen, or using the keyboard) and thereby modify the state of the current page or load new resources.

There are tools that enable developers to write scripts that automate the emulation of application and end-user behavior, and there are efforts to standardize a common API for such tools. One of these APIs is the W3C WebDriver API [WebDriver].
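
A short sketch using the Selenium Python bindings to the WebDriver API: it emulates keyboard navigation and a mouse click, after which the (changed) page state can be evaluated again. The URL and the element id are hypothetical.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Firefox()
    try:
        driver.get("http://example.org/app/")
        # Emulate keyboard use: move focus and activate the control.
        driver.find_element(By.TAG_NAME, "body").send_keys(Keys.TAB, Keys.ENTER)
        # Emulate a mouse click that changes the state of the page.
        driver.find_element(By.ID, "open-menu").click()
        # The evaluation can now be repeated against the new DOM state.
        print(driver.execute_script("return document.activeElement.tagName;"))
    finally:
        driver.quit()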

2.2.6 Emulating how people with disabilities experience the web

More than a testing feature, this is an awareness-raising component that some tools offer their users to emulate how people with different disabilities experience the web. For instance, a tool could linearize a web page to recreate how a screen reader might present the page content, or it could modify the colors of the page and its components to emulate certain color vision deficiencies.
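
The linearization mentioned above can be approximated very roughly as in the sketch below, which discards scripts and styling and emits the text content in document order, with alternative texts substituted for images. Real screen readers work from the accessibility tree rather than the raw markup, so this is an awareness aid, not an accurate simulation.

    from bs4 import BeautifulSoup

    def linearize(html):
        soup = BeautifulSoup(html, "html.parser")
        for tag in soup(["script", "style"]):
            tag.decompose()  # never presented to the reader
        for img in soup.find_all("img"):
            # An image is experienced only through its alternative text.
            img.replace_with("[image: %s]" % img.get("alt", "NO ALT TEXT"))
        return soup.get_text(separator="\n", strip=True)

    print(linearize('<h1>News</h1><img src="x.png"><p>Hello there.</p>'))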

2.3 Reporting and monitoring

This category includes features related to the ability of the tool to present, store, import, export, and compare testing results in different ways. In this document the term report must be interpreted in its widest sense: it could be a set of screens presenting different tables and graphics, a set of icons superimposed on the content displayed to the user indicating different types of errors and warnings, an HTML document or word processor document summarizing the evaluation results, etc.

2.3.1 Machine-readable reporting formats

These formats are normally not adequate for human consumption. They are used to store results in a database (see section 2.3.3) or to export them so that other evaluation tools can parse and interpret the results. The most common reporting languages are:
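
As an illustration of such a machine-readable report, the sketch below emits a single assertion in the Evaluation and Report Language (EARL) 1.0 [EARL10] using the Python rdflib library; the assertion, tool, and subject URIs are hypothetical.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    EARL = Namespace("http://www.w3.org/ns/earl#")

    g = Graph()
    g.bind("earl", EARL)

    assertion = URIRef("http://example.org/assertions/1")
    result = URIRef("http://example.org/assertions/1#result")

    # One EARL assertion: who tested what against which test, and how.
    g.add((assertion, RDF.type, EARL.Assertion))
    g.add((assertion, EARL.assertedBy, URIRef("http://example.org/tool")))
    g.add((assertion, EARL.subject, URIRef("http://example.org/page.html")))
    g.add((assertion, EARL.test, URIRef("http://www.w3.org/TR/WCAG20-TECHS/H37")))
    g.add((assertion, EARL.mode, EARL.automatic))
    g.add((assertion, EARL.result, result))
    g.add((result, RDF.type, EARL.TestResult))
    g.add((result, EARL.outcome, EARL.failed))
    g.add((result, EARL.info, Literal("img element without alt attribute")))

    print(g.serialize(format="turtle"))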

2.3.2 Human-readable reports

These are reports targeted at the tool users. They may be HTML or word processor documents with the test results (tables, graphics, etc.) to be read outside the tool context, or they may be a set of application windows that guide the tool user through the different evaluation results, presenting aggregated views when necessary (see section 2.3.6).

2.3.3 Persistence of results

The implementation of monitoring features requires that the tool have a persistence layer (a database, for example) where results can be stored and later retrieved to compare different evaluation rounds.
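
A minimal sketch of such a persistence layer using the sqlite3 module from the Python standard library; the schema and run identifiers are hypothetical.

    import sqlite3
    import time

    con = sqlite3.connect("evaluations.db")
    con.execute("""CREATE TABLE IF NOT EXISTS results (
                       run_id TEXT, url TEXT, test TEXT,
                       outcome TEXT, checked_at REAL)""")

    def store(run_id, url, test, outcome):
        con.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?)",
                    (run_id, url, test, outcome, time.time()))
        con.commit()

    store("2014-07", "http://example.org/", "H37", "failed")
    store("2014-08", "http://example.org/", "H37", "passed")

    # Compare evaluation rounds for the same resource and test.
    rows = con.execute("""SELECT run_id, outcome FROM results
                          WHERE url = ? AND test = ?
                          ORDER BY checked_at""",
                       ("http://example.org/", "H37")).fetchall()
    print(rows)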

2.3.4 Importing evaluation results

There are cases where tool users want to filter, combine, or compare evaluation results across tools (for instance, when tool A does not test a given problem but tool B does). Support for a common reporting language (see section 2.3.1) facilitates those tasks by allowing information to be imported. This functionality also permits integrating the evaluation tool into other development and testing environments.

2.3.5 Report customization

This feature allows the customization of the resulting report according to different criteria, such as the target audience, the type of results, the part of the site being analyzed, or the type of content. It may also allow the tool user to add comments to the report.

2.3.6 Results aggregation

The presentation of evaluation results and their aggregation is influenced by different aspects:

2.3.7 Conformance

Conformance statements are demanded by some users to quickly assess the status of their website. When issuing such statements it is therefore necessary to take into account the different types of accessibility techniques (i.e., common failures, sufficient techniques, etc.) and to aggregate results as described in the previous section.

As described in section 2.2.2, full accessibility compliance can only be established when manual testing and/or semiautomatic checking has been carried out.

2.3.8 Error repair guidance

Many web developers have little or no knowledge of web accessibility. Together with their reporting capabilities, tools may therefore provide additional information to support the correction of the accessibility problems detected. This information may include examples, tutorials, screencasts, pointers to online resources, links to the W3C recommendations, etc. This feature may include, for example, a step-by-step wizard that guides the evaluator in correcting the problems found (some user interface mockups can be found in Implementing ATAG 2.0, a guide to understanding and implementing the Authoring Tool Accessibility Guidelines 2.0). Fully automatic repair of accessibility problems is discouraged, as it may introduce undesirable side effects.

If the evaluation tool is part of an authoring tool as described in the Authoring Tool Accessibility Guidelines 2.0 [ATAG20], then it can support the authoring tool to meet success criterion B.3.2.1.

2.4 Tool usage

This section includes characteristics that describe the integration of the tool into the development and editing workflow of the user, or that are targeted at the customization of different aspects of the tool depending on its audience, such as user interface language, user interface functionality, and user interface accessibility.

2.4.1 Workflow integration

Accessibility evaluation tools present different interfaces that allow their integration into the standard development workflow of the user. Typical examples are the following:

2.4.2 Localization and internationalization

Localization and internationalization are important to address worldwide markets. Tool users may not speak English, and it is then necessary to adapt the user interface (e.g., icons, text directionality, UI layout, units, etc.) and the reports to other languages and cultures. As pointed out earlier, more information about this topic can be found in the W3C Internationalization Activity [W3Ci18n] and in [I18N]. From the accessibility standpoint, it is recommended to use the authorized translations of the Web Content Accessibility Guidelines.

2.4.3 Functionality customization to different audiences

Typically, evaluation tools are targeted at web accessibility experts with deep knowledge of the topic. However, there are also tools that allow the customization of the evaluation results, or even of the user interface functionality, for other audiences, for instance:

The availability of such characteristics must be declared explicitly and presented in an adequate way to these target user groups.

2.4.4 Policy environments

Although there is an international effort to harmonize web accessibility standards, there are still minor differences between the accessibility requirements of different countries. A tool should specify in its documentation which policy environments it supports. Most tools focus on implementing the Web Content Accessibility Guidelines 2.0 [WCAG20], as this is the accessibility standard most commonly referenced in policies worldwide.

2.4.5 Tool accessibility

Accessibility evaluation teams may include people with disabilities. It is therefore important that the tool itself can be used with different assistive technologies and that it integrates with the accessibility APIs of the underlying operating system. Compliance with Part A of the Authoring Tool Accessibility Guidelines 2.0 thus becomes an important feature, both from the perspective of the tool's user interface and of the access to its results.

When producing evaluation reports to be read outside the tool itself (for instance, an HTML report to be read in a browser), it is important to ensure that they follow the recommendations of the Web Content Accessibility Guidelines 2.0 [WCAG20].

2.4.6 Platform

Accessibility evaluation tools present different architectures and run on different platforms. Typical platform examples are: desktop applications, browser add-ons, distributed enterprise applications (with client- and server-side components), etc. Additionally, some of them include a persistence layer in the form of a database to enable monitoring of results.

3 Example profiles of evaluation tools

This section presents three examples of accessibility evaluation tools. They are provided for illustration purposes and do not represent existing products. Each subsection highlights some of the key features of a tool; the table at the end of the section summarizes and complements these textual descriptions.

3.1 Example Tool A: Browser plug-in evaluating a rendered HTML page

Tool A is a browser plug-in that can perform a quick automatic accessibility evaluation of a rendered HTML page. The main features of the tool are:

Table 1 presents an overview of the matching features as described in section 2.

3.2 Example Tool B: Large-scale accessibility evaluation tool

Tool B is a large-scale accessibility evaluation tool used to analyze web sites with large volumes of content. The main features of the tool are:

Table 1 presents an overview of the matching features as described in section 2.

3.3 Example Tool C: Accessibility evaluation tool for mobile applications

Tool C is an accessibility evaluation tool for web-based mobile applications. The tool does not support native applications, but it provides a simulation environment based upon a virtual machine environment that emulates the accessibility API of some devices. The main features of the tool are:

Table 1 presents an overview of the matching features as described in section 2.

3.4 Side-by-Side Comparison of the Example Tools

This section presents a tabular comparison of the tool features described previously. The tools are provided for illustration purposes and do not represent existing products.

Table 1. List of features for the example tools described in section 3.

    Category                              | Feature                                                    | Tool A                                          | Tool B                                                     | Tool C
    --------------------------------------|------------------------------------------------------------|-------------------------------------------------|------------------------------------------------------------|------------------------------------------------
    Retrieving and rendering web content  | Resource formats                                           | HTML, CSS and JavaScript                        | HTML, CSS and JavaScript                                   | HTML, CSS and JavaScript
                                          | Character encodings                                        | ISO-8859-1, UTF-8, UTF-16                       | ISO-8859-1, UTF-8                                          | ISO-8859-1, UTF-8
                                          | Content language                                           | any language supported by these encodings: ISO-8859-1, UTF-8, UTF-16 | any language supported by these encodings: ISO-8859-1, UTF-8 | any language supported by these encodings: ISO-8859-1, UTF-8
                                          | DOM document fragments                                     | no                                              | no                                                         | no
                                          | Static code evaluation vs. rendered DOM evaluation         | rendered DOM (relies on browser capabilities)   | rendered DOM (rendering engine)                            | rendered DOM (rendering engine)
                                          | Content negotiation                                        | relies on browser capabilities; not configurable | full support; configurable                                | relies on browser capabilities; not configurable
                                          | Cookies                                                    | relies on browser capabilities; not configurable | full support; configurable                                | relies on browser capabilities; not configurable
                                          | Authentication                                             | relies on browser capabilities; not configurable | full support; configurable                                | relies on browser capabilities; not configurable
                                          | Session tracking                                           | relies on browser capabilities; not configurable | full support; configurable                                | relies on browser capabilities; not configurable
                                          | Crawling                                                   | no                                              | yes                                                        | no
                                          | Sampling                                                   | no                                              | yes                                                        | no
    Testing functionality                 | Selection of evaluation tests                              | no                                              | yes                                                        | no
                                          | Test modes                                                 | only automatic                                  | all                                                        | all
                                          | Documenting implementation of accessibility requirements   | no                                              | yes                                                        | no
                                          | Development of own tests and test extensions               | no                                              | no                                                         | no
                                          | Test automation                                            | no                                              | no                                                         | yes
                                          | Emulating how people with disabilities experience the web  | no                                              | no                                                         | yes
    Reporting and monitoring              | Machine-readable reporting formats                         | EARL                                            | EARL                                                       | none
                                          | Human-readable reports                                     | via UI icons                                    | dashboard; HTML report                                     | dashboard
                                          | Persistence of results                                     | no                                              | yes                                                        | no
                                          | Importing evaluation results                               | EARL                                            | EARL, CSV                                                  | no
                                          | Report customization                                       | no                                              | comments/results added by evaluator                        | no
                                          | Results aggregation                                        | no                                              | yes                                                        | no
                                          | Conformance                                                | no                                              | yes                                                        | no
                                          | Error repair guidance                                      | inline hints                                    | in report                                                  | yes
    Tool usage                            | Workflow integration                                       | browser plug-in                                 | stand-alone client+server application                      | stand-alone desktop application
                                          | Localization and internationalization                      | en                                              | en, de, fr, es, jp                                         | en
                                          | Functionality customization to different audiences         | developers                                      | developers, commissioners                                  | developers
                                          | Policy environments                                        | WCAG 2.0                                        | WCAG 2.0, Section 508 (USA), BITV 2.0 (Germany)            | WCAG 2.0
                                          | Tool accessibility                                         | not accessible                                  | accessible under Microsoft Windows                         | not accessible
                                          | Platform                                                   | browser add-on                                  | distributed enterprise application with an external database | desktop application

4 References

ATAG20
Authoring Tool Accessibility Guidelines (ATAG) 2.0. W3C Candidate Recommendation 7 November 2013. Jan Richards, Jeanne Spellman, Jutta Treviranus (editors). Available at: http://www.w3.org/TR/ATAG20/
CSS2
Cascading Style Sheets Level 2 Revision 1 (CSS 2.1) Specification. W3C Recommendation 07 June 2011. Bert Bos, Tantek Çelik, Ian Hickson, Håkon Wium Lie (editors). Available at: http://www.w3.org/TR/CSS2/
CSS3
CSS Current Status is available at: http://www.w3.org/standards/techs/css
CSV
Common Format and MIME Type for Comma-Separated Values (CSV) Files. Y. Shafranovich. Internet Engineering Task Force (IETF). Request for Comments: 4180, 2005. Available at: http://tools.ietf.org/rfc/rfc4180.txt
DOM
W3C DOM4. W3C Last Call Working Draft 04 February 2014. Anne van Kesteren, Aryeh Gregor, Ms2ger, Alex Russell, Robin Berjon (editors). Available at: http://www.w3.org/TR/dom/
EARL10
Evaluation and Report Language (EARL) 1.0 Schema. W3C Working Draft 10 May 2011. Shadi Abou-Zahra (editor). Available at: http://www.w3.org/TR/EARL10-Schema/
ECMAScript
ECMAScript® Language Specification. Standard ECMA-262 5.1 Edition / June 2011. Available at: http://www.ecma-international.org/ecma-262/5.1/
HTML4
HTML 4.01 Specification. W3C Recommendation 24 December 1999. Dave Raggett, Arnaud Le Hors, Ian Jacobs (editors). Available at: http://www.w3.org/TR/html4/
HTML5
HTML5. A vocabulary and associated APIs for HTML and XHTML. W3C Candidate Recommendation 04 February 2014. Robin Berjon, Steve Faulkner, Travis Leithead, Erika Doyle Navara, Edward O'Connor, Silvia Pfeiffer, Ian Hickson (editors). Available at: http://www.w3.org/TR/html5/
HTTPCOOKIES
HTTP State Management Mechanism. A. Barth. Internet Engineering Task Force (IETF). Request for Comments: 6265, 2011. Available at: http://tools.ietf.org/rfc/rfc6265.txt
I18N
Internationalization and localization. Wikipedia. Available at: http://en.wikipedia.org/wiki/Internationalization_and_localization
JSON
The JSON Data Interchange Format. Standard ECMA-404 1st Edition / October 2013. Available at: http://www.ecma-international.org/publications/standards/Ecma-404.htm
ODF
Open Document Format for Office Applications (OpenDocument) Version 1.2. OASIS Standard 29 September 2011. Patrick Durusau, Michael Brauer (editors). Available at: http://docs.oasis-open.org/office/v1.2/OpenDocument-v1.2.html
OOXML
TC45 - Office Open XML Formats. Ecma International. Available at: http://www.ecma-international.org/memento/TC45.htm
PDF
PDF Reference, sixth edition. Adobe® Portable Document Format, Version 1.7, November 2006. Adobe Systems Incorporated. Available at: http://www.adobe.com/devnet/pdf/pdf_reference_archive.html
RFC2119
Key words for use in RFCs to Indicate Requirement Levels. IETF RFC, March 1997. Available at: http://www.ietf.org/rfc/rfc2119.txt
TABDATA
Model for Tabular Data and Metadata on the Web. W3C First Public Working Draft 27 March 2014. Jeni Tennison, Gregg Kellogg (editors). Available at: http://www.w3.org/TR/tabular-data-model/
W3Ci18n
W3C Internationalization (I18n) Activity. Available at: http://www.w3.org/International/
WAI-ARIA
Accessible Rich Internet Applications (WAI-ARIA) 1.0. W3C Recommendation 20 March 2014. James Craig, Michael Cooper (editors). Available at: http://www.w3.org/TR/wai-aria/
WCAG20
Web Content Accessibility Guidelines (WCAG) 2.0. W3C Recommendation 11 December 2008. Ben Caldwell, Michael Cooper, Loretta Guarino Reid, Gregg Vanderheiden (editors). Available at: http://www.w3.org/TR/WCAG20/
WCAG20-TECHS
Techniques for WCAG 2.0. Techniques and Failures for Web Content Accessibility Guidelines 2.0. W3C Working Group Note 8 April 2014. Michael Cooper, Andrew Kirkpatrick, Joshue O Connor (editors). Available at: http://www.w3.org/TR/WCAG20-TECHS/
WEBCRAWLER
Web crawler. Wikipedia. Available at: http://en.wikipedia.org/wiki/Web_crawler
WebDriver
WebDriver. W3C Working Draft 12 March 2013. Simon Stewart, David Burns (editors). Available at: http://www.w3.org/TR/webdriver/
XHTML10
XHTML™ 1.0 The Extensible HyperText Markup Language (Second Edition). A Reformulation of HTML 4 in XML 1.0. W3C Recommendation 26 January 2000, revised 1 August 2002. Available at: http://www.w3.org/TR/xhtml1/
XML10
Extensible Markup Language (XML) 1.0 (Fifth Edition). W3C Recommendation 26 November 2008. Tim Bray, Jean Paoli, C. M. Sperberg-McQueen, Eve Maler, François Yergeau (editors). Available at: http://www.w3.org/TR/REC-xml/
XML11
Extensible Markup Language (XML) 1.1 (Second Edition). W3C Recommendation 16 August 2006, edited in place 29 September 2006. Tim Bray, Jean Paoli, C. M. Sperberg-McQueen, Eve Maler, François Yergeau, John Cowan (editors). Available at: http://www.w3.org/TR/xml11/

Acknowledgements

The editors would like to thank the Evaluation and Repair Tools Working Group (ERT WG) for its contributions, and especially Yod Samuel Martín, Philip Ackermann, Evangelos Vlachogiannis, Christophe Strobbe, Emmanuelle Gutiérrez y Restrepo, and Konstantinos Votis.

This publication was developed with support from the WAI-ACT project, co-funded by the ICT initiative under the European Commission's Seventh Framework Programme.