Accessibility Conformance Testing Framework

Editor’s Draft,

This version:
https://w3c.github.io/wcag-act/act-framework.html
Previous Versions:
https://w3c.github.io/wcag-act/archive_act-format/2017-02-06.html
Editors:
Wilco Fiers (Deque Systems)
Maureen Kraft (IBM Corp.)

Abstract

To do

1. Introduction

There are currently many products available which aid their users in testing web content for conformance to accessibility standards such as WCAG 2.0. As the web develops and grows in both size and complexity, these tools are essential for managing the accessibility of resources available on the web. The volume of information and services that organizations provide on the web often makes it impractical to test for accessibility manually.

Accessibility experts often disagree on how accessibility requirements should be tested. These disagreements about how a requirement should be tested lead to conflicting results from accessibility tests. This is true for manual accessibility tests as well as for accessibility testing done through automated test tools (ATTs).

The answer to this is to describe how accessibility requirements should be tested. By describing the test procedures, the results of an accessibility test become reproducible, and the test method becomes transparent. The Accessibility Conformance Testing Framework (ACT Framework) defines the requirements for these test descriptions, known as Accessibility Conformance Testing Rules (ACT Rules).

2. Scope

The ACT Framework is created for developing rules for the conformance testing of web technologies, including those used in digital publishing. This includes technologies such as HTML, CSS, WAI-ARIA, SVG and more. The ACT Framework is designed to be technology agnostic, meaning it is not written with any specific technology in mind. This also means that the ACT Framework could conceivably be used for other technologies. Whether or not this is possible depends on the specific technology.

Accessibility requirements such as the Web Content Accessibility Guidelines, which are specifically designed for web content, can be tested using ACT Rules. Other accessibility requirements that are applicable to web technologies should also be testable with ACT Rules. However, because some of those accessibility requirements may apply to technologies other than web technologies, the ACT Framework may not be suitable for testing every part of those requirements.

For example, the User Agent Accessibility Guidelines 2.0 applies to web-based user agents, for which ACT Rules could be developed, but user agents can also be developed with other technologies that are not web-based, for which ACT Rules may not be suitable.

3. ACT Rule Structure

3.1. Rule Outline

A rule MUST provide the following items written in plain language:

3.2. Rule Description

Each ACT Rule MUST have a description that:

4. ACT Input Data

4.1. Test Input Types

Web pages, including publications and applications, go through many different stages before they are rendered to the end user. For example, PHP may be used to put various pieces of content into a template. That template is then sent as an HTML text document to a web browser, which in turn parses it and turns it into a DOM tree, before rendering it to the screen. Accessibility tests could be run at each of these stages. We can divide this into two types of testing: black-box and white-box testing.

4.1.1. Black-box testing

In black-box testing, the final product is tested without looking at the code that generated it. To do black-box testing, the tester does not need access to the source file(s), which usually makes black-box testing easier to set up.

There are different types of black-box testing:

4.1.1.1. File Testing

Testing the files as they are served to the web browser (or other user agent) has its limitations. The files may be manipulated in different ways through presentation and scripting. It is, however, an excellent place for parser testing.
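
As an informative illustration, the following minimal sketch shows what a file-level test could look like. It is written in TypeScript and checks a served HTML file for duplicate `id` values using a naive regular expression; the file name and the check itself are assumptions made for this example, not part of any ACT Rule, and a real ATT would use a proper HTML parser.

```typescript
// Informative sketch only: a file-level check run against the raw HTML text
// as it is served, before any browser parsing. "page.html" is a hypothetical
// file name, and a real ATT would use an HTML parser rather than a regex.
import { readFileSync } from "fs";

function findDuplicateIds(html: string): string[] {
  const counts = new Map<string, number>();
  // Naive extraction of id="..." attributes from the raw markup.
  for (const match of html.matchAll(/\bid\s*=\s*"([^"]*)"/g)) {
    const id = match[1];
    counts.set(id, (counts.get(id) ?? 0) + 1);
  }
  // Report every id value that occurs more than once in the file.
  return [...counts.entries()]
    .filter(([, count]) => count > 1)
    .map(([id]) => id);
}

const html = readFileSync("page.html", "utf8");
console.log("Duplicate ids:", findDuplicateIds(html));
```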

4.1.1.2. DOM Tree Testing

After the web browser (or other user agent) has parsed the files and executed scripts to get the page into a specific state (be it the initial state or any other), tests can be run against the DOM tree. The DOM tree can be tested for things like correct parent-child relationships, use of required attributes or properties, and more.
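
As an informative illustration, the sketch below (TypeScript, running in a browser context) tests one such parent-child relationship: elements with `role=listitem` should be contained in a list. The specific check is an assumption made for this example, not a normative ACT Rule.

```typescript
// Informative sketch only: a DOM tree check that assumes the document has
// already been parsed by the browser. The listitem/list relationship is used
// purely as an example of a parent-child test.
function listitemsWithoutListParent(root: Document): Element[] {
  const failures: Element[] = [];
  for (const item of Array.from(root.querySelectorAll('[role="listitem"]'))) {
    // Look for an explicit (role="list") or implicit (ul/ol) list ancestor.
    if (item.closest('[role="list"], ul, ol') === null) {
      failures.push(item);
    }
  }
  return failures;
}

console.log("Orphaned listitems:", listitemsWithoutListParent(document));
```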

4.1.1.3. Browser Testing

Testing the browser is the next level up from DOM tree testing. In addition to building the DOM tree, the browser styles elements in the DOM tree and positions them. This enables a rule to determine if an element is visible, which is critical for many tests. Additionally, testing things like color contrast becomes possible at this level.
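
As an informative illustration, the sketch below shows a rough visibility check that depends on the browser having computed styles and layout. Real visibility and contrast rules are considerably more involved (opacity, clipping, overlapping content, and so on).

```typescript
// Informative sketch only: a simplified "is this element rendered" check that
// relies on computed style and layout information from the browser.
function isRendered(element: Element): boolean {
  const style = window.getComputedStyle(element);
  if (style.display === "none" || style.visibility === "hidden") {
    return false;
  }
  // An element with no box at all is not rendered on screen.
  const box = element.getBoundingClientRect();
  return box.width > 0 || box.height > 0;
}

// Usage: only run checks such as color contrast on elements that are rendered.
const rendered = Array.from(document.querySelectorAll("p")).filter(isRendered);
console.log(`${rendered.length} rendered paragraphs to test`);
```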

4.1.1.4. Driver Testing

By controlling the browser, events can be triggered in the page, and user interaction can be simulated. This can be done using drivers, the most common of which is currently WebDriver. With driver testing, interactions can be tested.

Example: The expand state of an element with `role=menu` can be toggled if the enter key is pressed when it has focus.
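
As an informative illustration, the sketch below automates that example with the `selenium-webdriver` package. The page URL, the selector, and the `aria-expanded` attribute are assumptions made for this example; they do not come from a published ACT Rule.

```typescript
// Informative sketch only: driver testing with selenium-webdriver. The page
// URL, the selector and the expected attribute are all assumptions.
import { Builder, By, Key } from "selenium-webdriver";

async function menuTogglesOnEnter(url: string): Promise<boolean> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get(url);
    const menu = await driver.findElement(By.css('[role="menu"]'));
    const before = await menu.getAttribute("aria-expanded");
    // Simulate the user pressing Enter while the element has focus.
    await menu.sendKeys(Key.ENTER);
    const after = await menu.getAttribute("aria-expanded");
    return before !== after;
  } finally {
    await driver.quit();
  }
}

menuTogglesOnEnter("https://example.com/").then(console.log);
```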

4.1.2. White-box testing

In white-box testing, the source files are tested instead of the output of the build process. The advantage of testing against source files is that it gives direct feedback to a developer, instead of feedback only after a full page can be built. Additionally, it can provide information about which parts of elements are static, and which are dynamic.

There are generally two approaches to building a web document: either through templating, or through scripting. The source files for most (if not all) websites are either templates or scripts (or a mix of the two). This gives two new types of tests: template testing and composition testing.

Unlike with black-box testing, in white-box testing the Accessibility Test Tool implementing the ACT Rule must be developed specifically for the template engine or component technology. For example, an ATT built to test HTML Handlebars templates cannot (and should not) be used to test Pug templates.

4.1.2.1. Template Testing

A template is a document that has open fields that are filled with pieces of content or other templates. For example, an HTML template may contain a head with a variable title, a predefined navigation, and a variable content area.

Example: `img` tags with a variable `src` *must not* have a static `alt` value.
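
As an informative illustration, the sketch below applies that example to Handlebars template source. The regular expressions are a simplification and the file name is hypothetical; a real ATT would parse the template with the engine's own parser.

```typescript
// Informative sketch only: a white-box check written specifically for
// Handlebars templates. "card.hbs" is a hypothetical file name.
import { readFileSync } from "fs";

function imgWithVariableSrcAndStaticAlt(template: string): string[] {
  const failures: string[] = [];
  for (const match of template.matchAll(/<img\b[^>]*>/gi)) {
    const tag = match[0];
    const srcIsVariable = /src\s*=\s*"[^"]*\{\{[^}]+\}\}[^"]*"/i.test(tag);
    const altIsVariable = /alt\s*=\s*"[^"]*\{\{[^}]+\}\}[^"]*"/i.test(tag);
    const altIsStatic = /alt\s*=\s*"[^"]*"/i.test(tag) && !altIsVariable;
    // A hard-coded alt cannot describe whatever image the template is
    // eventually filled with, so flag this combination.
    if (srcIsVariable && altIsStatic) {
      failures.push(tag);
    }
  }
  return failures;
}

const source = readFileSync("card.hbs", "utf8");
console.log(imgWithVariableSrcAndStaticAlt(source));
```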

4.1.2.2. Composition Testing

A composition is a class or component that extends native elements or other compositions to build a higher-level component. For example, a login form component consists of a form, a few fields, and a label.

Example: Component properties starting with `aria-` *must* exist in the list of WAI-ARIA attributes.
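
As an informative illustration, the sketch below checks a component definition against a deliberately truncated list of WAI-ARIA attributes. The component data structure is an assumption made for this example; ACT does not define one.

```typescript
// Informative sketch only: composition testing against a truncated list of
// WAI-ARIA attributes. The ComponentDefinition shape is a hypothetical
// structure, not something defined by the ACT Framework.
const ARIA_ATTRIBUTES = new Set([
  "aria-label",
  "aria-labelledby",
  "aria-describedby",
  "aria-expanded",
  "aria-hidden",
  // ...the full list comes from the WAI-ARIA specification
]);

interface ComponentDefinition {
  name: string;
  props: string[];
}

function unknownAriaProps(component: ComponentDefinition): string[] {
  return component.props.filter(
    (prop) => prop.startsWith("aria-") && !ARIA_ATTRIBUTES.has(prop)
  );
}

// Usage: "aria-expandable" is not a WAI-ARIA attribute, so it is reported.
const loginForm: ComponentDefinition = {
  name: "login-form",
  props: ["aria-label", "aria-expandable"],
};
console.log(unknownAriaProps(loginForm)); // ["aria-expandable"]
```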

4.2. Accessibility Support Data

Editor note: This section will describe how data about the accessibility support of different assistive technologies should be used by rules to produce results. Where relevant, rules must be able to take in data about supported features in assistive technologies, and adjust results accordingly.

5. ACT Test Procedure

5.1. Selector

A selector is a pattern that is used as a condition to filter input data to be evaluated against the test procedure. For example, finding all nodes in the DOM tree, or finding tags that are incorrectly closed in an HTML document.

Selector syntax depends on the input type. When the input data is an HTML document or set of elements, the selector must be a CSS selector. When a formal selector syntax is not available for the input type, the selector may be described unambiguously in plain English.
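
As an informative illustration, the sketch below shows how a selector could narrow DOM input down to the elements a test procedure is run against. The rule shape and the toy `img`-has-`alt` check are assumptions made for this example, not part of the ACT output format defined later.

```typescript
// Informative sketch only: applying a rule's selector to DOM input before
// running its test procedure. The Rule interface is hypothetical.
interface Rule {
  selector: string; // a CSS selector, since the input here is a DOM tree
  test: (element: Element) => boolean;
}

function runRule(
  rule: Rule,
  root: Document
): { element: Element; passed: boolean }[] {
  // The selector filters the input data down to the elements under test.
  return Array.from(root.querySelectorAll(rule.selector)).map((element) => ({
    element,
    passed: rule.test(element),
  }));
}

// Usage: a toy rule whose selector picks up every image in the document.
const imgHasAlt: Rule = {
  selector: "img",
  test: (element) => element.hasAttribute("alt"),
};
console.log(runRule(imgHasAlt, document));
```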

5.2. Test Cases

Editor note: This section describes how rules are broken down into one or more test cases. Each test case gives some result that, when combined, provides the outcome of the rule. Additionally this section describes how rule authors should write test cases, and the mechanism of combining their outcomes.

6. ACT Output Data

6.1. ACT Data Format

Editor note: This section describes the required properties of the data output by a rule. Certain parts must be standardized to enable aggregation of results produced by different accessibility test tools. Additionally, standardizing parts of the output format is required for validating the implementation (see below). Data in the output format has to be accessible.

6.2. Rule Aggregation

Editor note: In this section we describe how the data that is returned from a rule can be combined to give a higher-level view of conformance to accessibility requirements. Rules provide very low-level information. This is valuable for people working at that level, but managing the accessibility of products as a whole requires a higher-level understanding of their accessibility.

7. Rule Quality Assurance

7.1. Managing Exceptions

Editor note: This section will describe how a rule author should document scenarios where a rule might cause incorrect results. Such exceptions exist in nearly every rule and must be managed actively. Some exceptions can be mitigated by adjusting the rule, but others may be unavoidable. In both cases documenting such exceptions is valuable in interpreting the results of a rule.

7.2. Implementation Validation

Editor note: This section describes the requirements of tests that have to be created for a rule. Rules are abstract, high level descriptions. To ensure the implementation of rules is done correctly, validation tests have to be provided along with each rule.

7.3. Accuracy Benchmarking

Editor note: This section describes how to measure the rate of incorrect results to correct results of a rule. Measuring this accuracy, not just on test data, but on pages and components that the rules would actually be applied to, is important to give users of the test results confidence in the accuracy of this data.

7.4. Update Management

7.4.1. Version Numbers

Each ACT Rule must have its own version number. The version number has to follow the semantic versioning schema, using the X.Y.Z format in the following way:

X / Major updates:

The major version number must be increased if the change can lead to new failure results.

Y / Minor updates:

The minor version number must be increased if the test logic has been updated, which could lead to a different result.

Z / Patch updates:

The patch version number must be increased if the change does not affect the result of a rule. This includes editorial changes, new issues on the issues list, an updated description, etc.

7.4.2. Change List

All major and minor changes to an ACT Rule must be recorded in a change log that is part of the updated rule. The change log must at least include the changes since the last minor update, as well as a reference to the previous version.

7.4.3. Issues List

An ACT Rule may include an issues list. This list must be used to record cases in which the ACT Rule might return a failure where it should have returned a pass or vice versa. There may be several reasons why this might occur, including:

The issues list serves two purposes. For users of ACT Rules, it gives insight into why an inaccurate result might have occurred and provides confidence in the results of that rule. For the designer of the rule, the issues list is also useful for planning future updates. A new version of the rule might resolve issues, with those items then moved to the change log.

Conformance

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

References

Normative References

[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119