W3C Accessibility Guidelines Evaluation Methodology (WCAG-EM) 2.0

W3C Editor's Draft

This version:
https://w3c.github.io/wai-wcag-em/
Latest published version:
https://www.w3.org/TR/wcag-em/
Latest editor's draft:
https://w3c.github.io/wai-wcag-em/
History:
Commit history
Editors:
(Logius)
(Logius)
(Tetralogical)
Former editors:
Eric Velleman (Accessibility Foundation)
Shadi Abou-Zahra (W3C/WAI)
Feedback:
GitHub w3c/wai-wcag-em (pull requests, new issue, open issues)
Previous version:
https://www.w3.org/TR/2014/NOTE-WCAG-EM-20140710/

Abstract

This document describes a procedure to evaluate how well digital services and products conform to the Web Content Accessibility Guidelines (WCAG) 2.

It provides technology-agnostic guidance on defining the evaluation scope, exploring the target product, selecting representative samples from products, auditing the selected samples, and reporting the evaluation findings. It is suitable for use in different evaluation contexts, including self-assessment and third-party evaluation.

This document does not define feature-specific instructions, as the WCAG Success Criteria and supporting documents cover those. It also does not define additional WCAG 2 requirements, nor does it replace or supersede them in any way.

Status of This Document

This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index.

This document builds on WCAG-EM, which was developed by the WCAG 2.0 Evaluation Methodology Task Force (Eval TF), a joint task force of the Web Content Accessibility Guidelines Working Group (WCAG WG) and Evaluation and Repair Tools Working Group (ERT WG). It provides informative guidance on evaluation in accordance with Web Content Accessibility Guidelines (WCAG) 2.2.

This document was published by the Accessibility Guidelines Working Group as an Editor's Draft.

Publication as an Editor's Draft does not imply endorsement by W3C and its Members.

This is a draft document and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as other than a work in progress.

This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent that the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

This document is governed by the 03 November 2023 W3C Process Document.

1. Introduction

This document describes a process to comprehensively evaluate whether a digital product conforms to the Web Content Accessibility Guidelines (WCAG) 2.

Accessibility evaluations of digital products can be necessary in many situations, such as before releasing, acquiring, or redesigning the product, and for periodic monitoring of the accessibility performance of a product over time.

Several factors, such as the purpose of the evaluation and the nature of the target product, can impact an evaluation.

This document takes these factors into account and highlights several considerations for evaluators. It provides a common framework for accessibility evaluations, to help apply good practice, avoid commonly made mistakes, and achieve more comparable results.

This document does not replace the need for quality assurance throughout all phases of product development. It also does not in any way add to or change the requirements defined by the normative WCAG 2 standard, nor does it provide instructions on feature by feature evaluation of web content. The methodology can be used together with techniques to meet WCAG 2 success criteria, such as the Techniques for WCAG 2.2, but does not require this or any other specific set of techniques.

1.1 Target audience

This methodology is designed for anyone who wants to follow a common approach for evaluating the conformance of digital products to WCAG 2.

1.2 Relation to WCAG 2 Conformance Claims

WCAG 2.2 defines conformance requirements for individual web pages (and in some cases, sets of web pages), but does not describe how to evaluate entire websites. It also defines how to make optional conformance claims to cover individual web pages, a series of web pages such as a multi-page form, and multiple related web pages such as a website. This applies when all web pages that are in the scope of a conformance claim have each been evaluated or created in a process that ensures that they each satisfy all the conformance requirements.

WCAG 2 conformance claims cannot be made for entire websites based on the evaluation of a selected sub-set of web pages and functionality alone, as it is always possible that unidentified conformance errors remain elsewhere on the website. However, in most uses of this methodology, only a sample of web pages and functionality from a website is selected for evaluation. Thus, in most situations, using this methodology alone does not result in WCAG 2 conformance claims for the target websites. Guidance on making statements about the outcomes of using this methodology is provided in Step 5.c: Provide an Evaluation Statement (Optional).

2. Using This Methodology

This methodology is used for thorough evaluation of digital products using WCAG 2. Before evaluating an entire digital product it is usually good to do a preliminary evaluation of different samples from the target product to identify obvious accessibility barriers and develop an overall understanding of the accessibility of the digital product. Easy Checks - A First Review of Web Accessibility describes such an approach for preliminary evaluation that is complementary to this methodology.

2.1 Required Expertise

Users of this methodology are assumed to have solid understanding of how to evaluate web content using WCAG 2, accessible web design, assistive technologies, and of how people with different disabilities use the Web. This includes an understanding of technologies; accessibility barriers that people with disabilities experience; assistive technologies and adaptive approaches that people with disabilities use; and evaluation techniques, tools, and methods to identify barriers for people with disabilities. In particular, it is assumed that users of this methodology are deeply familiar with all the resources listed in Background Reading.

2.2 Combined Expertise (Optional)

This methodology can be carried out by an individual evaluator with the skills described in the previous section (Required Expertise), or a team of evaluators with collective expertise. Using the combined expertise of different evaluators may sometimes be necessary or beneficial when one evaluator alone does not possess all of the required expertise. Using Combined Expertise to Evaluate Web Accessibility provides further guidance on using combined expertise of review teams, which is beyond the scope of this document.

2.3 Involving Users (Optional)

Involving people with disabilities including people with aging-related impairments (who are not experienced evaluators or part of a review team) may help identify additional accessibility barriers that are not easily discovered by expert evaluation alone. While not required for using this methodology, it is strongly recommended for evaluators to involve real people with a wide range of abilities during the evaluation process. Involving Users in Web Accessibility Evaluation provides further guidance on involving users in web accessibility evaluation, which is beyond the scope of this document.

2.4 Evaluation Tools (Optional)

This methodology is independent of any particular accessibility evaluation tool, web browser, or other software tool. While most accessibility checks are not fully automatable, evaluation tools can significantly assist evaluators during the evaluation process and contribute to more effective evaluation. For example, some accessibility evaluation tools can scan an entire digital product to help identify relevant samples for manual evaluation. Tools can also assist during manual (human) evaluation of accessibility checks. Selecting Web Accessibility Evaluation Tools provides further guidance on using tools, which is beyond the scope of this document.
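To illustrate the kind of check that is automatable, the following minimal sketch flags images that lack a text alternative (relevant to WCAG 2 Success Criterion 1.1.1) using only the Python standard library. It is an illustrative fragment, not a real evaluation tool; production tools apply many more checks, and most success criteria still require human judgment.

```python
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Collects <img> elements that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt.append(dict(attrs).get("src", "(no src)"))

def find_images_missing_alt(html: str) -> list:
    checker = ImgAltChecker()
    checker.feed(html)
    return checker.missing_alt

page = '<img src="logo.png" alt="Acme logo"><img src="banner.png">'
print(find_images_missing_alt(page))  # → ['banner.png']
```

Note that a missing alt attribute is only one failure mode; an empty or unhelpful alt text can equally fail the criterion and cannot be judged automatically.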

3. Scope of Applicability

This methodology is designed for evaluating full, self-enclosed websites. That is, for every web page it is unambiguous whether it is part of the website or not. This includes websites of organizations, entities, persons, events, products, and services.

3.1 Examples of Websites

Specific examples of websites include:

  • The website of an organization, including its online shop area;
  • The mobile version of that organization's website;
  • The Dutch language version of that organization's website.

A website can be part of a larger website, such as the online shop in the preceding examples. A website can also be a clearly separable version of the website, such as the mobile or Dutch language versions in the preceding examples. This methodology can be applied to any such determinable website, regardless of whether or not it is part of a larger website. The exact definition of a target website to be evaluated is determined as part of Step 1.a.

3.2 Principle of Website Enclosure

When a target website is defined for evaluation, it is essential that all web pages, web page states, and functionality within the scope of this definition are considered for evaluation. Excluding such aspects of a website from the scope of evaluation would likely conflict with the WCAG 2.2 conformance requirements for full pages and complete processes, or otherwise distort the evaluation results.

3.2.1 Example of Website Enclosure

Diagram of a University Website explained in the following paragraph.

The preceding diagram shows a university website comprising distinct areas: "Information for Students", "Information for Lecturers", "Courseware Application", and "Library Application". The "Courseware Application" aggregates "Physics Courses", "Medical Courses", and "History Courses". The university website also has individual web pages, such as legal notices, a sitemap, and other web pages that are common to all areas.

In the preceding example, if the university website in its entirety is defined as the target for evaluation, then all of the depicted areas are within the scope of the evaluation. This includes any aggregated and embedded content such as maps of the campus, forms for online payments, and discussion boards, including when such parts originate from third-party sources. If only a specific website area, such as the "Courseware Application", is defined as the target for evaluation then all the parts of this area are within the scope of the evaluation. In this case, the scope of evaluation would include all depicted courses, as well as the individual web pages that are common to all areas of the university. See also the definition for Common Web Pages.

3.3 Particular Types of Websites

This methodology is applicable to a broad variety of website types. The following provides considerations for particular situations, noting that a single website may combine several of these aspects; the list is therefore non-exclusive and non-exhaustive:

Small Websites
For websites with few web pages the sampling procedure defined in Step 3: Select a Representative Sample will likely result in selecting most or all of the web pages from the target website. In cases where all web pages can be evaluated, the sampling procedure can be skipped and the selected sample is considered to be the entire website in the remaining steps.
Web Applications
Web applications are generally composed of dynamically generated content and functionality (see web page states). Web applications tend to be more complex and interactive. Some examples of web applications include webmail clients, document editors, and online shops. Web applications may be part of a larger website but can also constitute a website of their own in the context of this methodology. That is, an individual and separable entity for evaluation.

Note: Due to the many possibilities of generating content and functionality in web applications it is sometimes not feasible to exhaustively identify every possible web page, web page state, and functionality. Web applications will typically require more time and effort to evaluate, and they will typically need larger web page samples to reflect the different types of content, functionality, and processes.

Website with Separable Areas
In some cases websites may have clearly separable areas where using one area does not require or depend on using another area of the website. For example, an organization might provide an extranet for its employees only that is linked from the public website but is otherwise separate, or it might have sub-sites for individual departments of the organization that are each clearly distinct from one another. Such separable areas can be considered as individual websites each for evaluation. In some cases there may be common web pages, such as legal notices, that need to be considered as part of each website area.

Note: Some websites provide additional or different content and functionality depending on the user (typically after a log-in). This additional content and functionality is generally part of the essential purpose and functionality of the website and is thus not considered to be a separable website area.

Website in Multiple Versions
Some websites are available in multiple versions that are independent of one another in use, that is, using one version does not require or depend on using another version of the website. For example, a website may have a mobile version and there may be versions of a website in different languages that meet this characteristic. Usually each such website version has a different set of URIs. Such website versions can be considered as individual websites for evaluation.

Note: Websites using responsive design techniques (i.e. adapting the presentation according to user hardware, software, and preferences) as opposed to redirecting the user to a different location are not considered to be independent website versions.

Website Using Responsive Design
Responsive design techniques adjust the order, flow, and sometimes behavior of the content to best suit the device on which it is used. For example, to adjust the content and functionality according to the size of the viewport, screen resolution, orientation of the screen, and other aspects of a mobile device and the context in which it is being used. In this methodology such changes to the content, functionality, appearance, and behavior are not considered to be independent website versions but rather web page states that need to be included in the evaluation scope.

Note: Considerations for mobile devices, operating systems, and assistive technologies need to be taken for websites using responsive design techniques, in particular during Step 1.c: Define an Accessibility Support Baseline.

3.4 Particular Evaluation Contexts

This methodology is designed to be flexible to facilitate its applicability in different situations and contexts. The following considerations apply to particular situations and contexts for an evaluation:

Self-Assessment of Conformance
In-house evaluators and evaluators who are part of the development process often have easier access to the website developers and maintainers, the development and hosting environments, the authoring tools, and the materials used for development and maintenance. In particular, use cases, design analyses, technical specifications and documentation, and testing resources can make evaluation more effective and should be leveraged where possible.
Third-Party Assessment of Conformance
Independent external evaluators typically have less information about a website's internal software, areas, and functionality, as they have not been involved in its procurement or in how it was designed and developed. Evaluators in these situations often need to contact the website owner or developer to obtain the information that makes the evaluation more effective.
Evaluating During Development
While this methodology has been primarily designed for reviewing websites that are already developed, it is critical to evaluate accessibility throughout the design and implementation stages of a website to ensure its conformance. The guidance provided in this methodology can be useful during these earlier stages of the design and development process, though some adaptation may be needed. However, evaluations carried out during these earlier stages can quickly be made obsolete by even minor changes. Consequently, evaluations carried out during these stages should not be used for making statements or conformance claims about the finalized website.
Evaluating Composite Websites
When evaluating websites with separable areas, such as an online shop, a blog, and other sub-sites, it can be useful to first evaluate each website area separately according to this methodology, followed by an overall evaluation with samples from each website area and any common web pages. This ensures more complete coverage of the website in its entirety and provides insights about how each website area performs, which may differ from one area to another.
Evaluating Aggregated Websites
Websites that are generated using content combined from different sources, such as portals with portlets, are usually much more challenging to evaluate because of the many different content instances that can be generated. Generally, such content cannot be evaluated separately at its sources; it needs to be evaluated as it is displayed to users when the sources are combined.
Evaluating Third-Party Content
Third-party content is not under the control of the website or web service providers; for example content generated by website users in an online forum. WCAG 2 provides specific considerations for the conformance of such type of content in section Statement of Partial Conformance. In such cases evaluators will need to determine whether such content is regularly monitored and repaired (within two business days), and whether non-conforming content is clearly identified as such in all the web pages in which it appears.
Re-Running Website Evaluation
Website evaluation, according to this methodology, may be re-run after a short period; for example, when issues are identified and repaired by the website owner or website developer, or periodically to monitor progress. In such cases the evaluation can be carried out using a sample of web pages that include:
  • A sub-set of the web pages that were used in the preceding evaluation to facilitate comparability between the results;
  • A replaced sub-set of web pages from those that were used in the preceding evaluation to improve website coverage;

Unless significant changes were made to the website, there is usually no need to change the size of the selected web page sample or the approach used for sampling. The number of replaced web pages in a fresh sample is typically about half of the initial sample, though this proportion can be increased when the web pages on a website mostly conform to WCAG 2.

Large-Scale Evaluation
Mass evaluation of many websites, for example for national or international surveys, is typically carried out primarily with automated evaluation tools, with relatively few web pages undergoing full manual inspection. Such evaluations usually lack the qualitative depth of per-website conformance review for which this methodology is designed.
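The re-sampling approach described under Re-Running Website Evaluation above — retaining part of the previous sample for comparability and replacing roughly half of it to improve coverage — can be sketched as follows. The function and page names are illustrative, not part of the methodology.

```python
import random

def resample_for_rerun(previous_sample, all_pages, replace_fraction=0.5, seed=None):
    """Keep part of the previous sample for comparability, and replace
    the rest with pages not yet evaluated to improve website coverage."""
    rng = random.Random(seed)
    n_replace = round(len(previous_sample) * replace_fraction)
    # Sub-set retained from the preceding evaluation
    kept = rng.sample(previous_sample, len(previous_sample) - n_replace)
    # Replacement pages drawn from pages not in the previous sample
    candidates = [p for p in all_pages if p not in previous_sample]
    replaced = rng.sample(candidates, min(n_replace, len(candidates)))
    return kept + replaced

previous = ["/home", "/contact", "/shop", "/blog"]
site = ["/home", "/contact", "/shop", "/blog", "/about", "/faq", "/news"]
new_sample = resample_for_rerun(previous, site, seed=1)
print(len(new_sample))  # → 4
```

The 50% default mirrors the "about half" guidance above; a larger `replace_fraction` could be used when the website mostly conforms already.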

4. Evaluation Procedure

This section describes the stages and activities of an evaluation procedure. The stages are not necessarily sequential, and the exact sequence of the activities carried out during the evaluation depends on the type of digital product, the purpose of the evaluation, and the process used by the evaluator. Some of the activities can overlap or may be carried out in parallel. The following diagram illustrates the iterations between the stages defined in this section:

Diagram about the iterations between the steps in this methodology. Explanation in the following paragraph.

The workflow diagram above depicts five sequential steps: 1. Define the evaluation scope; 2. Explore the target digital product; 3. Select a representative sample; 4. Audit the selected sample; and 5. Report the findings. Each step has an arrow to the next step, and arrows back to all prior steps. This illustrates how evaluators proceed from one step to the next, and may return to any preceding step as new information is revealed during the evaluation process.

4.1 Step 1: Define the Evaluation Scope

Methodology Requirement 1: Define the evaluation scope according to Methodology Requirement 1.a, Methodology Requirement 1.b, and Methodology Requirement 1.c, and optionally Methodology Requirement 1.d.

During this step the overall scope of the evaluation is defined. It is a fundamental step that affects the subsequent steps in the evaluation procedure. It is ideally carried out in consultation with the evaluation commissioner (who may or may not be the product's owner) to ensure common expectations about the scope of the evaluation. Initial exploration of the target product during this step may be necessary to better understand its specifics and the evaluation required. Detailed exploration of the product is carried out in Step 2: Explore the Target Product.

4.1.1 Step 1.a: Define the Scope of the Product

Methodology Requirement 1.a: Define the target digital product according to Scope of Applicability, so that for each user interface it is unambiguous whether it is within the scope of evaluation or not.

During this step the target product (the samples and states of samples that are in scope of the evaluation) is defined. This scope of the product is defined according to the terms established in the section Scope of Applicability.

To avoid later mismatches of expectations between the evaluator, the evaluation commissioner, and readers of the resulting evaluation report, it is important to define the target product so that it is unambiguous whether any given sample is within its scope. Using formalizations such as regular expressions and listings of web addresses (URIs) is recommended where possible.

It is also important to document any particular aspects of the target product to support its identification. This includes:

  • Use of third-party content and services;
  • Mobile and language versions of the product;
  • Parts of the product, especially those that may not be easily identifiable as such, for example, an online shop that has a different web address but is still considered to be part of the target product.
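Such a formalization of scope might, for instance, be expressed as a set of URI patterns. The patterns and domain names below are hypothetical; they simply show how a regular-expression-based scope definition makes membership unambiguous.

```python
import re

# Hypothetical scope definition: everything under www.example.org,
# plus the online shop on its own subdomain, but excluding a
# separate extranet area that is out of scope.
IN_SCOPE = [
    re.compile(r"^https://www\.example\.org/"),
    re.compile(r"^https://shop\.example\.org/"),
]
OUT_OF_SCOPE = [
    re.compile(r"^https://www\.example\.org/extranet/"),
]

def in_scope(uri: str) -> bool:
    """True if the URI is unambiguously part of the target product."""
    if any(p.match(uri) for p in OUT_OF_SCOPE):
        return False
    return any(p.match(uri) for p in IN_SCOPE)

print(in_scope("https://shop.example.org/cart"))           # → True
print(in_scope("https://www.example.org/extranet/login"))  # → False
```

Exclusion patterns are checked first, so an excluded area takes precedence over a broader inclusion pattern covering the same addresses.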

4.1.2 Step 1.b: Define the Conformance Target

Methodology Requirement 1.b: Select a target WCAG 2 conformance level ("A", "AA", or "AAA") for the evaluation.

Part of initiating the evaluation process is to define the target WCAG 2 conformance level ("A", "AA", or "AAA") for evaluation. WCAG 2 Level AA is the generally accepted and recommended target.

Note: It is often useful to evaluate beyond the conformance target of the digital product to get a more complete picture of its accessibility performance. For example, while a product might not fully meet a particular conformance level, it might meet individual requirements from a higher conformance level. Having this information can help plan future improvements more effectively.
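WCAG 2 conformance levels are cumulative: conforming at Level AA requires satisfying all Level A and Level AA success criteria. A minimal sketch of that cumulative check follows; the criterion results shown are illustrative placeholders, not actual WCAG audit data, and real conformance also involves further requirements such as conforming alternate versions.

```python
LEVELS = {"A": 1, "AA": 2, "AAA": 3}

def conforms(results, target_level):
    """results: list of (criterion_id, level, satisfied) tuples.
    Conformance at target_level requires that every criterion at
    that level and all lower levels is satisfied (levels are cumulative)."""
    return all(ok for _, level, ok in results
               if LEVELS[level] <= LEVELS[target_level])

# Illustrative placeholder results, not real audit data:
results = [("1.1.1", "A", True), ("1.4.3", "AA", False), ("1.4.6", "AAA", True)]
print(conforms(results, "A"))   # → True
print(conforms(results, "AA"))  # → False
```

This also illustrates the note above: the placeholder page fails Level AA yet satisfies an individual AAA criterion, information that can help plan improvements.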

4.1.3 Step 1.c: Define an Accessibility Support Baseline

Methodology Requirement 1.c: Define the web browsers, assistive technologies, and other user agents for which the features provided on the digital product are to be accessibility supported.

Particularly for new technologies it is not always possible to ensure that every accessibility feature provided on a digital product, such as a 'show captions' function in a media player, is supported by every possible combination of operating system, web browser, assistive technology, and other user agents. WCAG 2 does not pre-define which combinations of features and technologies must be supported as this depends on the particular context of the product, including its language, the technologies that are used to create the content, and the user agents currently available. Understanding Accessibility Support provides more guidance on the WCAG 2 concept of accessibility support.

During this step the evaluator determines the minimum set of combinations of operating systems, web browsers, assistive technologies, and other user agents that the product is expected to work with, in line with the WCAG 2 guidance on accessibility support (linked above). This step is carried out in consultation with the evaluation commissioner to ensure common expectations for the targeted level of accessibility support. The product's owner and developer may also have a list of combinations that the product was designed to support, which can be a starting point for this step. Depending on the purpose of the evaluation, such a list may need to be updated, for example to assess how well the product works with more current browsers.

Note: This initial definition of the baseline does not limit the evaluator from using additional operating systems, web browsers, assistive technologies and other user agents at a later point, for example to evaluate content that was not identified at this early stage of the evaluation process. In this case the baseline is extended with the additional tools that were used.

Note: For some products in closed networks, such as an intranet website, where both the users and the computers used to access the product are known, this baseline may be limited to the operating systems, web browsers and assistive technologies used within this closed network. However, in most cases this baseline is ideally broader to cover the majority of current user agents used by people with disabilities in any applicable particular geographic region and language community.
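A baseline like this might be recorded as structured data so it can be referenced consistently throughout the evaluation and in the report. The combinations below are purely illustrative examples of the shape such a record could take, not a recommended baseline.

```python
# Illustrative accessibility support baseline (Step 1.c).
# A real baseline depends on the product's audience, language, and region.
baseline = [
    {"os": "Windows 11", "browser": "Firefox", "at": "NVDA"},
    {"os": "macOS", "browser": "Safari", "at": "VoiceOver"},
    {"os": "Android", "browser": "Chrome", "at": "TalkBack"},
]

def describe(combinations):
    """Render the baseline as a single human-readable line for a report."""
    return "; ".join(f"{c['browser']} + {c['at']} on {c['os']}"
                     for c in combinations)

print(describe(baseline))
```

Keeping the baseline in a structured form also makes it straightforward to extend later, as the first note above describes, when additional tools are used during the evaluation.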

4.1.4 Step 1.d: Define Additional Evaluation Requirements (Optional)

Methodology Requirement 1.d: Define any additional evaluation requirements agreed by the evaluator and evaluation commissioner (Optional).

An evaluation commissioner may be interested in additional information beyond what is needed to evaluate the extent of conformance of the target product to WCAG 2. For example, an evaluation commissioner might be interested in:

  • Evaluation of additional user interfaces beyond what is needed to form a representative sample from the target digital product;
  • Reports of all occurrences of issues rather than representative examples of the types of issues on the target digital product;
  • Analysis of particular use cases, situations, and user groups for interacting with the target digital product;
  • Description of possible solutions to the issues encountered beyond the scope of the evaluation;
  • Evaluation involving users with disabilities;
  • Adherence to specific documentation or reporting templates.

Any such additional evaluation requirements agreed between the evaluator and the evaluation commissioner need to be clarified early on and documented. They also need to be reflected in the resulting report, for example, to clarify how the selection of the sample was carried out.

4.2 Step 2: Explore the Target Digital Product

Methodology Requirement 2: Explore the digital product to be evaluated according to Methodology Requirement 2.a, Methodology Requirement 2.b, Methodology Requirement 2.c, Methodology Requirement 2.d, and Methodology Requirement 2.e.

During this step the evaluator explores the target product to be evaluated, to develop an initial understanding of the product and its use, purpose, and functionality. Much of this will not be immediately apparent to evaluators, in particular to those from outside the development team. In some cases it is also not possible to exhaustively identify and list all functionality, types of samples, and technologies used to realize the product and its applications. The initial exploration carried out in this step is typically refined in the later steps Step 3: Select a Representative Sample and Step 4: Audit the Selected Sample, as the evaluator learns more about the target product. Involvement of product owners and product developers can help evaluators make their explorations more effective.

Note: Carrying out initial cursory checks during this step helps identify samples that are relevant for more detailed evaluation later on. For example, an evaluator may identify samples that seem to be lacking color contrast, document structure, or consistent navigation, and note them down for more detailed evaluation later on.

Note: To carry out this step it is critical that the evaluator has access to all the relevant parts of the product. For example, it may be necessary to create accounts or otherwise provide access to restricted areas of a product that are part of the evaluation. Granting evaluators such access may require particular security and privacy precautions.

4.2.1 Step 2.a: Identify Common Samples of the Digital Product

Methodology Requirement 2.a: Identify the common samples, which may be sample states, of the target product.

Explore the target product to identify its common samples, which may also be sample states in web applications. Typically these are linked directly from the main entry point of the target product (like the home page on a website, or the start screen of an app), and often linked from the header, navigation, and footer sections of other samples. The outcome of this step is a list of all common pages or views of the target product.
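Identifying candidate common pages can be partly supported with tooling, for example by extracting the links that appear in a page's header, navigation, and footer regions. The following sketch uses only the Python standard library; the sample markup is hypothetical, and real common-page identification still requires human review.

```python
from html.parser import HTMLParser

class CommonLinkExtractor(HTMLParser):
    """Collects href values found inside <header>, <nav>, and <footer>."""
    REGIONS = {"header", "nav", "footer"}

    def __init__(self):
        super().__init__()
        self.depth = 0   # nesting depth inside a region of interest
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag in self.REGIONS:
            self.depth += 1
        elif tag == "a" and self.depth > 0:
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag in self.REGIONS and self.depth > 0:
            self.depth -= 1

page = """
<header><a href="/">Home</a><nav><a href="/shop">Shop</a></nav></header>
<main><a href="/deep/article">Article</a></main>
<footer><a href="/legal">Legal notice</a></footer>
"""
extractor = CommonLinkExtractor()
extractor.feed(page)
print(extractor.links)  # → ['/', '/shop', '/legal']
```

Links in the main content area (such as "/deep/article" above) are deliberately excluded, reflecting that common pages are typically those linked from the header, navigation, and footer sections.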

4.2.2 Step 2.b: Identify Essential Functionality of the Product

Methodology Requirement 2.b: Identify an initial list of essential functionality of the target product.

Explore the target product to identify its essential functionality. While some functionality will be easy to identify, other functionality will require more deliberate discovery. For example, it may be easier to identify the functionality for purchasing products in an online shop than the functionality provided for vendors to sell products through the shop. The outcome of this step is a list of functionality that users can perform on the product. This list will be used in the following steps to help select representative samples for evaluation.

Note: The purpose of this step is not to exhaustively identify all functionality of a product but to determine the functionality that is essential to the purpose and goal of the target product. This will inform the later selection of samples and their evaluation. Other functionality will also be included in the evaluation, but through other selection mechanisms.

4.2.2.1 Examples of Product Functionality

Some examples of product functionality include:

  • Selecting and purchasing products from the web shop;
  • Completing and submitting the survey forms;
  • Registering for an account on the product.

4.2.3 Step 2.c: Identify the Variety of Sample Types

Methodology Requirement 2.c: Identify the types of samples.

Samples with varying styles, layouts, structures, and functionality often have varying support for accessibility. They are often generated by different templates and scripts, or authored by different people. They may appear differently, behave differently, and contain different content depending on the particular product user and context.

During this step the evaluator explores the target product to identify the different types of samples. The outcome of this step is a list of descriptions of the types of content identified, rather than specific instances of samples. This list will be used in the following steps to help select representative samples for evaluation.

4.2.3.1 Examples of Sample Types

Some examples of different types of samples that evaluators can look for include those:

  • …with varying styles, layout, structure, navigation, interaction, and visual design;
  • …with varying types of content such as forms, tables, lists, headings, multimedia, and scripting;
  • …with varying functional components such as date picker, lightbox, slider, and others;
  • …using varying technologies such as HTML, CSS, JavaScript, WAI-ARIA, PDF, etc.;
  • …from varying areas of the product (home page, web shop, departments, etc.) including any applications;
  • …with varying coding styles and created using varying templates (if this is known to the evaluator);
  • …authored by varying people, departments, and other entities (if this is known to the evaluator);
  • …that change appearance and behavior depending on the user, device, browser, context, and settings;
  • …with dynamic content, error messages, dialog-boxes, pop-up windows, and other interaction.

4.2.4 Step 2.d: Identify Technologies Relied Upon

Methodology Requirement 2.d: Identify the technologies relied upon to provide the product.

During this step, the technologies relied upon for conformance are identified. This includes base technologies such as HTML and CSS, auxiliary technologies such as JavaScript and WAI-ARIA, as well as specific technologies such as SMIL, SVG and PDF. The outcome of this step is a list of technologies that are relied upon according to WCAG 2. This list will be used in the following steps to help select representative samples for evaluation.

Note: Where possible, it is often also useful to identify any content management system, its version, and its configuration, as these may be relevant to explain the evaluation results. Any libraries and components used to create the product, such as Dojo, jQuery, and others, may also be relevant. Particularly for web applications, much of the accessibility support is built into libraries and components, and evaluation can become more effective and efficient when these are identified.

4.2.5 Step 2.e: Identify Other Relevant Samples

Methodology Requirement 2.e: Identify other samples that are relevant to people with disabilities and to accessibility of the digital product.

Some digital products include samples and sample states that are specifically relevant for people with disabilities and the accessibility of the digital product. The outcome of this step is a list of such samples and sample states, if they have not already been identified as part of Step 2.a: Identify Common Samples of the Digital Product.

4.2.5.1 Examples of Other Relevant Samples

Examples of other relevant samples and sample states include those:

  • …explaining the accessibility features of the digital product;
  • …with information and help on the use of the digital product;
  • …explaining settings, preferences, options, shortcuts, etc.;
  • …with contact information, directions, and support instructions.

4.3 Step 3: Select a Representative Sample

Methodology Requirement 3: Select a representative sample from the digital product according to Methodology Requirement 3.a, Methodology Requirement 3.b, and Methodology Requirement 3.c.

During this step the evaluator selects a sample that is representative of the target product to be evaluated. The purpose of this selection is to ensure that the evaluation results reflect the accessibility performance of the digital product with reasonable confidence. In cases where it is feasible to evaluate all pages or views of a digital product, which is highly recommended, this sampling procedure can be skipped and the “selected sample” in the remaining steps of this evaluation process is the entire digital product. In some cases, such as for small websites, this sampling procedure may result in selecting all pages or view states of the website, or all screens of the mobile application.

The actual size of the sample needed to evaluate a digital product depends on many factors, such as the size, complexity, and consistency of the digital product, and the level of confidence required in the evaluation results.

The selection carried out during this step relies initially on the exploration carried out in Step 2: Explore the Target Product. The selection is also continually refined during the following Step 4: Audit the Selected Sample, as the evaluator learns more about the particular implementation aspects of the target product.

4.3.1 Step 3.a: Include a Structured Sample

Methodology Requirement 3.a: Select samples that reflect all identified (1) common samples, (2) essential functionality, (3) types of samples, (4) technologies relied upon, and (5) other relevant samples.

Select a sample that includes:

  1. All common samples that were identified in Step 2.a: Identify Common Samples of the Digital Product;
  2. All other relevant samples that were identified in Step 2.e: Identify Other Relevant Samples;
  3. If not already reflected in the previous steps, select additional samples with:
    1. Content from each essential functionality identified in Step 2.b: Identify Essential Functionality of the Digital Product;
    2. Content from the different types of samples identified in Step 2.c: Identify the Variety of Sample Types;
    3. Content provided using the technologies identified in Step 2.d: Identify Technologies Relied Upon.

Note: An individual sample may reflect more than one of the criteria listed above. For example, a single sample may be representative of a particular design layout, functionality, and technologies used. The purpose of this step is to have representation of the different types of samples, functionality, and technologies that occur on the digital product. Careful selection of these representative instances can significantly reduce the required sample size while maintaining appropriate representation of the entire digital product. The number of required instances of samples depends on the particular aspects of the digital product explained in the previous section on factors influencing the sample size.

4.3.2 Step 3.b: Include a Randomly Selected Sample

Methodology Requirement 3.b: Select a random sample, and include it for auditing.

A randomly selected sample acts as an indicator to verify that the structured sample selected through the previous steps is sufficiently representative of the content provided on the digital product. This is an important step to improve confidence in the overall evaluation outcome when the evaluation results from both selection approaches correlate.

The number of samples to randomly select is 10% of the structured sample selected through the previous steps. For example, if the structured sample selected for a digital product resulted in 80 samples, then the random sample size is 8 samples. (Note: the size of the structured sample is different from the size of the digital product.)

To perform this selection, randomly select unique samples from the target digital product that are not already part of the structured sample selected through the previous steps. Depending on the type of product and the access that an evaluator has to it, different techniques may be needed for this selection. These may include:

  • Using a tool that will traverse the digital product and propose a list of randomly selected samples;
  • Using a script that will generate a list of all samples available on a digital product, to select from;
  • Using server logs, search engines and other creative methods to get to a random sample.

Document the samples that were randomly selected as these will need to be compared to the remaining structured sample in Step 4.c: Compare Structured and Random Samples.
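The sizing and selection rule above can be sketched in code. The 10% ratio comes from the text; the function name, sample identifiers, and fixed seed are illustrative assumptions:

```python
import math
import random

def select_random_sample(all_samples, structured_sample, ratio=0.10, seed=None):
    """Randomly select unique samples not already in the structured sample;
    the size is 10% of the structured sample (rounded up)."""
    rng = random.Random(seed)  # record the seed so the selection can be replicated
    already_selected = set(structured_sample)
    candidates = [s for s in all_samples if s not in already_selected]
    size = math.ceil(len(structured_sample) * ratio)
    return rng.sample(candidates, min(size, len(candidates)))

# Example: a structured sample of 80 samples yields a random sample of 8.
structured = [f"page-{i}" for i in range(80)]
everything = [f"page-{i}" for i in range(500)]
random_sample = select_random_sample(everything, structured, seed=42)
print(len(random_sample))  # 8
```

Recording the seed (or the generated list itself) supports the replicability discussed in the note below.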

Note: While the random sample need not be selected according to strictly scientific criteria, the scope of the selection needs to span the entire digital product (any sample on the digital product may be selected), and the selection of individual samples must not follow a predictable pattern. Recording the method used to generate the random sample is crucial for ensuring the reliability and replicability of the findings.

4.3.3 Step 3.c: Include Complete Processes

Methodology Requirement 3.c: Include all samples that are part of a complete process in the selected sample.

The selected sample has to include all pages or views that belong to a series presenting a complete process: when any sample belongs to a process, all pages or views of that same process have to be included.

Use the following steps to include the necessary samples:

  1. For each sample selected through Step 3.a: Include a Structured Sample and Step 3.b: Include a Randomly Selected Sample that is part of a process, locate the starting point (sample) for the process and include it in the selected sample;
  2. For each starting point for a process, identify and record at least the default sequence of samples to complete the process. Include these samples.
    Note: The default sequence follows the standard use case, describing the default path through the complete process. It assumes that there are no user input errors and no selection of additional options. For example, for a web shop application, the user would proceed to checkout, confirm the default payment option, provide all required payment details correctly, and complete the purchase, without changing the contents of the shopping cart, using a stored user profile, selecting alternative options for payment or shipping address, providing erroneous input, and so forth.
  3. For each process, identify and record the branch sequences of samples that are commonly accessed and critical for the successful completion of the process. Include these samples.
    Note: Branch sequences may terminate where they re-enter the default branch of the process. For example, adding a new shipping address will be registered as a critical alternative branch that leads back to the default branch of the process.

Note: In most cases it is necessary to record and specify the actions needed to proceed from one sample to the next in a sequence to complete a process, so that they can be replicated later. An example of such an action could be "fill out name and address, and select the 'Submit' button". In most cases the web address (URI) will not be sufficient to identify the sample in a complete process. It is also useful to clearly record when samples are part of a process so that evaluators can focus their effort on the relevant changes, such as elements that were added, modified, or made visible.
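One way to record a process so that it can be replicated is as an ordered list of samples, each paired with the action that leads to the next. A minimal sketch, in which the structure, field names, and sample identifiers are all hypothetical rather than prescribed by this methodology:

```python
# A recorded process: each step names the sample and the action that moves
# the user to the next sample. Branch sequences note where they re-enter
# the default sequence. All identifiers are hypothetical examples.
checkout_process = {
    "name": "Web shop checkout (default sequence)",
    "steps": [
        {"sample": "cart",    "action": "select the 'Proceed to checkout' button"},
        {"sample": "address", "action": "fill out name and address, select 'Submit'"},
        {"sample": "payment", "action": "confirm the default payment option"},
        {"sample": "confirm", "action": "select 'Complete purchase'"},
    ],
    "branches": [
        {
            "name": "Add a new shipping address",
            "from_sample": "address",
            "steps": [{"sample": "new-address", "action": "save the new address"}],
            "rejoins_at": "payment",  # branch ends where it re-enters the default sequence
        }
    ],
}

# Every step, including branch steps, belongs in the selected sample.
selected = {s["sample"] for s in checkout_process["steps"]}
selected |= {s["sample"] for b in checkout_process["branches"] for s in b["steps"]}
print(sorted(selected))  # ['address', 'cart', 'confirm', 'new-address', 'payment']
```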

4.4 Step 4: Audit the Selected Sample

Methodology Requirement 4: Audit the selected sample according to Methodology Requirement 4.a, Methodology Requirement 4.b, and Methodology Requirement 4.c.

During this step the evaluator audits (evaluates in detail) all of the samples selected in Step 3: Select a Representative Sample, and compares the structured sample to the randomly selected sample. The audit is carried out according to the five WCAG 2 conformance requirements at the target conformance level defined in Step 1.b: Define the Conformance Target.

The five WCAG 2.2 conformance requirements are:

  1. Conformance Level
  2. Full pages
  3. Complete processes
  4. Only Accessibility-Supported Ways of Using Technologies
  5. Non-Interference

Further guidance on evaluating to these conformance requirements is provided in the following sections. The WCAG 2 Layers of Guidance and Understanding Conformance provide more background and guidance on the WCAG 2 conformance requirements, which is beyond the scope of this document.

Note: Carrying out this step requires deep understanding of the WCAG 2 conformance requirements and the expertise described in section Required Expertise.

4.4.1 Step 4.a: Check All Initial Samples

Methodology Requirement 4.a: Check that each sample that is not within or at the end of a complete process conforms to each of the five WCAG 2 conformance requirements at the target conformance level.

For each sample selected in Step 3: Select a Representative Sample that is not within or at the end of a complete process, check its conformance with each of the five WCAG conformance requirements at the target conformance level defined in Step 1.b: Define the Conformance Target. This includes all components of the sample, without activating any functions, entering any data, or otherwise initiating a process. Such functionality and interaction, including samples that are within or at the end of a complete process, will be evaluated in the subsequent step.

Note: Many samples will have components, such as the header, navigation bars, search form, and others that occur repeatedly. While the requirement is to check full pages, typically these components do not need to be re-evaluated on each occurrence unless they appear or behave differently, or when additional evaluation requirements are defined in Step 1.d: Define Additional Evaluation Requirements (Optional).

4.4.1.1 WCAG 2 Success Criteria

There are typically several ways to determine whether WCAG 2 Success Criteria have been met or not met. W3C/WAI provides one set of (non-normative) Techniques for WCAG 2.2, which documents ways of meeting particular WCAG 2 Success Criteria. It also includes documented common failures, which are known ways in which content does not meet particular WCAG 2 Success Criteria. Understanding Techniques for WCAG Success Criteria provides more guidance on the WCAG 2 concept of Techniques.

Evaluators can use such documented guidance to check whether particular web content meets or fails to meet WCAG 2 Success Criteria. Documented techniques and failures can also be useful background in evaluation reports. However, it is not required to use the particular set of techniques and failures documented by W3C/WAI. In fact, evaluators do not need to follow any techniques and failures at all. Evaluators might use other approaches to evaluate whether WCAG 2 Success Criteria have been met or not met. For example, evaluators may utilize specific testing instructions and protocols that meet the requirements for sufficient techniques, and that may be publicly documented or only available to the evaluators. More guidance on the use of techniques is provided in the previously linked Understanding Techniques for WCAG Success Criteria.

Note: WCAG 2 Success Criteria are each formulated as a "testable statement that will be either true or false when applied to specific web content". When there is no content presented to the user that relates to specific Success Criteria (for example, no video on the web page), then the Success Criteria are "satisfied" according to WCAG 2. Optionally, an evaluation report can specifically indicate Success Criteria for which there is no relevant content, for example, with "not present". Understanding Conformance provides more background and guidance.

4.4.1.2 Conforming Alternate Versions

Content on a sample might have alternate versions. For example, video content may be provided in a version with and without captions. In some cases an entire sample (or series of them) may be provided as an alternate version to an initial sample. Conformance to WCAG 2 can be achieved with the help of alternate versions that meet the requirements listed in the WCAG 2 definition for conforming alternate version. For example, a web page with video content without captions could still meet WCAG 2 by providing an alternate version for the video that qualifies to be a conforming alternate version. Understanding Conforming Alternate Versions provides further guidance on conforming alternate versions that is beyond the scope of this document.

Note: Alternate versions are not considered to be separate samples but part of the content. Samples are evaluated together with their alternate versions as one unit (full page).

4.4.1.3 Accessibility Support

Content on a sample needs to be provided in a way that is accessibility supported (either directly or through an alternate version). For example, the captions for a video need to be provided in a way that they can be displayed to users. The WCAG 2 definition for accessibility supported defines specific requirements for the use of web content technologies to qualify as accessibility-supported. Understanding Accessibility Support Web Technology Uses provides further guidance on accessibility support that is beyond the scope of this document. However, WCAG 2 does not define a particular threshold or set of software that a digital product needs to support for accessibility. The definition of such a baseline depends on several parameters including the purpose, target audience, and language of the digital product. The baseline used to evaluate a particular digital product is defined in Step 1.c: Define an Accessibility Support Baseline.

4.4.1.4 Non-Interference

Content on a sample may not conform to WCAG 2, even though the sample as a whole might still conform to WCAG 2. For example, information and functionality may be provided using web content technologies that are not yet widely supported by assistive technologies or in a way that is not supported by assistive technologies, accompanied by a conforming alternate version for the information and functionality that is accessibility supported. In this case the non-conforming content must not negatively interfere with the conforming content so that the sample can conform to WCAG 2. The WCAG 2 conformance requirement for non-interference defines specific requirements for content to qualify as non-interfering. Understanding Requirement 5 provides further guidance on non-interference that is beyond the scope of this document.

4.4.2 Step 4.b: Check All Complete Processes

Methodology Requirement 4.b: Check that all interaction for each sample that is part of a complete process conforms to each of the five WCAG 2 conformance requirements at the target conformance level.

For each complete process identified in Step 3.c: Include Complete Processes, follow the identified default and branch sequences of samples, and evaluate each according to Step 4.a: Check All Initial Samples. However, in this case it is not necessary to evaluate all content but only the content that changes along the process.

Functionality, entering data, notifications, and other interaction are part of this check. In particular, this includes:

  • Interaction with forms, input elements, dialog boxes, and other components;
  • Confirmations for input, error messages, and other feedback from user interaction;
  • Behavior using different settings, preferences, devices, and interaction parameters.

4.4.3 Step 4.c: Compare Structured and Random Samples

Methodology Requirement 4.c: Check that each sample in the randomly selected sample does not show types of content and outcomes that are not represented in the structured sample.

While the individual occurrences of WCAG 2 Success Criteria will vary between the structured and randomly selected samples, the randomly selected sample should not show types of content that are not present in the structured sample. Likewise, the outcomes from evaluating the randomly selected sample should not include findings beyond those of the structured sample. If the randomly selected sample shows new types of content or new evaluation findings, this indicates that the structured sample was not sufficiently representative of the content provided on the digital product. In this case evaluators need to go back to Step 3: Select a Representative Sample to select additional samples that reflect the newly identified types of content and findings. The findings of Step 2: Explore the Target Digital Product might also need to be adjusted accordingly. This step is repeated until the structured sample is adequately representative of the content provided on the digital product.
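The comparison in this step amounts to a set difference over the types of content and findings observed in each sample. A minimal sketch, using hypothetical finding labels:

```python
def new_findings(structured_findings, random_findings):
    """Return finding or content types seen in the random sample but not in
    the structured sample; a non-empty result indicates the structured
    sample is not yet representative and Step 3 must be revisited."""
    return set(random_findings) - set(structured_findings)

# Hypothetical outcomes from auditing both samples:
structured = {"1.1.1 missing alt text", "1.4.3 low contrast"}
randomly_selected = {"1.4.3 low contrast", "2.4.7 no focus indicator"}

gap = new_findings(structured, randomly_selected)
print(gap)  # {'2.4.7 no focus indicator'}
```

An empty result for both content types and findings is the correlation between the two selection approaches that this step looks for.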

4.5 Step 5: Report the Evaluation Findings

Methodology Requirement 5: Report the evaluation findings according to Methodology Requirement 5.a and optionally Methodology Requirement 5.b, Methodology Requirement 5.c, Methodology Requirement 5.d, and Methodology Requirement 5.e.

While evaluation findings are reported at the end of the process, documenting them is carried out throughout the evaluation process to ensure verifiable outcomes. The documentation typically has varying levels of confidentiality. For example, documenting the specific methods used to evaluate individual requirements might remain limited to the evaluator while reports about the outcomes from these checks are typically made available to the evaluation commissioner. Product owners might further choose to make public statements about the outcomes from evaluation according to this methodology.

4.5.1 Step 5.a: Document the Outcomes of Each Step

Methodology Requirement 5.a: Document each outcome of the steps defined in Step 1: Define the Evaluation Scope, Step 2: Explore the Target Digital Product, Step 3: Select a Representative Sample, and Step 4: Audit the Selected Sample.

Documenting the outcomes for each of the previous steps (including all sub-sections) is essential to ensure transparency of the evaluation process, replicability of the evaluation results, and justification for any statements made based on this evaluation. This documentation does not need to be public; the level of confidentiality is usually determined by the evaluation commissioner.

Documenting the outcomes for each step includes, at a minimum, the outcomes specified in each of those steps and their sub-steps.

Note: Depending on the desired granularity of the report documentation, the outcomes of Step 4: Audit the Selected Sample may be provided for each evaluated sample, or aggregated over the entire sample. Reports should include at least one example for each conformance requirement and WCAG 2 Success Criterion not met. It is also good practice for evaluators to indicate issues that occur repeatedly.

Reports may also include additional information depending on any additional evaluation requirements defined in Step 1.d: Define Additional Evaluation Requirements (Optional). For example, an evaluation commissioner may request a report indicating every failure occurrence for every sample, more information about the nature and the causes of the identified failures, or repair suggestions to remedy the failures.

4.5.2 Step 5.b: Record the Evaluation Specifics (Optional)

Methodology Requirement 5.b: Archive the samples audited, and record the evaluation tools, web browsers, assistive technologies, other software, and methods used to audit them (Optional).

While optional, it is good practice for evaluators to keep a record of the evaluation specifics, for example to support conflict resolution in the case of a dispute. This includes archiving the samples audited, and recording the evaluation tools, web browsers, assistive technologies, other software, and methods used to audit them. This recording is typically kept internal and not shared by the evaluator unless otherwise agreed in Step 1.d: Define Additional Evaluation Requirements (Optional).

Records of the evaluation specifics could include any of the following:

  • Copies of the files and resources of the samples;
    Note: Some tools can save the dynamically generated or modified content (DOM) as displayed during the evaluation rather than the initial content of the files and resources, which is often different;
  • Screenshots (screen grabs) of the samples;
  • Description of the path to locate the samples, especially when they are part of a process;
  • Description of the settings, input, and actions used to generate or navigate to the samples, including any specific test credentials (user IDs, etc.) required to replicate a unique data set or workflow;
  • Names and versions of the evaluation tools, web browsers and add-ons, assistive technology, and other software used;
  • The methods, procedures, and techniques used to evaluate conformance to WCAG 2.

This recording may apply globally for the entire evaluation, to individual samples, or to individual checks carried out within the audited samples. A table or grid may be useful to record what was used for the different samples audited.

Note: Records of the evaluation specifics may include sensitive information such as internal code, passwords, and copies of data. They may need particular security and privacy precautions.

4.5.3 Step 5.c: Provide an Evaluation Statement (Optional)

Methodology Requirement 5.c: Provide a statement describing the outcomes of the conformance evaluation (Optional).

Reminder: In the majority of situations, using this methodology alone does not result in WCAG 2 conformance claims for the target digital product; see Relation to WCAG 2 Conformance Claims for more background.

Product owners may wish to make public statements about the outcomes from evaluations following this methodology. This can be done when at least every non-optional methodology requirement is satisfied, the conformance target defined in Step 1.b: Define the Conformance Target is satisfied by all samples audited (in Step 4: Audit the Selected Sample), and the product owner commits to ensuring the validity and maintaining the accuracy of the evaluation statement made.

An evaluation statement according to this methodology includes at least the following information:

  1. Date of when the evaluation statement was issued;
  2. Guidelines title, version and URI: "Web Content Accessibility Guidelines 2.2 at https://www.w3.org/TR/WCAG22/";
  3. Conformance level evaluated: Level A, AA or AAA, as defined in Step 1.b: Define the Conformance Target;
  4. Definition of the Digital Product as defined in Step 1.a: Define the Scope of the Digital Product;
  5. Technologies relied upon as identified in Step 2.d: Identify Technologies Relied Upon;
  6. Accessibility support baseline as defined in Step 1.c: Define an Accessibility Support Baseline.

Evaluation statements according to this methodology can also be made when only partial conformance to WCAG 2 has been achieved. In such cases the evaluation statements also include the following information:

  1. Digital product areas that do not conform to WCAG 2;
  2. Reason for not conforming to WCAG 2: "third-party content" or "lack of accessibility support for languages".
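The minimum contents of an evaluation statement listed above can be captured as a simple structured record. The field names and values below are illustrative placeholders, not defined by this methodology:

```python
import json

# Minimum contents of an evaluation statement; all values are placeholders,
# and the field names are illustrative rather than prescribed.
statement = {
    "date": "2024-06-01",                                        # 1. Date issued
    "guidelines": "Web Content Accessibility Guidelines 2.2 "
                  "at https://www.w3.org/TR/WCAG22/",            # 2. Title, version, URI
    "conformance_level": "AA",                                   # 3. Step 1.b
    "digital_product": "https://shop.example.org/",              # 4. Step 1.a
    "technologies_relied_upon": ["HTML", "CSS", "JavaScript",
                                 "WAI-ARIA"],                    # 5. Step 2.d
    "accessibility_support_baseline": "Current versions of "
                                      "major browsers with "
                                      "common screen readers",   # 6. Step 1.c
    # Only for statements of partial conformance:
    "non_conforming_areas": [],
    "partial_conformance_reason": None,
}

report_text = json.dumps(statement, indent=2)
```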

4.5.4 Step 5.d: Provide an Aggregated Score (Optional)

Methodology Requirement 5.d: Provide an aggregated score (Optional).

While aggregated scores provide a numerical indicator to help communicate progress over time, there is currently no single metric known to provide the required reliability, accuracy, and practicality. In fact, aggregated scores can be misleading and do not provide sufficient context and information to understand the actual accessibility of a digital product. For this and other reasons WCAG 2 does not provide a rating scheme. The W3C Research Report on Web Accessibility Metrics provides more background on ongoing research, different approaches, and limitations of scoring metrics that are beyond the scope of this document. Whenever a score is provided, it is essential that the scoring approach is documented and made available to the evaluation commissioner along with the report, to facilitate transparency and repeatability.

4.5.5 Step 5.e: Provide Machine-Readable Reports (Optional)

Methodology Requirement 5.e: Provide machine-readable reports of the evaluation results (Optional).

Machine-readable reports facilitate processing of the evaluation results by authoring tools, accessibility evaluation tools, and quality assurance tools. The Evaluation and Report Language (EARL) is a machine-readable format that was specifically designed for this purpose, and it is recommended for providing machine-readable reports. See also Understanding Metadata from WCAG 2 to learn more about uses of metadata, including machine-readable reports such as EARL.
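As a rough illustration, a single evaluation outcome could be expressed as an EARL assertion serialized as JSON-LD. The vocabulary terms (Assertion, assertedBy, subject, test, result, outcome) come from the W3C EARL schema; the URLs, identifiers, and exact JSON-LD shape below are placeholder assumptions rather than a normative example:

```python
import json

# One EARL assertion as JSON-LD. Vocabulary terms are from the W3C EARL
# schema (http://www.w3.org/ns/earl#); all URLs and identifiers below are
# hypothetical examples.
assertion = {
    "@context": {"earl": "http://www.w3.org/ns/earl#"},
    "@type": "earl:Assertion",
    "earl:assertedBy": {"@id": "https://example.org/evaluators/jane"},
    "earl:subject": {"@id": "https://shop.example.org/checkout"},
    "earl:test": {"@id": "https://www.w3.org/TR/WCAG22/#non-text-content"},
    "earl:mode": {"@id": "earl:manual"},
    "earl:result": {
        "@type": "earl:TestResult",
        "earl:outcome": {"@id": "earl:failed"},
        "earl:info": "Product images in the gallery have no text alternative.",
    },
}

report = json.dumps(assertion, indent=2)
```

A full EARL report would typically contain one such assertion per sample and success criterion checked.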

5. Background Reading

The information below, related to accessibility essentials, evaluation, and WCAG 2, is important for using this methodology. Evaluators using this methodology are expected to be deeply familiar with all the listed resources:

5.1 Web Accessibility Essentials

The following documents introduce the essential components of accessibility and explain how people with disabilities use the Web. They are critical for understanding the broader context of accessibility evaluation:

5.2 Evaluating Digital Products for Accessibility

These are particularly important resources that outline different approaches for evaluating digital products for accessibility:

5.3 Web Content Accessibility Guidelines (WCAG) 2

This is the internationally recognized standard explaining how to make web content more accessible to people with disabilities. The following resources are particularly important for accessibility evaluation of digital products:

5.4 ICT Accessibility

5.5 Other Standards which incorporate WCAG 2 by reference

6. Terms and Definitions

For the purposes of this document, the following terms and definitions apply:

Complete processes
Based on WCAG 2.2 Conformance Requirement for Complete Processes:

When a user interface is one of a series of user interfaces presenting a process (i.e., a sequence of steps that need to be completed in order to accomplish an activity), all user interfaces in the process conform at the specified level or better. (Conformance is not possible at a particular level if any page in the process does not conform at that level or better.)

Conformance
From WCAG 2.2 definition for "conformance":
Satisfying all the requirements of a given standard, guideline or specification.
Common user interfaces
User interfaces and user interface states that are relevant to the entire digital product. This includes the home, login, and other entry points, and, where applicable, contacts, help, legal information, and similar user interfaces that are typically linked from all other user interfaces (usually from the header, footer, or navigation menu of a user interface).

Note: A definition for user interface states is provided below.

Developer
The person, team of people, organization, in-house department, or other entity that is involved in the development process including but not limited to content authors, designers, front-end developers, back-end programmers, quality assurance testers, and project managers.
Digital product
A coherent collection of one or more related user interfaces that together provide common use or functionality. It includes websites, web apps, e-books, kiosk apps, mobile apps, documents (PDF, Word), etc.

Note: The focus of this methodology is on full, self-enclosed digital products. Digital products may be composed of smaller subsets of user interfaces, each of which can be considered to be an individual product. For example, a digital product may include an online shop, an area for each department within the organization, a blog area, and other areas that may each be considered to be a digital product.

Essential functionality
Functionality of a digital product that, if removed, fundamentally changes the use or purpose of the product for users. This includes information that users of a product refer to and tasks that they carry out to perform this functionality.

Note: Examples of essential functionality include “selecting and purchasing an item from an online shop”, “completing and submitting a form provided in an application”, and “registering for an account on the kiosk”.

Note: Other functionality is not excluded from the scope of evaluation. The term “essential functionality” is intended to help identify critical samples and include them among others in an evaluation.

Evaluator
The person, team of people, organization, in-house department, or other entity responsible for carrying out the evaluation.
Evaluation commissioner
The person, team of people, organization, in-house department, or other entity that commissioned the evaluation.

Note: In many cases the evaluation commissioner may be the product owner or product developer, in other cases it may be another entity such as a procurer or an accessibility monitoring survey owner.

Relied upon (Technologies)
From WCAG 2.2 definition for "relied upon":
The content would not conform if that technology is turned off or is not supported.
Sample
The entirety of a web page, document page, or app screen, or a subset of the aforementioned.
Templates
From ATAG 2.0 definition for "templates":
Content patterns that are filled in by authors or the authoring tool to produce web content for end users (e.g., document templates, content management templates, presentation themes). Often templates will pre-specify at least some authoring decisions.
Owner
The person, team of people, organization, in-house department, or other entity that is responsible for the digital product.
User interface
Content and interactive content that is perceivable to a user without changes of context.
User interface states
Dynamically generated user interfaces sometimes provide significantly different content, functionality, and appearance depending on the user, interaction, device, and other parameters. In the context of this methodology, such user interface states can be treated either as ancillary to user interfaces (recorded as an additional state of a user interface in a sample) or as individual user interfaces.

Note: Examples of user interface states are the individual parts of a multi-part form that are dynamically generated depending on the user's input. These individual states may need to be identified by describing the settings, input, and actions required to generate them.

Web page
From WCAG 2.2 definition for "web page":
A non-embedded resource obtained from a single URI using HTTP plus any other resources that are used in the rendering or intended to be rendered together with it by a user agent.

7. Appendices

7.1 Appendix A: Contributors

Past and present active participants of the WCAG 2.0 Evaluation Methodology Task Force (Eval TF) include: Shadi Abou-Zahra; Frederick Boland; Denis Boudreau; Amy Chen; Vivienne Conway; Bim Egan; Michael Elledge; Gavin Evans; Wilco Fiers; Detlev Fischer; Elizabeth Fong; Vincent François; Alistair Garrison; Emmanuelle Gutiérrez y Restrepo; Katie Haritos-Shea; Martijn Houtepen; Peter Korn; Maureen Kraft; Aurelien Levy; David MacDonald; Mary Jo Mueller; Donald Raikes; Ramón Corominas; Roberto Scano; Samuel Sirois; Sarah J Swierenga; Eric Velleman; Konstantinos Votis; Kathleen Wahlbin; Elle Waters; Richard Warren; Léonie Watson.

7.2 Appendix B: References

ATAG20
Richards J, Spellman J, Treviranus J, eds (2013). Authoring Tool Accessibility Guidelines 2.0. W3C. Available at: https://www.w3.org/TR/ATAG20/
Easy Checks
Lawton Henry S, ed (2014). Easy Checks - A First Review of Web Accessibility. W3C. Available at: https://www.w3.org/WAI/eval/preliminary
Essential Components of Web Accessibility
Lawton Henry S, ed (2005). Essential Components of Web Accessibility. Version 1.3. W3C. Available at: https://www.w3.org/WAI/fundamentals/components/
How People with Disabilities Use the Web
Abou-Zahra S, ed (2012). How People with Disabilities Use the Web. Draft. W3C. Available at: https://www.w3.org/WAI/people-use-web/
Involving Users in Evaluating Web Accessibility
Lawton Henry S, ed (2010). Involving Users in Evaluating Web Accessibility. W3C. Available at: https://www.w3.org/WAI/test-evaluate/involving-users/
Selecting Web Accessibility Evaluation Tools
Abou-Zahra S, ed (2005). Selecting Web Accessibility Evaluation Tools. W3C. Available at: https://www.w3.org/WAI/test-evaluate/tools/selecting/
Using Combined Expertise to Evaluate Web Accessibility
Brewer J, ed (2002). Using Combined Expertise to Evaluate Web Accessibility. W3C. Available at: https://www.w3.org/WAI/test-evaluate/combined-expertise/
UWEM
Velleman E.M, Velasco C.A, Snaprud M, eds (2007). D-WAB4 Unified Web Evaluation Methodology (UWEM 1.2 Core). Wabcluster. Available at: https://link.springer.com/chapter/10.1007/978-3-540-73283-9_21
WCAG Overview
Lawton Henry S, ed (2012). Web Content Accessibility Guidelines (WCAG) Overview. W3C. Available at: https://www.w3.org/WAI/standards-guidelines/wcag/
WCAG22
Campbell A, Adams C, Montgomery RB, Cooper M, eds (2024). Web Content Accessibility Guidelines 2.2. W3C. Available at: https://www.w3.org/TR/WCAG22/
WCAG22-TECHS
Campbell A, Adams C, Montgomery RB, Cooper M, eds (2025). Techniques and Failures for Web Content Accessibility Guidelines 2.2. W3C. Available at: https://www.w3.org/WAI/WCAG22/Techniques/
Understanding-WCAG22
Campbell A, Adams C, Montgomery RB, Cooper M, eds (2025). Understanding WCAG 2.2 - A guide to understanding and implementing Web Content Accessibility Guidelines 2.2. W3C. Available at: https://www.w3.org/WAI/WCAG22/Understanding/