Copyright © 2022-2025 World Wide Web Consortium. W3C® liability, trademark and document use rules apply.
This document describes a procedure to evaluate how well digital services and products conform to the Web Content Accessibility Guidelines (WCAG) 2.
It provides technology-agnostic guidance on defining the evaluation scope, exploring the target product, selecting representative samples from products, auditing the selected samples, and reporting the evaluation findings. It is suitable for use in different evaluation contexts, including self-assessment and third-party evaluation.
This document does not define feature-specific instructions, as the WCAG Success Criteria and supporting documents cover those. It also does not define additional WCAG 2 requirements, nor does it replace or supersede them in any way.
This section describes the status of this document at the time of its publication. A list of current W3C publications and the latest revision of this technical report can be found in the W3C standards and drafts index.
This document builds on WCAG-EM, which was developed by the WCAG 2.0 Evaluation Methodology Task Force (Eval TF), a joint task force of the Web Content Accessibility Guidelines Working Group (WCAG WG) and Evaluation and Repair Tools Working Group (ERT WG). It provides informative guidance on evaluation in accordance with Web Content Accessibility Guidelines (WCAG) 2.2.
This document was published by the Accessibility Guidelines Working Group as an Editor's Draft.
Publication as an Editor's Draft does not imply endorsement by W3C and its Members.
This is a draft document and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to cite this document as other than a work in progress.
This document was produced by a group operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent that the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 03 November 2023 W3C Process Document.
This document describes a process to comprehensively evaluate whether a digital product conforms to the Web Content Accessibility Guidelines (WCAG) 2.
Accessibility evaluations of digital products can be necessary in many situations, such as before releasing, acquiring, or redesigning the product, and for periodic monitoring of the accessibility performance of a product over time.
Several factors can impact an evaluation:
This document takes these factors into account and highlights several considerations for evaluators. It provides a common framework for accessibility evaluations, to help apply good practice, avoid commonly made mistakes, and achieve more comparable results.
This document does not replace the need for quality assurance throughout all phases of product development. It also does not in any way add to or change the requirements defined by the normative WCAG 2 standard, nor does it provide instructions on feature-by-feature evaluation of web content. The methodology can be used together with techniques to meet WCAG 2 success criteria, such as the Techniques for WCAG 2.2, but does not require this or any other specific set of techniques.
This methodology is designed for anyone who wants to follow a common approach for evaluating the conformance of digital products to WCAG 2. This includes:
WCAG 2.2 defines conformance requirements for individual web pages (and in some cases, sets of web pages), but does not describe how to evaluate entire websites. It also defines how to make optional conformance claims to cover individual web pages, a series of web pages such as a multi-page form, and multiple related web pages such as a website. This applies when all web pages that are in the scope of a conformance claim have each been evaluated or created in a process that ensures that they each satisfy all the conformance requirements.
WCAG 2 conformance claims cannot be made for entire websites based on the evaluation of a selected subset of web pages and functionality alone, as it is always possible that there will be unidentified conformance errors on these websites. However, in the majority of uses of this methodology only a sample of web pages and functionality from a website is selected for evaluation. Thus, in the majority of situations, using this methodology alone does not result in WCAG 2 conformance claims for the target websites. Guidance on making statements about the outcomes from using this methodology is provided in Step 5.c: Provide an Evaluation Statement (Optional).
This methodology is used for thorough evaluation of digital products using WCAG 2. Before evaluating an entire digital product it is usually good to do a preliminary evaluation of different samples from the target product to identify obvious accessibility barriers and develop an overall understanding of the accessibility of the digital product. Easy Checks - A First Review of Web Accessibility describes such an approach for preliminary evaluation that is complementary to this methodology.
Users of this methodology are assumed to have a solid understanding of how to evaluate web content using WCAG 2, of accessible web design, of assistive technologies, and of how people with different disabilities use the Web. This includes an understanding of technologies; accessibility barriers that people with disabilities experience; assistive technologies and adaptive approaches that people with disabilities use; and evaluation techniques, tools, and methods to identify barriers for people with disabilities. In particular, it is assumed that users of this methodology are deeply familiar with all the resources listed in Background Reading.
This methodology can be carried out by an individual evaluator with the skills described in the previous section (Required Expertise), or a team of evaluators with collective expertise. Using the combined expertise of different evaluators may sometimes be necessary or beneficial when one evaluator alone does not possess all of the required expertise. Using Combined Expertise to Evaluate Web Accessibility provides further guidance on using combined expertise of review teams, which is beyond the scope of this document.
Involving people with disabilities including people with aging-related impairments (who are not experienced evaluators or part of a review team) may help identify additional accessibility barriers that are not easily discovered by expert evaluation alone. While not required for using this methodology, it is strongly recommended for evaluators to involve real people with a wide range of abilities during the evaluation process. Involving Users in Web Accessibility Evaluation provides further guidance on involving users in web accessibility evaluation, which is beyond the scope of this document.
This methodology is independent of any particular accessibility evaluation tool, web browser, and other software tool. While most accessibility checks are not fully automatable, evaluation tools can significantly assist evaluators during the evaluation process and contribute to more effective evaluation. For example, some accessibility evaluation tools can scan an entire digital product to help identify relevant samples for manual evaluation. Tools can also assist during manual (human) evaluation of accessibility checks. Selecting Web Accessibility Evaluation Tools provides further guidance on using tools which is beyond the scope of this document.
This methodology is designed for evaluating full, self-enclosed websites. That is, for every web page it is unambiguous whether it is part of the website or not. This includes websites of organizations, entities, persons, events, products, and services.
Specific examples of websites include:
A website can be part of a larger website, such as the online shop in the preceding examples. A website can also be a clearly separable version of the website such as the mobile or Dutch language versions of the website, as shown in the preceding examples. This methodology can be applied to any such determinable website, regardless of whether or not it is part of a larger website. The exact definition of a target website to be evaluated is determined as part of Step 1.a.
When a target website is defined for evaluation, it is essential that all web pages, web page states, and functionality within the scope of this definition are considered for evaluation. Excluding such aspects of a website from the scope of evaluation would likely conflict with the WCAG 2.2 conformance requirements for full pages and complete processes, or otherwise distort the evaluation results.
The preceding diagram shows a university website comprising distinct areas: "Information for Students", "Information for Lecturers", "Courseware Application", and "Library Application". The "Courseware Application" includes "Physics Courses", "Medical Courses", and "History Courses" that are aggregated into the application. The university website also has individual web pages such as legal notices, sitemap, and other web pages that are common to all areas.
In the preceding example, if the university website in its entirety is defined as the target for evaluation, then all of the depicted areas are within the scope of the evaluation. This includes any aggregated and embedded content such as maps of the campus, forms for online payments, and discussion boards, including when such parts originate from third-party sources. If only a specific website area, such as the "Courseware Application", is defined as the target for evaluation then all the parts of this area are within the scope of the evaluation. In this case, the scope of evaluation would include all depicted courses, as well as the individual web pages that are common to all areas of the university. See also the definition for Common Web Pages.
This methodology is applicable to the broad variety of website types. The following provides considerations for particular situations, noting that websites may combine several aspects. Thus the following list is non-exclusive and non-exhaustive:
Note: Due to the many possibilities of generating content and functionality in web applications it is sometimes not feasible to exhaustively identify every possible web page, web page state, and functionality. Web applications will typically require more time and effort to evaluate, and they will typically need larger web page samples to reflect the different types of content, functionality, and processes.
Note: Some websites provide additional or different content and functionality depending on the user (typically after a log-in). This additional content and functionality is generally part of the essential purpose and functionality of the website and is thus not considered to be a separable website area.
Note: Websites using responsive design techniques (i.e. adapting the presentation according to user hardware, software, and preferences) as opposed to redirecting the user to a different location are not considered to be independent website versions.
Note: For websites using responsive design techniques, considerations for mobile devices, operating systems, and assistive technologies need to be taken into account, in particular during Step 1.c: Define an Accessibility Support Baseline.
This methodology is designed to be flexible to facilitate its applicability in different situations and contexts. The following considerations apply to particular situations and contexts for an evaluation:
Unless significant changes were made to the website, there is usually no need to change the size of the selected web page sample nor the approach used for sampling. The number of replaced web pages in a fresh sample is typically about half of the initial sample, though this could be increased when web pages on a website mostly conform to WCAG 2.
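The rule of thumb above can be sketched as a small helper. This is an illustrative sketch only; the function name and the default ratio of one half are taken from the guidance above, not from any normative requirement.

```python
import math

def pages_to_replace(initial_sample_size, replacement_ratio=0.5):
    """Number of pages typically replaced in a fresh sample for re-evaluation.

    Roughly half of the initial sample is replaced; the ratio may be
    increased when the website mostly conforms to WCAG 2. The ratio is a
    rule of thumb from the text, not a normative figure.
    """
    return math.ceil(initial_sample_size * replacement_ratio)
```

For example, an initial sample of 40 pages would lead to replacing about 20 pages in a fresh sample.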
This section describes the stages and activities of an evaluation procedure. The stages are not necessarily sequential. Also the exact sequence of the activities carried out during the evaluation stages depends on the type of digital product, the purpose of the evaluation, and the process used by the evaluator. Some of the activities can overlap or may be carried out in parallel. The following diagram illustrates the iterations between the stages defined in this section:
The workflow diagram above depicts five sequential steps: 1. Define the evaluation scope; 2. Explore the target digital product; 3. Select a representative sample; 4. Audit the selected sample; and 5. Report the findings. Each step has an arrow to the next step, and arrows back to all prior steps. This illustrates how evaluators proceed from one step to the next, and may return to any preceding step in the process as new information is revealed to them during the evaluation process.
Methodology Requirement 1: Define the evaluation scope according to Methodology Requirement 1.a, Methodology Requirement 1.b, and Methodology Requirement 1.c, and optionally Methodology Requirement 1.d.
During this step the overall scope of the evaluation is defined. It is a fundamental step that affects the subsequent steps in the evaluation procedure. It is ideally carried out in consultation with the evaluation commissioner (who may or may not be the product's owner) to ensure common expectations about the scope of the evaluation. Initial exploration of the target product during this step may be necessary to better know specifics of the product and the required evaluation. Detailed exploration of the product is carried out in Step 2: Explore the Target Product.
Methodology Requirement 1.a: Define the target digital product according to Scope of Applicability, so that for each user interface it is unambiguous whether it is within the scope of evaluation or not.
During this step the target product (the samples and states of samples that are in scope of the evaluation) is defined. This scope of the product is defined according to the terms established in the section Scope of Applicability.
To avoid later mismatches of expectations between the evaluator, evaluation commissioner, and readers of the resulting evaluation report, it is important to define the target product so that it is unambiguous whether a sample is within its scope. Using formalizations including regular expressions and listings of web addresses (URIs) is recommended where possible.
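Such a formalization could be sketched as follows. The domain, paths, and pattern lists are purely illustrative assumptions; an actual scope definition would use the target product's own addresses, agreed with the evaluation commissioner.

```python
import re

# Hypothetical scope definition combining include and exclude patterns
# (all URIs here are illustrative, not from any real evaluation).
IN_SCOPE_PATTERNS = [
    re.compile(r"^https://www\.example\.org/courseware/.*$"),
    re.compile(r"^https://www\.example\.org/(legal|sitemap)$"),
]
OUT_OF_SCOPE_PATTERNS = [
    re.compile(r"^https://www\.example\.org/courseware/archive/.*$"),
]

def in_scope(uri: str) -> bool:
    """Return True when a URI is unambiguously within the evaluation scope."""
    if any(p.match(uri) for p in OUT_OF_SCOPE_PATTERNS):
        return False
    return any(p.match(uri) for p in IN_SCOPE_PATTERNS)
```

Checking every candidate URI against one shared definition like this keeps the evaluator, commissioner, and report readers working from the same scope.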
It is also important to document any particular aspects of the target product to support its identification. This includes:
Methodology Requirement 1.b: Select a target WCAG 2 conformance level ("A", "AA", or "AAA") for the evaluation.
Part of initiating the evaluation process is to define the target WCAG 2 conformance level ("A", "AA", or "AAA") for evaluation. WCAG 2 Level AA is the generally accepted and recommended target.
Note: It is often useful to evaluate beyond the conformance target of the digital product to get a more complete picture of its accessibility performance. For example, while a product might not fully meet a particular conformance level, it might meet individual requirements from a higher conformance level. Having this information can help plan future improvements more effectively.
Methodology Requirement 1.c: Define the web browsers, assistive technologies, and other user agents for which features provided on the digital product are to be accessibility supported.
Particularly for new technologies it is not always possible to ensure that every accessibility feature provided on a digital product, such as a 'show captions' function in a media player, is supported by every possible combination of operating system, web browser, assistive technology, and other user agents. WCAG 2 does not pre-define which combinations of features and technologies must be supported as this depends on the particular context of the product, including its language, the technologies that are used to create the content, and the user agents currently available. Understanding Accessibility Support provides more guidance on the WCAG 2 concept of accessibility support.
During this step the evaluator determines the minimum set of combinations of operating systems, web browsers, assistive technologies, and other user agents that the product is expected to work with, and that is in line with the WCAG 2 guidance on accessibility support (linked above). This step is carried out in consultation with the evaluation commissioner to ensure common expectations for the targeted level of accessibility support. The product's owner and product's developer may also have such a list of combinations that the product was designed to support, which could be a starting point for this step. Depending on the purpose of the evaluation such a list may need to be updated, for example to assess how well the product works with more current browsers.
Note: This initial definition of the baseline does not limit the evaluator from using additional operating systems, web browsers, assistive technologies and other user agents at a later point, for example to evaluate content that was not identified at this early stage of the evaluation process. In this case the baseline is extended with the additional tools that were used.
Note: For some products in closed networks, such as an intranet website, where both the users and the computers used to access the product are known, this baseline may be limited to the operating systems, web browsers and assistive technologies used within this closed network. However, in most cases this baseline is ideally broader to cover the majority of current user agents used by people with disabilities in any applicable particular geographic region and language community.
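A baseline of this kind can be recorded as simple structured data. The combinations below are illustrative assumptions only, not a recommended or required baseline.

```python
# Hypothetical accessibility support baseline (Step 1.c): the minimum
# combinations the product is expected to work with. The entries are
# examples, not recommendations.
baseline = [
    {"os": "Windows 11", "browser": "Firefox", "assistive_technology": "NVDA"},
    {"os": "macOS", "browser": "Safari", "assistive_technology": "VoiceOver"},
]

def extend_baseline(baseline, combination):
    """Record an additional combination used later in the evaluation,
    as the note on extending the baseline describes."""
    if combination not in baseline:
        baseline.append(combination)
    return baseline
```

Keeping the baseline as data makes it easy to include verbatim in the evaluation report, and to extend it when additional tools are used at a later point.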
Methodology Requirement 1.d: Define any additional evaluation requirements agreed by the evaluator and evaluation commissioner (Optional).
An evaluation commissioner may be interested in additional information beyond what is needed to evaluate the extent of conformance of the target product to WCAG 2. For example, an evaluation commissioner might be interested in:
Such additional evaluation requirements that are agreed on with the evaluator need to be clarified early on and documented. This also needs to be reflected in the resulting report, for example, to clarify how the selection of the sample was carried out.
Methodology Requirement 2: Explore the digital product to be evaluated according to Methodology Requirement 2.a, Methodology Requirement 2.b, Methodology Requirement 2.c, Methodology Requirement 2.d, and Methodology Requirement 2.e.
During this step the evaluator explores the target product to be evaluated, to develop an initial understanding of the product and its use, purpose, and functionality. Much of this will not be immediately apparent to evaluators, in particular to those from outside the development team. In some cases it is also not possible to exhaustively identify and list all functionality, types of samples, and technologies used to realize the product and its applications. The initial exploration carried out in this step is typically refined in the later steps Step 3: Select a Representative Sample and Step 4: Audit the Selected Sample, as the evaluator learns more about the target product. Involvement of product owners and product developers can help evaluators make their explorations more effective.
Note: Carrying out initial cursory checks during this step helps identify samples that are relevant for more detailed evaluation later on. For example, an evaluator may identify samples that seem to be lacking color contrast, document structure, or consistent navigation, and note them down for more detailed evaluation later on.
Note: To carry out this step it is critical that the evaluator has access to all the relevant parts of the product. For example, it may be necessary to create accounts or otherwise provide access to restricted areas of a product that are part of the evaluation. Granting evaluators such access may require particular security and privacy precautions.
Methodology Requirement 2.a: Identify the common samples, which may be sample states, of the target product.
Explore the target product to identify its common samples, which may also be sample states in web applications. Typically these are linked directly from the main entry point of the target product (like the home page on a website, or the start screen of an app), and often linked from the header, navigation, and footer sections of other samples. The outcome of this step is a list of all common pages or views of the target product.
Methodology Requirement 2.b: Identify an initial list of essential functionality of the target product.
Explore the target product to identify its essential functionality. While some functionality will be easy to identify, other functionality will need more deliberate discovery. For example, it may be easier to identify the functionality for purchasing products in an online shop than the functionality provided for vendors to sell products through the shop. The outcome of this step is a list of functionality that users can perform on the product. This list will be used in the following steps to help select representative samples for evaluation.
Note: The purpose of this step is not to exhaustively identify all functionality of a product but to determine those that are essential to the purpose and goal of the target product. This will inform later selection of samples and their evaluation. Other functionality will also be included in the evaluation but through other selection mechanisms.
Some examples of product functionality include:
Methodology Requirement 2.c: Identify the types of samples.
Samples with varying styles, layouts, structures, and functionality often have varying support for accessibility. They are often generated by different templates and scripts, or authored by different people. They may appear differently, behave differently, and contain different content depending on the particular product user and context.
During this step the evaluator explores the target product to identify the different types of samples. The outcome of this step is a list of descriptions of the types of content identified, rather than specific instances of samples. This list will be used in the following steps to help select representative samples for evaluation.
Some examples of different types of samples that evaluators can look for include those:
Methodology Requirement 2.d: Identify the technologies relied upon to provide the product.
During this step, the technologies relied upon for conformance are identified. This includes base technologies such as HTML and CSS, auxiliary technologies such as JavaScript and WAI-ARIA, as well as specific technologies such as SMIL, SVG and PDF. The outcome of this step is a list of technologies that are relied upon according to WCAG 2. This list will be used in the following steps to help select representative samples for evaluation.
Note: Where possible, it is often also useful to identify any content management system, version, and configuration as it may be relevant to explain the evaluation results. Also any libraries and components used to create the product, such as Dojo, jQuery, and others may be relevant. Particularly for web applications, much of the accessibility support is built into libraries and components, and evaluation can become more effective and efficient when these are identified.
Methodology Requirement 2.e: Identify other samples that are relevant to people with disabilities and to accessibility of the digital product.
Some digital products include samples and sample states that are specifically relevant for people with disabilities and the accessibility of the digital product. The outcome of this step is a list of such samples and sample states, if they have not already been identified as part of Step 2.a: Identify Common Samples of the Digital Product.
Examples of other relevant samples and sample states include those:
Methodology Requirement 3: Select a representative sample from the digital product according to Methodology Requirement 3.a, Methodology Requirement 3.b, and Methodology Requirement 3.c.
During this step the evaluator selects a sample that is representative of the target product to be evaluated. The purpose of this selection is to ensure that the evaluation results reflect the accessibility performance of the digital product with reasonable confidence. In cases where it is feasible to evaluate all pages or views of a digital product, which is highly recommended, this sampling procedure can be skipped and the "selected sample" in the remaining steps of this evaluation process is the entire digital product. In some cases, such as for small websites, this sampling procedure may result in selecting all pages or view states of the website, or all screens of the mobile application.
The actual size of the sample needed to evaluate a digital product depends on many factors including:
The selection carried out during this step relies initially on the exploration carried out in Step 2: Explore the Target Product. The selection is also continually refined during the following Step 4: Audit the Selected Sample, as the evaluator learns more about the particular implementation aspects of the target product.
Methodology Requirement 3.a: Select samples that reflect all identified (1) common samples, (2) essential functionality, (3) types of samples, (4) technologies relied upon, and (5) other relevant samples.
Select a sample that includes:
Note: An individual sample may reflect more than one of each of the criteria listed above. For example, a single sample may be representative of a particular design layout, functionality, and technologies used. The purpose of this step is to have representation of the different types of samples, functionality, and technologies that occur on the digital product. Careful selection of these representative instances can significantly reduce the required sample size while maintaining appropriate representation of the entire digital product. The number of required instances of samples depends on the particular aspects of the digital product explained in the previous section, factors influencing the sample size.
Methodology Requirement 3.b: Select a random sample, and include them for auditing.
A randomly selected sample acts as an indicator to verify that the structured sample selected through the previous steps is sufficiently representative of the content provided on the website. This is an important step to improve the confidence in the overall evaluation outcome when the evaluation results from both selection approaches correlate.
The number of samples to randomly select is 10% of the structured sample selected through the previous steps. For example, if the structured sample selected for a digital product resulted in 80 samples, then the random sample size is 8 samples. (Note: The size of the structured sample is different from the size of the digital product.)
To perform this selection, randomly select unique samples from the target digital product that are not already part of the structured sample selected through the previous steps. Depending on the type of product and the access that an evaluator has for it there are different techniques that may need to be used for this selection. This may include:
Document the samples that were randomly selected as these will need to be compared to the remaining structured sample in Step 4.c: Compare Structured and Random Samples.
Note: While the random sample need not be selected according to strictly scientific criteria, the scope of the selection needs to span the entire scope of the digital product (any samples on the digital product may be selected), and the selection of individual samples does not follow a predictable pattern. Recording the method used to generate the random sample is crucial for ensuring the reliability and replicability of the findings.
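The random selection described above can be sketched as follows. This is a minimal illustration, assuming the evaluator can enumerate candidate samples; the function name is hypothetical, and recording the seed follows the note above about keeping the selection replicable.

```python
import math
import random

def select_random_sample(all_samples, structured_sample, seed=0):
    """Randomly select unique samples, sized at 10% of the structured
    sample, excluding samples already in the structured sample.

    The seed is recorded so the selection method can be documented
    and replicated, per the note on reliability above.
    """
    size = math.ceil(0.10 * len(structured_sample))
    already_selected = set(structured_sample)
    candidates = [s for s in all_samples if s not in already_selected]
    rng = random.Random(seed)
    return rng.sample(candidates, min(size, len(candidates)))
```

Because the candidates span the whole target product and the draw uses a pseudo-random generator rather than a predictable pattern, this matches the spirit of the note above without requiring strictly scientific sampling.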
Methodology Requirement 3.c: Include all samples that are part of a complete process in the selected sample.
The selected sample has to include all pages or views that belong to a series presenting a complete process. That is, when any selected sample belongs to a process, all pages or views belonging to that same process have to be included as well.
Use the following steps to include the necessary samples:
Note: In most cases it is necessary to record and specify the actions needed to proceed from one sample to the next in a sequence to complete a process, so that they can be replicated later. An example of such an action could be "fill out name and address, and select the 'Submit' button". In most cases the web address (URI) will not be sufficient to identify the sample in a complete process. It is also useful to clearly record when samples are part of a process so that evaluators can focus their effort on the relevant changes, such as elements that were added, modified, or made visible.
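A recorded process might look like the following sketch. The URIs and actions are hypothetical examples of the kind of record the note above describes, not part of any real evaluation.

```python
# Hypothetical record of a complete process (a checkout), capturing the
# action needed to reach each sample so the audit can be replicated later.
checkout_process = [
    {"sample": "https://shop.example.org/cart",
     "action": "select the 'Proceed to checkout' button"},
    {"sample": "https://shop.example.org/checkout/address",
     "action": "fill out name and address, and select the 'Submit' button"},
    {"sample": "https://shop.example.org/checkout/confirm",
     "action": "review the order and select the 'Place order' button"},
]

def samples_in_process(process):
    """Every sample in the process must be part of the selected sample."""
    return [step["sample"] for step in process]
```

Recording each step alongside its sample also makes it clear which samples belong to a process, so evaluators can focus on the elements that were added, modified, or made visible at each step.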
Methodology Requirement 4: Audit the selected sample according to Methodology Requirement 4.a, Methodology Requirement 4.b, and Methodology Requirement 4.c.
During this step the evaluator carries out a detailed evaluation (audit) of all the samples selected in Step 3: Select a Representative Sample, and compares the structured sample to the randomly selected sample. The audit is carried out according to the five WCAG 2 conformance requirements at the target conformance level defined in Step 1.b: Define the Conformance Target.
The five WCAG 2.2 conformance requirements are:
Further guidance on evaluating to these conformance requirements is provided in the following sections. The WCAG 2 Layers of Guidance and Understanding Conformance provide more background and guidance on the WCAG 2 conformance requirements, which is beyond the scope of this document.
Note: Carrying out this step requires deep understanding of the WCAG 2 conformance requirements and the expertise described in section Required Expertise.
Methodology Requirement 4.a: Check that each sample that is neither within nor at the end of a complete process conforms to each of the five WCAG 2 conformance requirements at the target conformance level.
For each sample selected in Step 3: Select a Representative Sample that is neither within nor at the end of a complete process, check its conformance with each of the five WCAG conformance requirements, at the target conformance level defined in Step 1.b: Define the Conformance Target. This includes all components of the sample without activating any functions, entering any data, or otherwise initiating a process. Such functionality and interaction, including samples that are within or at the end of a complete process, will be evaluated in the subsequent step.
Note: Many samples will have components, such as the header, navigation bars, search form, and others that occur repeatedly. While the requirement is to check full pages, typically these components do not need to be re-evaluated on each occurrence unless they appear or behave differently, or when additional evaluation requirements are defined in Step 1.d: Define Additional Evaluation Requirements (Optional).
There are typically several ways to determine whether WCAG 2 Success Criteria have been met or not met. W3C/WAI provides one set of (non-normative) Techniques for WCAG 2.2, which documents ways of meeting particular WCAG 2 Success Criteria. It also includes documented common failures, which are known ways in which content does not meet particular WCAG 2 Success Criteria. Understanding Techniques for WCAG Success Criteria provides more guidance on the WCAG 2 concept of Techniques.
Evaluators can use such documented guidance to check whether particular web content meets or fails to meet WCAG 2 Success Criteria. Documented techniques and failures can also be useful background in evaluation reports. However, it is not required to use the particular set of techniques and failures documented by W3C/WAI. In fact, evaluators do not need to follow any techniques and failures at all. Evaluators might use other approaches to evaluate whether WCAG 2 Success Criteria have been met or not met. For example, evaluators may utilize specific testing instructions and protocols that meet the requirements for sufficient techniques, and that may be publicly documented or only available to the evaluators. More guidance on the use of techniques is provided in the previously linked Understanding Techniques for WCAG Success Criteria.
Note: WCAG 2 Success Criteria are each formulated as a "testable statement that will be either true or false when applied to specific web content". When there is no content presented to the user that relates to specific Success Criteria (for example, no video on the web page), then the Success Criteria are "satisfied" according to WCAG 2. Optionally, an evaluation report can specifically indicate Success Criteria for which there is no relevant content, for example, with "not present". Understanding Conformance provides more background and guidance.
Content on a sample might have alternate versions. For example, video content may be provided in a version with and without captions. In some cases an entire sample (or a series of samples) may be provided as an alternate version to an initial sample. Conformance to WCAG 2 can be achieved with the help of alternate versions that meet the requirements listed in the WCAG 2 definition for conforming alternate version. For example, a web page with video content without captions could still meet WCAG 2 by providing an alternate version for the video that qualifies as a conforming alternate version. Understanding Conforming Alternate Versions provides further guidance on conforming alternate versions that is beyond the scope of this document.
Note: Alternate versions are not considered to be separate samples but part of the content. Samples are evaluated together with their alternate versions as one unit (full page).
Content on a sample needs to be provided in a way that is accessibility supported (either directly or through an alternate version). For example, the captions for a video need to be provided in a way that they can be displayed to users. The WCAG 2 definition for accessibility supported defines specific requirements for the use of web content technologies to qualify as accessibility-supported. Understanding Accessibility Support Web Technology Uses provides further guidance on accessibility support that is beyond the scope of this document. However, WCAG 2 does not define a particular threshold or set of software that a digital product needs to support for accessibility. The definition of such a baseline depends on several parameters including the purpose, target audience, and language of the digital product. The baseline used to evaluate a particular digital product is defined in Step 1.c: Define an Accessibility Support Baseline.
Content on a sample may not conform to WCAG 2, even though the sample as a whole might still conform to WCAG 2. For example, information and functionality may be provided using web content technologies that are not yet widely supported by assistive technologies, or in a way that is not supported by assistive technologies, accompanied by a conforming alternate version for the information and functionality that is accessibility supported. In this case, for the sample to conform to WCAG 2, the non-conforming content must not negatively interfere with the conforming content. The WCAG 2 conformance requirement for non-interference defines specific requirements for content to qualify as non-interfering. Understanding Requirement 5 provides further guidance on non-interference that is beyond the scope of this document.
Methodology Requirement 4.b: Check that all interaction for each sample that is part of a complete process conforms to each of the five WCAG 2 conformance requirements at the target conformance level.
For each complete process identified in Step 3.c: Include Complete Processes, follow the identified default and branch sequences of samples, and evaluate each according to Step 4.a: Check All Initial Samples. However, in this case only the content that changes along the process needs to be evaluated, rather than all content.
This check covers functionality, data entry, notifications, and other interaction. In particular, it includes:
Methodology Requirement 4.c: Check that each sample in the randomly selected sample does not show types of content and outcomes that are not represented in the structured sample.
While the individual occurrences of WCAG 2 Success Criteria will vary between the structured and randomly selected samples, the randomly selected sample should not reveal types of content that are absent from the structured sample. Likewise, the outcomes from evaluating the randomly selected sample should not include findings absent from those of the structured sample. If the randomly selected sample reveals new types of content or new evaluation findings, this indicates that the structured sample was not sufficiently representative of the content provided on the digital product. In this case evaluators need to return to Step 3: Select a Representative Sample to select additional samples that reflect the newly identified types of content and findings. The findings of Step 2: Explore the Target Digital Product might also need to be adjusted accordingly. This step is repeated until the structured sample is adequately representative of the content provided on the digital product.
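The comparison described above can be sketched as a simple set difference. The following is an illustrative sketch only, not part of the methodology; all sample names, content types, and Success Criterion numbers are hypothetical:

```python
# Hypothetical evaluation data: the content types observed and the Success
# Criteria found not met in each of the two samples.
structured_sample = {
    "content_types": {"video", "form", "data-table", "navigation"},
    "findings": {"1.1.1", "1.4.3", "2.4.4"},  # SC numbers not met
}
random_sample = {
    "content_types": {"video", "form", "carousel"},
    "findings": {"1.4.3", "4.1.2"},
}

# Content types and findings present in the random sample but absent
# from the structured sample indicate the structured sample was not
# sufficiently representative.
new_content = random_sample["content_types"] - structured_sample["content_types"]
new_findings = random_sample["findings"] - structured_sample["findings"]

if new_content or new_findings:
    # Return to Step 3 and select additional samples covering these.
    print("New content types:", sorted(new_content))
    print("New findings:", sorted(new_findings))
```

With the hypothetical data above, the carousel content type and one Success Criterion finding appear only in the random sample, triggering another iteration of sample selection.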
Methodology Requirement 5: Report the evaluation findings according to Methodology Requirement 5.a and optionally Methodology Requirement 5.b, Methodology Requirement 5.c, Methodology Requirement 5.d, and Methodology Requirement 5.e.
While evaluation findings are reported at the end of the process, documenting them is carried out throughout the evaluation process to ensure verifiable outcomes. The documentation typically has varying levels of confidentiality. For example, documenting the specific methods used to evaluate individual requirements might remain limited to the evaluator while reports about the outcomes from these checks are typically made available to the evaluation commissioner. Product owners might further choose to make public statements about the outcomes from evaluation according to this methodology.
Methodology Requirement 5.a: Document each outcome of the steps defined in Step 1: Define the Evaluation Scope, Step 2: Explore the Target Digital Product, Step 3: Select a Representative Sample, and Step 4: Audit the Selected Sample.
Documenting the outcomes for each of the previous steps (including all sub-sections) is essential to ensure transparency of the evaluation process, replicability of the evaluation results, and justification for any statements made based on this evaluation. This documentation does not need to be public; the level of confidentiality is usually determined by the evaluation commissioner.
Documenting the outcomes for each step includes at least the following:
Note: Depending on the desired granularity of the report documentation, the outcomes of Step 4: Audit the Selected Sample may be provided for each evaluated sample, or aggregated over the entire sample. Reports should include at least one example for each conformance requirement and WCAG 2 Success Criterion not met. It is also good practice for evaluators to indicate issues that occur repeatedly.
Reports may also include additional information depending on any additional evaluation requirements defined in Step 1.d: Define Additional Evaluation Requirements (Optional). For example, an evaluation commissioner may request a report indicating every failure occurrence for every sample, more information about the nature and the causes of the identified failures, or repair suggestions to remedy the failures.
Methodology Requirement 5.b: Archive the samples audited, and record the evaluation tools, web browsers, assistive technologies, other software, and methods used to audit them (Optional).
While optional, it is good practice for evaluators to keep a record of the evaluation specifics, for example to support conflict resolution in the case of a dispute. This includes archiving the samples audited, and recording the evaluation tools, web browsers, assistive technologies, other software, and methods used to audit them. This record is typically kept internal and not shared by the evaluator unless otherwise agreed in Step 1.d: Define Additional Evaluation Requirements (Optional).
Records of the evaluation specifics could include any of the following:
This record may apply globally to the entire evaluation, to individual samples, or to individual checks carried out within the audited samples. A table or grid may be useful to record what was used for each sample audited.
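One possible shape for such a per-sample record is sketched below. This is only an illustration; all sample identifiers, product names, and versions are placeholders, not recommendations:

```python
# Hypothetical per-sample record of the evaluation specifics: which browser,
# assistive technology, tools, and methods were used for each audited sample.
evaluation_record = {
    "page-1": {
        "browser": "Browser X 120",            # placeholder name/version
        "assistive_technology": "Screen Reader Y 2024",
        "evaluation_tools": ["Checker Z 3.1"],
        "methods": ["keyboard-only walkthrough", "automated scan"],
    },
    "page-2": {
        "browser": "Browser X 120",
        "assistive_technology": None,          # not used for this sample
        "evaluation_tools": ["Checker Z 3.1"],
        "methods": ["manual inspection"],
    },
}
```

A structure like this can be exported to a table or grid for the report, with one row per sample and one column per recorded item.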
Note: Records of the evaluation specifics may include sensitive information such as internal code, passwords, and copies of data. They may need particular security and privacy precautions.
Methodology Requirement 5.c: Provide a statement describing the outcomes of the conformance evaluation (Optional).
Reminder: In the majority of situations, using this methodology alone does not result in WCAG 2 conformance claims for the target digital product; see Relation to WCAG 2 Conformance Claims for more background.
Product owners may wish to make public statements about the outcomes from evaluations following this methodology. This can be done when at least every non-optional methodology requirement is satisfied, the conformance target defined in Step 1.b: Define the Conformance Target is satisfied by all samples audited (in Step 4: Audit the Selected Sample), and the product owner commits to ensuring the validity and maintaining the accuracy of the evaluation statement made.
An evaluation statement according to this methodology includes at least the following information:
Evaluation statements according to this methodology can also be made when only partial conformance to WCAG 2 has been achieved. In such cases the evaluation statements also include the following information:
Methodology Requirement 5.d: Provide an Aggregated score (Optional).
While aggregated scores provide a numerical indicator to help communicate progress over time, there is currently no single metric that is known to address the required reliability, accuracy, and practicality. In fact, aggregated scores can be misleading and do not provide sufficient context and information to understand the actual accessibility of a digital product. For this and other reasons WCAG 2 does not provide a rating scheme. A W3C Research Report on Web Accessibility Metrics provides more background on ongoing research, different approaches, and limitations of scoring metrics that are beyond the scope of this document. Whenever a score is provided, it is essential that the scoring approach is documented and made available to the evaluation commissioner along with the report, to facilitate transparency and repeatability.
Methodology Requirement 5.e: Provide machine-readable reports of the evaluation results (Optional).
Machine-readable reports facilitate processing of the evaluation results by authoring tools, accessibility evaluation tools, and quality assurance tools. The Evaluation and Report Language (EARL) is a machine-readable format that was specifically designed for this purpose, and it is recommended for providing machine-readable reports. See also Understanding Metadata from WCAG 2 to learn more about uses of metadata, including machine-readable reports such as EARL.
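To give a flavor of what an EARL result looks like, the sketch below builds a single assertion using core EARL vocabulary terms (earl:Assertion, earl:assertedBy, earl:subject, earl:test, earl:result, earl:outcome) serialized as JSON-LD. The evaluator and subject URLs are hypothetical; only the EARL namespace and the WCAG 2.2 fragment identifier are real:

```python
import json

# One EARL assertion: a hypothetical evaluator reports that a hypothetical
# page failed WCAG 2.2 SC 1.1.1 (Non-text Content).
assertion = {
    "@context": {"earl": "http://www.w3.org/ns/earl#"},
    "@type": "earl:Assertion",
    "earl:assertedBy": {"@id": "https://example.org/evaluator"},   # hypothetical
    "earl:subject": {"@id": "https://example.org/page-1"},         # hypothetical
    "earl:test": {"@id": "https://www.w3.org/TR/WCAG22/#non-text-content"},
    "earl:result": {
        "@type": "earl:TestResult",
        "earl:outcome": {"@id": "earl:failed"},
    },
}

report = json.dumps(assertion, indent=2)
```

A full report would typically contain one such assertion per check, per sample, which tools can then aggregate or diff across evaluation runs.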
The information below, related to accessibility essentials, evaluation, and WCAG 2, is important for using this methodology. Evaluators using this methodology are expected to be deeply familiar with all the listed resources:
The following documents introduce the essential components of accessibility and explain how people with disabilities use the Web. They are critical for understanding the broader context of accessibility evaluation:
These are particularly important resources that outline different approaches for evaluating digital products for accessibility:
This is the internationally recognized standard explaining how to make web content more accessible to people with disabilities. The following resources are particularly important for accessibility evaluation of digital products:
For the purposes of this document, the following terms and definitions apply:
When a user interface is one of a series of user interfaces presenting a process (i.e., a sequence of steps that need to be completed in order to accomplish an activity), all user interfaces in the process conform at the specified level or better. (Conformance is not possible at a particular level if any page in the process does not conform at that level or better.)
Satisfying all the requirements of a given standard, guideline or specification.
Note: A definition for user interface states is provided below.
Note: The focus of this methodology is on full, self-enclosed digital products. Digital products may be composed of smaller subsets of user interfaces, each of which can be considered to be an individual product. For example, a digital product may include an online shop, an area for each department within the organization, a blog area, and other areas that may each be considered to be a digital product.
Note: Examples of essential functionality include “selecting and purchasing an item from an online shop”, “completing and submitting a form provided in an application”, and “registering for an account on the kiosk”.
Note: Other functionality is not excluded from the scope of evaluation. The term “essential functionality” is intended to help identify critical samples and include them among others in an evaluation.
Note: In many cases the evaluation commissioner may be the product owner or product developer; in other cases it may be another entity, such as a procurer or an accessibility monitoring survey owner.
The content would not conform if that technology is turned off or is not supported.
Content patterns that are filled in by authors or the authoring tool to produce web content for end users (e.g., document templates, content management templates, presentation themes). Often templates will pre-specify at least some authoring decisions.
Note: Examples of user interface states are the individual parts of a multi-part form that are dynamically generated depending on the user's input. These individual states may need to be identified by describing the settings, input, and actions required to generate them.
A non-embedded resource obtained from a single URI using HTTP plus any other resources that are used in the rendering or intended to be rendered together with it by a user agent.
Past and present active participants of the WCAG 2.0 Evaluation Methodology Task Force (Eval TF) include: Shadi Abou-Zahra; Frederick Boland; Denis Boudreau; Amy Chen; Vivienne Conway; Bim Egan; Michael Elledge; Gavin Evans; Wilco Fiers; Detlev Fischer; Elizabeth Fong; Vincent François; Alistair Garrison; Emmanuelle Gutiérrez y Restrepo; Katie Haritos-Shea; Martijn Houtepen; Peter Korn; Maureen Kraft; Aurelien Levy; David MacDonald; Mary Jo Mueller; Donald Raikes; Corominas Ramon; Roberto Scano; Samuel Sirois; Sarah J Swierenga; Eric Velleman; Konstantinos Votis; Kathleen Wahlbin; Elle Waters; Richard Warren; Léonie Watson.