Copyright © 2024 the Contributors to the Sustainable Tooling And Reporting (STAR) 1.0 Specification, published by the Sustainable Web Design Community Group under the W3C Community Contributor License Agreement (CLA). A human-readable summary is available.
Sustainable Tooling And Reporting (STAR) 1.0 provides information, examples, and metrics data to complement the Web Sustainability Guidelines (WSG) 1.0 specification. Within this supplementary document, you will find implementation advice (for external groups wishing to incorporate sustainability in their work), an evaluation methodology (for testing conformance), a categorized series of techniques (potential implementations and case studies), and a test suite (metrics data on impact and machine automation capability).
As with the WSGs, these features have been inspired by the work of the Web Accessibility Initiative ([WAI]). They have also been curated with a Web sustainability methodology, with the goal of better understanding the digital landscape's role in reducing harm to the wider ecosystem (regarding people and the planet).
For the normative technical specification, see https://w3c.github.io/sustyweb/.
Help improve this page by sharing your ideas, suggestions, or comments via GitHub issues.
The following section provides advisory guidance for other specification writers and external groups wishing to incorporate digital sustainability within their work (such as W3C Working and Community Groups). We are primarily a group dedicated to fostering sustainable change in Web technologies associated with the creation of websites and applications. While it is outside of our group's scope to conduct horizontal reviews of other groups' specifications, we may request or accept requests to collaborate with, or assist in coordination with, other groups to implement sustainable change.
The WSGs have a broad appeal, designed to impact a wide range of Web technologies and related infrastructure as appropriate. With this in mind, when first looking to incorporate sustainability within a body of work, you may find that certain guidelines apply to your practices more than others.
When designing any fledgling body of work it's worth considering the following:
When creating specifications or industry-specific documents, cross-reference to specific WSG guidelines that are appropriate for a body of work to connect elements of content to applicable sustainability goals.
Create new sustainability guidelines that are explicitly targeted for a technology (that may be too niche to be included within the WSGs). This would provide additional Success Criteria for an audience to meet.
If you do wish to create additional targeted guidelines, please first consult the SWD-CG as we may be able to include them within the main WSGs or provide guidance to avoid conflicts with other existing guidelines.
With the above considerations taken into account, a good next step is to consider sustainability within content and treat it as any other impact target (such as accessibility or performance) by trying to drive change through metrics data. If research exists to back a particular technique as more performant, it is likely to be more sustainable. There will often be cases where evidence does not exist, as Web sustainability is an evolving field; in such cases, a common-sense approach (considering the variables that can impact people and the planet) will be the best method if metrics data cannot be provided to identify the most sustainable option.
Every body of work will have its own approach to this, and a progress-over-perfection methodology is preferred (as doing something is ultimately better than waiting for an ideal fix). As a general consideration for those creating documents that are relied upon by large numbers of individuals, the PPP model, which accounts not just for emissions and environmental impact but also for human factors and issues surrounding good governance, is an ideal template to work from. In the context of the Internet, this accounts for everyday (but important) factors such as performance, accessibility, privacy by design, security, and reducing waste, all of which have an environmental impact as a by-product.
It's also important to note that different bodies will have their own sustainability challenges; for example, those working on native APIs are more likely to encounter hardware resource consumption (energy usage), whereas language standards will be more considerate of implementors, accessibility, and improving developer workflows. As such, it is worth coordinating with other groups that have aligned goals and discussing their sustainability approaches, to align work and help improve theirs while accounting for different variables.
Sustainability guidance can be presented within work in many ways. One option is to integrate it within existing guidance or specifications (amending the content). Another is to provide notes or in-page sections dedicated to sustainability coverage. Or, to gently guide individuals into the subject, a dedicated supplement could be created that will be adapted or merged into the specification at the next major version (giving individuals time to adapt).
Evaluating the extent to which a website implements the Web Sustainability Guidelines (WSG) is a process involving several steps. The activities carried out within these steps are influenced by many aspects such as the type of website (e.g. static, dynamic, responsive, mobile, application, etc.); its size, complexity, and the technologies used to create the website (e.g. HTML, CSS, JS, PDF, etc.); how much knowledge the evaluators have about the process used to design and develop the website; and the main purpose for the evaluation (e.g. to issue a sustainability statement, to plan a redesign process, to perform research, etc.).
This methodology describes the steps that are common for a comprehensive evaluation of the extent to which websites and applications implement WSG 1.0. It highlights considerations for evaluators applying these steps in the context of a particular product or service. It does not replace the need for quality assurance measures that are implemented throughout the design, development, and maintenance of websites or applications to ensure their sustainability conformance. Following this methodology will help evaluators apply good practice, avoid commonly made mistakes, and achieve more comparable results. However, in the majority of situations, using this methodology alone, without additional quality assurance measures, does not directly result in a sustainable product or service that meets the WSG 1.0 success criteria and guidelines.
This methodology does not in any way add to or change the requirements defined by the normative WSG 1.0 specification, nor does it provide instructions on feature-by-feature evaluation of web content. The methodology can be used in conjunction with techniques and examples for meeting WSG 1.0 success criteria, such as the techniques documented within this STAR supplement, but does not require this or any other specific set of techniques.
This methodology is intended for people who are experienced in evaluating Web sustainability using WSG 1.0 and its supporting resources. It provides guidance on good practice in defining the evaluation scope, exploring the target website, selecting representative samples from websites where it is not feasible to evaluate all content, auditing the selected samples, reporting the evaluation findings, and if necessary - implementing sustainable change. It does not specify particular web technologies, evaluation tools, web browsers, or other software to use for evaluation. It is also suitable for use in different evaluation contexts, including self-assessment and third-party evaluation.
In many situations it is necessary to evaluate the sustainability of a website or application, for example before releasing, acquiring, or redesigning the product or service, and for monitoring the sustainability of a website or application over time. This methodology is designed for anyone who wants to follow a common approach for evaluating the compliance of websites to WSG 1.0. This includes:
This methodology is used to perform thorough evaluations of websites and applications using WSG 1.0. Before this, it may be useful to undertake a preliminary evaluation to identify obvious sustainability issues and to take a progress-over-perfection approach to tackling guidelines holistically (though note that such evaluations won't be as robust as a thorough evaluation).
Different expertise may be required to evaluate a website or application. As such, one of the below, or a combination of these at different stages of the project, may assist with understanding and evaluating the sustainability of a website or application.
This methodology is designed for evaluating both websites and applications. This includes organizations, entities, persons, events, products, and services. Websites and applications can include publicly available or internal websites; applications, intranets, online shops, dedicated mobile websites, isolated sections of a larger website, or internationalization pages (on a subdomain for example). This methodology can apply equally to any collection of materials, regardless of whether it is a part of a larger project or a dedicated entity of its own.
When a target website or application is defined for evaluation, all pages, states, and functionality within the scope of this definition must be considered for evaluation. Excluding such aspects of a website from the scope of evaluation would likely conflict with the WSG 1.0 success criteria and conformance requirements or otherwise distort the evaluation results.
Example of Website Enclosure
In the above example, if a personal portfolio website in its entirety is defined as the target for evaluation, then all of the depicted areas are within the scope of the evaluation. This includes any aggregated and embedded content such as images of work undertaken, assets that are considered a part of the website, interactive materials created, and maps to an office (if one exists), including when such parts originate from third-party sources. If only a specific website area is defined as the target for evaluation, then all the parts of this area are within the scope of the evaluation. One example could be to evaluate all of the portfolio items, as well as the individual web pages that are common to the work undertaken by the practitioner.
This methodology applies to a broad variety of website and application types. The following provides considerations for particular situations, noting that products and services may combine several aspects. Thus the following list is non-exclusive and non-exhaustive:
Note: Responsive design techniques adjust the order, flow, and sometimes behavior of the content to best suit the device on which it is used. For example, to adjust the content and functionality according to the size of the viewport, screen resolution, orientation of the screen, and other aspects of a mobile device and the context in which it is being used. In this methodology, such changes to the content, functionality, appearance, and behavior are not considered to be independent website versions but rather web page states that need to be included in the evaluation scope. As such, considerations for mobile devices, operating systems, and assistive technologies need to be taken for websites using responsive design techniques during the evaluation process.
This methodology is designed to be flexible to facilitate its applicability in different situations and contexts. The following considerations apply to particular situations and contexts for an evaluation:
This section describes the stages and activities of an evaluation procedure. The stages are not necessarily sequential. Also the exact sequence of the activities carried out during the evaluation stages depends on the type of application or website, the purpose of the evaluation, and the process used by the evaluator. Some of the activities can overlap or may be carried out in parallel.
There are five sequential steps defined in this section:
Evaluators should proceed from one step to the next, and may return to any preceding step in the process as new information is revealed to them during the evaluation process.
Due to the requirements of sustainability reporting and the success criteria contained within the WSGs, conforming to certain aspects of the specification may require obtaining or having access to internal knowledge of the website or application, or of the business behind the product or service. If such access is possible, this knowledge should be used by evaluators in accordance with any policies in place regarding data use. If, however, access is not available or cannot be provided, the use of publicly available data may serve as a general estimation for comparability, though the accuracy of results may be in question. In addition, in cases where no replacement for the data can be found, conformance evaluators should aim to identify other methods of meeting success criteria, or identify the lack of knowledge as a failure point (until such a time as the organization can disclose the information publicly).
While many of the success criteria in the WSGs can be machine-tested (automated) without human intervention, others cannot be evaluated without manual examination. In such cases, to assist with a large volume of pages, rather than sampling a subset of the website or application (which could bias the results due to the potential for missing important pages), creating a semi-automated structure around human testability through tooling (such as the use of a wizard interface) may help reduce pinch points in the evaluation process.
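As a rough illustration of such a semi-automated structure, the sketch below runs whatever automated checks exist and queues the remaining criteria for a human reviewer (e.g. via a wizard interface) rather than dropping them. The `Criterion` structure and the check names are invented for illustration; they are not part of WSG or STAR.

```python
# Semi-automated audit triage sketch: automated checks run first, and
# criteria that need human judgement are queued for a reviewer "wizard"
# instead of being excluded from the evaluation.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Criterion:
    name: str
    automated_check: Optional[Callable[[str], bool]] = None  # None => manual

def triage(criteria, page_html):
    """Split criteria into automated pass/fail results and a manual queue."""
    results, manual_queue = {}, []
    for c in criteria:
        if c.automated_check is None:
            manual_queue.append(c.name)  # shown to the reviewer in the wizard
        else:
            results[c.name] = c.automated_check(page_html)
    return results, manual_queue

criteria = [
    Criterion("has-details-element", lambda html: "<details" in html),
    Criterion("copy-is-plain-language"),  # needs a human reader
]
results, queue = triage(criteria, "<details><summary>More</summary></details>")
print(results, queue)
```

The design choice here is simply that manual criteria are never silently skipped: everything not automatable surfaces in the reviewer's queue.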
During this step the overall scope of the evaluation is defined. It is an important step as initial exploration of the target application or website may be necessary to better know specifics (and to ensure common expectations) of the product or service and the required evaluation.
During this step the evaluator explores the target website or application to be evaluated, to develop an initial understanding of the product or service and its use, purpose, and functionality. Much of this will not be immediately apparent to evaluators, in particular to those from outside the development team. In some cases it is also not possible to exhaustively identify and list all functionality, types of web pages, and technologies used to realize the website and its applications. The initial exploration carried out in this step is typically refined in the later steps as the evaluator learns more about the target website. Involvement of website owners and website developers can help evaluators make their explorations more effective.
Carrying out initial cursory checks during this step helps identify web pages that are relevant for more detailed evaluation later on. For example, an evaluator may identify web pages that seem to be lacking sustainable features and note them down for more detailed evaluation later on.
To carry out this step it is also critical that the evaluator has access to all the relevant parts of the website. For example, it may be necessary to create accounts or otherwise provide access to restricted areas of a website that are part of the evaluation. Granting evaluators such internal access may require particular security and privacy precautions.
In cases where it is feasible to evaluate all web pages and web page states of a website (highly recommended for websites under 1,000 pages), the "selected sample" in the remaining steps of this evaluation process is the entire website. In some cases, such as for small websites, the sampling procedure described below may likewise result in selecting all web pages and web page states of a website.
In cases where over 1,000 pages exist and the number of pages exceeds the ability to physically test every instance (or in cases where increased complexity reduces the capability to audit pages effectively), the evaluator selects a sample of web pages and web page states that is representative of the target website or application to be evaluated. The purpose of this selection is to ensure that the evaluation results reflect the sustainability of the product or service with reasonable confidence.
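One simple way to approach representativeness is sketched below: group page URLs by their first path segment (a rough proxy for page template) and draw a fixed number from each group, so every template is represented. The grouping heuristic, sample size, and example URLs are assumptions for illustration; a real evaluation would also force-include key pages (home, search, checkout) and a random complement.

```python
# Representative-sampling sketch: one small, seeded sample per URL "template"
# group, so evaluation results reflect the whole site with fewer pages.
import random
from collections import defaultdict
from urllib.parse import urlparse

def representative_sample(urls, per_group=2, seed=0):
    groups = defaultdict(list)
    for url in urls:
        segments = urlparse(url).path.strip("/").split("/")
        groups[segments[0] if segments[0] else "home"].append(url)
    rng = random.Random(seed)  # deterministic, so audits are repeatable
    sample = []
    for group_urls in groups.values():
        sample.extend(rng.sample(group_urls, min(per_group, len(group_urls))))
    return sample

pages = [f"https://example.org/blog/post-{i}" for i in range(500)] + \
        ["https://example.org/", "https://example.org/contact"]
print(len(representative_sample(pages)))  # 4: two blog posts, home, contact
```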
During this step the evaluator performs a detailed audit of all of the web pages and web page states selected. Carrying out this step requires the expertise described in section Required Expertise.
There are typically several ways to determine whether WSG Success Criteria have been met or not met. One example includes the set of (non-normative) Techniques for WSG provided within STAR that documents ways of meeting particular WSG Success Criteria using testable statements that will either be true or false when applied to specific web content. While evaluators can use such documented guidance to check whether particular web content meets or fails to meet WSG Success Criteria (and include this within reports), it is not required and evaluators may use other approaches to evaluate whether WSG Success Criteria have been met or not met.
While evaluation findings are reported at the end of the process, documenting them is carried out throughout the evaluation process to ensure verifiable outcomes. The documentation typically has varying levels of confidentiality. For example, documenting the specific methods used to evaluate individual requirements might remain limited to the evaluator while reports about the outcomes from these checks are typically made available to the evaluation commissioner. Website or application owners might further choose to make public statements about the outcomes from evaluation according to this methodology.
This section is non-normative.
WSG 1.0 guidelines and success criteria are designed to be broadly applicable to current and future web technologies, including dynamic applications, mobile, digital television, etc.
STAR techniques, which include code examples, resources, and tests, guide web content authors and evaluators on meeting WSG success criteria and guidelines. Techniques are updated periodically to cover additional current best practices and changes in technologies and tools.
The three types of techniques for STAR 1.0 are explained below:
Techniques are informative, meaning they are not required. The basis for determining conformance to WSG is the success criteria from guidelines within the specification and not the techniques themselves. We also caution against requiring specific techniques or tests mentioned within this document. The only thing that should be required is meeting the WSG success criteria.
Sufficient techniques are reliable ways to meet the success criteria.
Advisory techniques are suggested ways to improve sustainability. They are often very helpful, and may be a significant way of reducing emissions or meeting primary PPP objectives.
Advisory techniques are not designated as sufficient techniques for various reasons such as:
Authors are encouraged to apply all of the techniques where appropriate to best address the widest range of sustainability benefits.
Failures are things that cause sustainability issues and fail specific success criteria. The documented failures are useful for:
Content that has a failure does not meet WSG success criteria unless an alternate version is provided without the failure.
In addition to the techniques, there are other ways to meet WSG success criteria. STAR techniques are not comprehensive and may not cover newer technologies and situations. Web content does not have to use STAR techniques to conform to WSG. Content authors can develop different techniques. For example, an author could develop a technique for an existing, or other new technology. Other organizations may develop sets of techniques to meet WSG success criteria. Any techniques can be sufficient if they satisfy the success criterion.
Publication of techniques for a specific technology does not imply that the technology can be used in all situations to create content that meets WSG success criteria and conformance requirements. Developers need to be aware of the limitations of specific technologies and provide content in a way that is sustainable on multiple levels.
The Sustainable Web Design Community Group (SWD-CG) encourages people to submit new techniques so that they can be considered for inclusion in updates of the STAR 1.0 document. Please submit techniques for consideration using GitHub issues.
Each technique has tests that help:
The tests are only for a technique; they are not tests for conformance to WSG success criteria.
While the techniques are useful for evaluating content, evaluations must go beyond just checking the sufficient technique tests to evaluate how content conforms to WSG success criteria (considerations such as accessibility, privacy, security, etc.).
Failures are particularly useful for evaluations because they do indicate non-conformance (unless an alternate version is provided without the failure).
Techniques for WSG are not intended to be used as a standalone document. Instead, it is expected that content authors will usually use our quick reference to read the WSG success criteria and follow links from there to specific guidelines within the specification (including examples and techniques).
Some techniques may describe how to provide alternate ways for visitors to get content. Alternative content, files, and formats must also conform to WSG and meet relevant success criteria, thereby becoming sustainable.
The code examples in the techniques are intended to demonstrate only the specific point discussed in the technique. They might not demonstrate best practices for other aspects of sustainability, accessibility, usability, or coding not related to the technique. They are not intended to be copied and used as the basis for developing web content.
Many techniques point to "working examples" that are more robust and may be appropriate for copying and integrating into web content.
Each of the below can be shown or hidden by clicking on the technique you wish to display.
Applicability
This technique is Advisory to meet Success Criteria within 2.1 Undertake Systemic Impacts Mapping.
Description
Create a maintainable list of the different variables that may impact the sustainability of a product or service over time. By having this list in place, everyone in the creation process can closely monitor the application or website against each variable to determine if a PPP variable present on the list will require resolution before, during, or after a product or service is launched.
This list should be created during ideation if possible, but can also be machine-generated from a pre-determined list of known PPP factors using evidence and research. The content of this material could be further tested through automation if such variables can be aligned with product capabilities. At a bare minimum, however, this list must be publicly visible (such as within a sustainability statement) and utilized in-house to enact sustainable change.
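A minimal sketch of the machine-generation idea follows: filter a pre-determined list of known PPP factors against the declared capabilities of the product. Both the factor names and the capability keys are hypothetical; a real list would come from evidence and research as described above.

```python
# Hypothetical PPP factor catalogue: each factor applies only when the
# product declares the matching capability.
KNOWN_PPP_FACTORS = {
    "video-streaming-emissions": "serves-video",
    "form-data-privacy": "collects-personal-data",
    "third-party-script-governance": "embeds-third-party-scripts",
}

def impact_checklist(capabilities):
    """Return the PPP factors that apply to this product's capabilities."""
    return sorted(f for f, cap in KNOWN_PPP_FACTORS.items() if cap in capabilities)

print(impact_checklist({"serves-video", "collects-personal-data"}))
# ['form-data-privacy', 'video-streaming-emissions']
```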
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.2 Assess and Research Visitor Needs.
Description
Provide a mechanism for application and website owners to better curate their products and services around the needs of their visitors and those affected by what has been created, and in doing so reduce the PPP burden that can impact the sustainability of the website, especially around social, people-centered (and user-experience) impacts.
It should be noted that for machine automation, only quantitative feedback can be processed (and is therefore useful), and all information gathering must be done sustainably and ethically. It should also be noted that because information is being requested, internal access may be required to formally identify certain characteristics necessary for processing.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.2 Assess and Research Visitor Needs.
Description
Provide a mechanism so that websites and applications can offer contingency processes for visitors who have constraints such as an older device, an out-of-date operating system, an unusual or out-of-date browser, or a slow Internet connection. Other contingencies exist (such as geo-blocking or mobile data costs), and these can also be taken into account if detection is possible. Each of these issues can burden the user-experience and add emissions along the delivery pipeline.
If detection is not available (such as in cases where the Internet is unavailable), non-digital investigation should be conducted where possible (such as through the use of mail or telephone feedback). Once the constraint has been detected, it will often be up to the developer to create a solution, which will involve compatibility features or reducing the load on the visitor's device to increase ease of access. If there are questions regarding the potential compatibility or availability of services due to a certain configuration, seek the manufacturer's advice for further guidance.
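Where detection is possible server-side, one avenue is the standard `Save-Data`, `Downlink`, and `ECT` client hint request headers. The header names are real; the thresholds below and the decision to treat them as a lightweight-experience signal are illustrative assumptions.

```python
# Sketch of detecting constrained clients from HTTP client hint headers.
def wants_lightweight(headers):
    h = {k.lower(): v for k, v in headers.items()}
    if h.get("save-data", "").lower() == "on":
        return True
    try:
        if float(h.get("downlink", "inf")) < 1.0:  # estimated Mbps
            return True
    except ValueError:
        pass
    # effective connection type reported by the Network Information API
    return h.get("ect", "") in {"slow-2g", "2g"}

print(wants_lightweight({"Save-Data": "on"}))              # True
print(wants_lightweight({"ECT": "4g", "Downlink": "10"}))  # False
```

Note that these hints are only sent when the client opts in, so their absence cannot be read as evidence of an unconstrained device.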
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.2 Assess and Research Visitor Needs.
Description
Go beyond the remit of WCAG and accessibility by default by encouraging designers and developers of products to test their creations against a range of specific, named types of disabilities, both to better understand the conditions and to test more structurally and sustainably for the unique issues each disability brings to technology in terms of adaptation.
It is strongly encouraged that teams work with individuals with disabilities when attempting this task, as they will have the lived experience to help you better adapt your products and services to their needs. If this isn't possible, or you wish to theorize against certain pre-built profiles, you can refer to established medical texts to identify potential symptoms, map these against issues they may cause with technology, and build solutions and/or use simulators (see color blindness) to help test for issues along the creation process.
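One concrete, machine-testable check that supports this kind of structured testing is the WCAG 2.x contrast ratio, which matters for low-vision and color-blind visitors. The sketch below implements the published relative-luminance and contrast-ratio formulas from sRGB values:

```python
# WCAG 2.x contrast ratio from 0-255 sRGB triples.
def relative_luminance(rgb):
    def channel(c):
        c = c / 255
        # sRGB linearization per the WCAG definition
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A ratio of at least 4.5:1 is the WCAG AA threshold for normal body text, which gives an automated pass/fail signal before any human simulation work begins.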
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.3 Research Non-Visitor's Needs.
Description
Utilize the tooling and resources provided by external parties to help you identify any efficiency and sustainability savings when having to use such services, or when non-digital forces come into play, such as if your product or service involves the physical delivery of goods.
Gauging the PPP sustainability of such forces can be difficult, especially as third parties may omit data or not provide a complete picture of their scope emissions. Therefore, care must be taken when choosing providers from the outset, and consideration must also be given to the impact of using the API to gather such data together.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.6 Create a Lightweight Experience by Default.
Description
Reduce the cognitive load for visitors between the initial visit to a website or application and reaching their final destination where the information they are seeking is located. The path to locating such information can be routed through multiple interactions such as a search mechanism, hyperlinks, form controls (where appropriate), and progressive disclosure features (reducing the complexity involved in reaching a destination will reduce the rendering load leading to sustainability benefits).
While there is no hard-and-fast rule regarding the number of clicks required to meet a visitor's expectations, clear way-finding should be included (breadcrumbs can be machine-identified, as can landing-page regions that guide visitors along the path). It is therefore a sensible precaution to ensure that the steps required are well documented to avoid overwhelming a visitor. If required, click-through testing can be performed to identify bottlenecks in complex applications.
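A small sketch of the breadcrumb machine-identification mentioned above, using the common authoring convention of a `nav` element whose `aria-label` contains "breadcrumb" (the convention is widely used but not the only possible marker, so this is a heuristic, not a complete test):

```python
# Detect a breadcrumb landmark in page markup with the standard library.
from html.parser import HTMLParser

class BreadcrumbFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        label = dict(attrs).get("aria-label", "").lower()
        if tag == "nav" and "breadcrumb" in label:
            self.found = True

finder = BreadcrumbFinder()
finder.feed('<nav aria-label="Breadcrumb"><ol><li><a href="/">Home</a></li>'
            '<li>Portfolio</li></ol></nav>')
print(finder.found)  # True
```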
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.6 Create a Lightweight Experience by Default.
Description
Ensure that visitors can read the materials produced for your website or application and click on interactive content (such as links or buttons) without being impaired by other content whose positioning obscures the material and prevents the page from functioning correctly (wasted clicks, especially those monitored by JavaScript functions, can lead to emission costs, so it's best to avoid unnecessary actions).
There will be occasions when content should be obscured to progressively disclose additional content, but for unintentional overlapping, deliberate obscuring for attention (which is unnecessary), and all cases where the visitor did not ask to be impacted, the visibility of the content should be assured (either manually, or mechanically by identifying object locations and dimensions and, where conflicts arise, issuing remedial advice).
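The mechanical "object locations and dimensions" check reduces to rectangle intersection. The sketch below assumes bounding boxes have already been harvested (for example from a headless browser); the element names and coordinates are invented for illustration:

```python
# Flag overlapping bounding boxes; each box is (x, y, width, height).
def overlaps(a, b):
    """True if two axis-aligned boxes intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

button = (100, 100, 80, 30)
overlay = (90, 90, 300, 200)   # a banner covering the button
sidebar = (500, 0, 200, 600)   # well clear of the button
print(overlaps(button, overlay), overlaps(button, sidebar))  # True False
```

Pairs of interactive elements that intersect unexpectedly are the candidates for remedial advice.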
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.6 Create a Lightweight Experience by Default.
Description
Help increase the readability of content within a website or application by breaking down larger pieces of content into smaller constituent pieces. In the same way that a book is broken down into chapters, using progressive disclosure you can reveal sections when the reader is ready, avoiding information overload (the same can be done with long tasks).
For machine testability, identification of such progressive disclosure markers can be found using the HTML details or dialog element, the CSS target selector (and its accompanying HTML ID attributes), the CSS checkbox hack, the use of JavaScript HashBangs, and also the use of state (and content) changes through various frameworks. Each of these can build a picture of how content displays on-screen during the user-experience, and some can load content only when it is requested which can reduce data transfer and rendering requests leading to sustainability improvements.
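As a sketch of identifying some of those markers with only the standard library, the scanner below counts native `details`/`dialog` elements and finds fragment links whose target `id` exists in the same page (the `:target` pattern). It covers only the HTML-level markers, not framework state changes:

```python
# Scan markup for progressive-disclosure markers.
from html.parser import HTMLParser

class DisclosureScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.native = 0                     # <details> / <dialog> count
        self.ids, self.fragment_links = set(), set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("details", "dialog"):
            self.native += 1
        if "id" in attrs:
            self.ids.add(attrs["id"])
        href = attrs.get("href", "")
        if tag == "a" and href.startswith("#") and len(href) > 1:
            self.fragment_links.add(href[1:])

scanner = DisclosureScanner()
scanner.feed('<a href="#faq">FAQ</a><section id="faq"></section>'
             '<details><summary>More</summary>Hidden</details>')
print(scanner.native, scanner.fragment_links & scanner.ids)  # 1 {'faq'}
```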
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.6 Create a Lightweight Experience by Default.
Description
Reduce problematic friction within the user-experience and help reduce the number of dark patterns which might occur onLoad and during the interaction process as these can cause considerable sustainability issues as a result of wasted emissions. By ensuring that the visitor remains in control of the interface and that websites and applications work as expected, a more ethical and optimized product or service is likely to result.
Machine testing can analyze JavaScript for the use of popup events, the appearance of "_blank" within hyperlinks or frames, or other functionality that may produce overlays. It's important to question the acceptability of such usage; for example, opening links in new windows may be acceptable for non-web formats like PDF, but otherwise it's best to advise against its use.
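A coarse first-pass scan for those patterns can be done with regular expressions, as sketched below; this is a filter to surface candidates for human review, not a full JavaScript analysis:

```python
# Count target="_blank" attributes and window.open() calls in raw markup.
import re

BLANK_TARGET = re.compile(r'target\s*=\s*["\']_blank["\']', re.IGNORECASE)
POPUP_CALL = re.compile(r'window\.open\s*\(')

def popup_findings(markup):
    return {
        "blank_targets": len(BLANK_TARGET.findall(markup)),
        "window_open_calls": len(POPUP_CALL.findall(markup)),
    }

html = ('<a href="report.pdf" target="_blank">PDF</a>'
        '<script>window.open("/promo");</script>')
print(popup_findings(html))  # {'blank_targets': 1, 'window_open_calls': 1}
```

Each finding would then be judged for acceptability (e.g. the PDF link above may be fine) rather than automatically failed.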
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.7 Avoid Unnecessary or an Overabundance of Assets.
Description
Provide a mechanism by which visitors can reduce their impact by choosing to have a more basic experience (to what extent will be up to the developer), downloading fewer external resources, scripts, and other assets to their machine to be processed and rendered. With this action, a website or application that is already optimized for sustainability can go one step further in providing a barebones format for those wishing to prioritize lowering their footprint over having added functionality.
This could be achieved by defining which assets have been added to the product or service to enhance the experience but are not of critical importance (such as background images or decorative flourishes). Other forms of decoration can also be machine-identified, such as CSS flourishes applied to content and images, background sounds, custom cursors, and custom scrollbars.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.8 Ensure Navigation and Way-Finding Are Well-Structured.
Description
Verify that your navigation menu is crafted so that it is accessible to visitors and user-agents and functions correctly, so that it doesn't lead to issues when browsing the information architecture of a website or application. If the information architecture were to fail, this would lead to PPP failings: visitors would no longer be able to use the product or service without encountering barriers to access, and would risk further emissions attempting to solve the issue.
Machine testing the structure of a navigation menu will usually involve the nav, ul, or ol elements within the header of a product or service, which repeats across pages. Beyond verifying that links function as expected and that click targets (sizes) are large enough on both desktop and mobile platforms, any search functionality will also need to be tested to ensure that results are provided upon submission.
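A minimal sketch of the structural part of this test, under the assumption that string checks stand in for a proper DOM walk, might look like this:

```javascript
// Illustrative sketch: confirm that the page header contains a nav
// element and that the nav wraps a list (ul or ol).
function checkNavStructure(html) {
  const header = (html.match(/<header[\s\S]*?<\/header>/i) || [""])[0];
  const nav = (header.match(/<nav[\s\S]*?<\/nav>/i) || [""])[0];
  return {
    hasNav: nav !== "",
    navUsesList: /<(ul|ol)[\s>]/i.test(nav),
  };
}
```

Link functionality, click-target sizes, and search submission would require browser-driven testing beyond this static check.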
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.8 Ensure Navigation and Way-Finding Are Well-Structured.
Description
Increase the findability of your website's content among search engines and social networks. By creating an XML sitemap in the base directory of your website, and potentially a (human-readable) HTML sitemap to supplement it, you provide an index of all publicly available pages. This acts as a signpost for individuals seeking information without knowing exactly where it is stored.
When creating an HTML sitemap, it is important to categorize pages into lists based on which section of the website they appear in (for human readability) rather than providing everything in a single long list. Structurally, you could use nested lists to provide this distinction, or use subheadings with individual lists. Both methods should be semantically correct and accessible to meet the human requirements of PPP targets.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.8 Ensure Navigation and Way-Finding Are Well-Structured.
Description
Ensure that visitors and users of an application or website can monitor changes and events that occur over time. The need to be able to track such events is critical as customers become reliant on products and services. This reliance on such features means that time will have been invested into learning how to set up and use the product, which if it fails could have a sustainability impact not just for the product or service owner, but for those reliant on its ability to function.
Every product or service will have its own update schedule and there will be no hard and fast rule in terms of how often a website or application should be offering new releases. That being said, there should be a mechanism in place to describe news events both in a syndicated format and on the website, and a system status mechanism to identify any current issues with the product or service.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.9 Respect the Visitor's Attention.
Description
Use the fewest permission requests required to achieve a given task. Allowing the visitor to control when they receive information is critically important, but it's also important not to burden them with requests to access hardware or to do things that might prove invasive (such as triggering notifications or a pop-up window).
For machine testability, JavaScript can be scanned for methods such as requestPermission(), and the Permissions API (navigator.permissions.query()) can identify whether a website or application has been granted access to various hardware or potentially abusable features within the web browser. Unsubscribe and delete / freeze account links could also be identified (where internal access is granted).
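A static scan for permission-requesting calls might be sketched as below. The pattern list is an assumption for illustration; a runtime audit in the browser could instead use navigator.permissions.query() to check granted state.

```javascript
// Illustrative sketch: statically scan script source for calls that
// request potentially invasive browser capabilities.
const PERMISSION_PATTERNS = [
  { name: "notifications", re: /Notification\.requestPermission/ },
  { name: "geolocation", re: /navigator\.geolocation\./ },
  { name: "camera-or-mic", re: /getUserMedia\s*\(/ },
  { name: "midi", re: /requestMIDIAccess\s*\(/ },
];

function scanPermissionRequests(js) {
  return PERMISSION_PATTERNS
    .filter(({ re }) => re.test(js))
    .map(({ name }) => name);
}
```

The resulting list can then be compared against the task the page actually performs to judge whether each request is warranted.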
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.9 Respect the Visitor's Attention.
Description
Lower the barriers to entry with content. When dealing with complex, multi-page websites, it becomes increasingly important to have features in place that allow visitors to bypass repeated elements they have already filled in, so they can achieve goals more quickly (a key variable for sites with large membership numbers).
There are functional ways to increase the flow through a website and allow visitors to accomplish a task that can also be machine-testable. This technique is most useful when dealing with complex pieces of content, multi-step products, or services that have a lot of functionality that may require shortcuts to allow faster decision-making under certain conditions.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.9 Respect the Visitor's Attention.
Description
Allow the visitor to achieve the task they initially set out to do and not trigger unnecessary issues in the user-experience which would undermine their ability to complete that task. Issues such as infinite scrolling can promote a need for continued browsing (which can load excessive data and have a sustainability impact on rendering), and loading overlays that divert visitors from their path will have wider societal impacts on PPP targets which need to be considered.
Within this technique, machine testability can be considered by examining whether common pagination landmarks (previous, next, <numbers>, etc.) exist within category listings, and whether overlays or attention-keeping features are detected via common patterns (such as those mentioned in the examples below). If infinite-scrolling (screen addition) mechanisms are determined to be in use within a website, this should be considered a failing mark against the related Success Criteria.
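One possible sketch of this audit, assuming regex checks in place of DOM and script analysis, looks for conventional pagination landmarks and a common infinite-scroll hook (a scroll listener paired with content fetching):

```javascript
// Illustrative sketch: detect pagination landmarks and a likely
// infinite-scroll pattern in page source.
function auditPagination(html, js = "") {
  const hasPagination =
    /rel=["']?(prev|next)/i.test(html) ||
    /<nav[^>]*(aria-label|class)=["'][^"']*pagination/i.test(html);
  // Heuristic: loading more content whenever the user scrolls.
  const infiniteScroll =
    /addEventListener\s*\(\s*["']scroll["']/.test(js) &&
    /(fetch|XMLHttpRequest|append)/.test(js);
  return { hasPagination, infiniteScroll };
}
```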
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.10 Use Recognized Design Patterns.
Description
Use repeating patterns that visitors are likely to recognize from visits to other websites to build a sense of comfort and familiarity with your products and services. By using established design patterns that are repeatable, recognizable, and testable, you can reduce the amount of confusion likely to occur within a user-interface, and this will increase the speed at which visitors adapt (which has sustainability benefits).
It should be noted that for machine testability, heuristic recognition (identification of patterns in code) will likely allow product creators to recognize the implementation of certain patterns, especially when libraries or design systems are utilized as the backbone of a product or service. Through this, recommendations can be made to improve layouts.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.11 Avoid Manipulative Patterns.
Description
Prevent the visitor from being manipulated by a recognized dark pattern that falls under the umbrella of deceptive design. While there are a large number of ways to code ethically, using any one of these named practices would constitute a failure against the WSG guidelines; as such, ensuring that these practices are not used is worthy of inclusion.
Machine testability for deceptive patterns will vary based on the type in use; however, artificial intelligence could potentially be integrated into tooling to assist with detecting more subtle uses of manipulation. More obvious issues derived from code injection can be flagged and reported as problematic.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.11 Avoid Manipulative Patterns.
Description
Eliminate the PPP (third-party emissions, privacy, and performance) issues that relate to the use of analytics software. While this technique doesn't advocate that such software be removed in all instances, tooling can flag poor-performing options and recommend more optimized solutions (or request removal).
This should take into consideration the need for analytics in other success criteria (for research metrics) before recommending removal and identifying low-carbon options for third-party solutions (where data exists). Additional criteria based on PPP values (such as privacy and security) should also be considered.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.11 Avoid Manipulative Patterns.
Description
Proofread your code and content for practices that might detract from the user-experience but are done purely in an attempt to gain higher rankings within search engines. While such a position would be beneficial, using such techniques will often tarnish the reputation of your site on these services, and your rank will suffer significantly.
Your visibility in search engines and social media matters. Visibility is how people find you on the web. If people cannot find you or sections of your website or application, they will consume resources in that effort (or trying to find a replacement). Additionally, bad SEO practices targeting only machines consume visitor resources for rendering and produce excess emissions.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.12 Document and Share Project Outputs.
Description
Allow code that has been produced for an individual project to be re-used and reapplied across future work to reduce redundancy. This is most commonly seen in frameworks and libraries but isn't limited to them; it can also apply to documentation and snippets.
This technique is most useful when it is housed within an open source location to foster a culture of contribution, remixing, and improving of the work. While machine testing of code to determine functionality based on its apparent need would prove problematic, the ability to use importable code can be verified.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.12 Document and Share Project Outputs.
Description
Guarantee that those without a working knowledge of an aspect of design or development will be provided with the information they require so that they can safely work with the files being presented to them during the creation and maintenance process.
Because emissions do not start or stop at website usage, and emissions are created during the ideation and creation process, it is crucial that all involved with the project, even clients who may not be used to the types of technology to which this specification refers, can optimize their ability to produce high-quality output, as this will be reflected in sustainable websites and applications.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.12 Document and Share Project Outputs.
Description
Provide internal or external developers with the information they require to understand the justification behind certain coding choices, and what individual pieces of code exist for. While such features can be itemized within the documentation, code comments are useful for providing a portable library of notes with the source code itself.
Every language will have its own descriptive mechanism for producing code comments, and developers should follow such best practices. Conventions could be utilized within comments, such as including links to provide added context. Comments can be machine-detected by their opening and closing statements and paired with the code that follows them.
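The per-language delimiter pairing described above could be sketched as follows; the syntax table covers only a few illustrative languages, and line-by-line matching is a simplification (block comments spanning lines would need stateful parsing).

```javascript
// Illustrative sketch: detect comment lines by each language's
// opening delimiters so they can be counted or paired with code.
const COMMENT_SYNTAX = {
  javascript: [/^\s*\/\//, /^\s*\/\*/],
  css: [/^\s*\/\*/],
  html: [/^\s*<!--/],
  python: [/^\s*#/],
};

function countCommentLines(source, language) {
  const patterns = COMMENT_SYNTAX[language] || [];
  return source
    .split("\n")
    .filter((line) => patterns.some((re) => re.test(line))).length;
}
```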
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.13 Use a Design System To Prioritize Interface Consistency.
Description
Ensure that stylistic choices made by the design and development team are based upon prescribed defaults. This will encourage consistency not just within the layout but also in terms of writing style, accessibility, sustainability, and other variables being monitored.
There are already several established design systems, which provide a good baseline for what should be included. Either deploy an existing open source solution or craft a custom model that meets the needs of your product or service. Machine testability will rely on identifying the design system and then verifying that the materials are being used in the wild.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.14 Write With Purpose, in an Accessible, Easy To Understand Format.
Description
Make the content within your website or application more legible to individuals who may struggle with more technical language. Techniques such as lowering the reading age, removing industry terms (or clearly defining them), and having translations for international audiences can help get your message across.
This technique is most useful when dealing with a large body of content that the visitor has to wade through to complete a task. Being able to reduce screen time by improving the ease at which visitors can comprehend a topic will ultimately allow for an experience that feels faster and discriminates less against individuals who may struggle with highly technical content.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.14 Write With Purpose, in an Accessible, Easy To Understand Format.
Description
Avoid visitors being hit with a wall of text, which risks abandonment of the page (and a wasted session with emissions attached). By structuring large pieces of information into cleaner, smaller chunks and utilizing friendlier features such as lists and tables when appropriate, the information will look less intimidating.
For machine testability, identifying clear headings, visual hierarchy, spacing, line breaks, lists, use of images, and other features of HTML can help calculate the relative density of the content and whether it could be better presented. This benefits not just readers but can also (if used with progressive disclosure) reduce the initial impact on rendering engines, which can have a sustainable impact on hardware.
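A crude density heuristic along these lines might compare word count against structural breaks. The threshold and the tag list here are illustrative assumptions, not normative values:

```javascript
// Illustrative sketch: a "wall of text" heuristic comparing words
// against structural breaks (headings, lists, tables, paragraphs).
function contentDensity(html) {
  const words = (html.replace(/<[^>]+>/g, " ").match(/\S+/g) || []).length;
  const breaks =
    (html.match(/<(h[1-6]|ul|ol|table|p|br)[\s>]/gi) || []).length;
  const wordsPerBreak = breaks === 0 ? words : words / breaks;
  // 120 words per break is an assumed threshold for illustration.
  return { words, breaks, wallOfText: wordsPerBreak > 120 };
}
```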
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.14 Write With Purpose, in an Accessible, Easy To Understand Format.
Description
Reduce the amount of time visitors and potential visitors spend churning through data attempting to locate you (or pages relating to you) through search engines or social media. These types of requests have an emissions impact from the hardware used to deal with the requests to the rendering of the results, therefore getting the right information to visitors as quickly as possible is critical to having a sustainable product.
This list should be created as early as possible but could be machine-generated from a pre-determined list of SEO variables. The content could then be machine-tested using automation if the variables can be aligned with what has been implemented. Additionally, publicly-facing social media handles identified within a product or service website can be detected.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.
Description
Identify whether the images you are using are, firstly, necessary (which may be difficult to machine test), and secondly, use a format that is web appropriate. While these considerations may seem small, they can lead to dramatic performance benefits that reduce loading times and page bloat (an increasing issue on today's Internet).
Implementations should also weigh necessity against the total image count (too many beyond a certain ratio could be problematic for those with bandwidth limits). Machine testability for web graphics can easily detect older formats that should be replaced immediately, formats that could be changed for optimization purposes, and formats that could become vector graphics.
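Such format classification could be sketched as below. The category lists and labels are illustrative assumptions; a real audit would also inspect file bytes rather than trusting extensions.

```javascript
// Illustrative sketch: classify image references by file extension.
const LEGACY = ["bmp", "tiff"];
const MODERNIZABLE = ["jpg", "jpeg", "png", "gif"];

function classifyImage(url) {
  const ext = (url.split(".").pop() || "").toLowerCase();
  if (LEGACY.includes(ext)) return "replace-immediately";
  if (MODERNIZABLE.includes(ext)) return "consider-webp-or-avif";
  if (ext === "svg") return "ok-vector";
  if (["webp", "avif"].includes(ext)) return "ok-modern";
  return "unknown";
}
```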
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.
Description
Ensure that images are not only served at the correct resolution and size for the device requesting them but also compressed using a suitable algorithm, providing the asset at the lowest bandwidth requirement possible to the visitor, thereby improving multiple PPP variables and increasing performance.
Compression tests can be run against every image to see if improvements can be made (above and beyond changing formats) without losing so much image quality that visibility becomes degraded and recognition suffers. The srcset and sizes attributes can also help provide alternative images for different resolutions as required by device requirements (though the default image should be set to a median value to balance size and weight).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.
Description
Prevent image assets included within a website or application that are either visually below the fold or hidden due to progressive disclosure from being loaded until they appear in view of the visitor's screen. This can save bandwidth and the resulting processing of the image at the client-side which otherwise would occur and may not even be used if the visitor chooses to click elsewhere or close before reaching it.
While it is preferable that only images below the fold have the HTML lazy-loading attribute attached to them, there does not appear to be a penalty for including it on all images, so if machine testing cannot differentiate, this shouldn't be marked as a failing point. Additionally, the position of the fold can change based on the type of device being used, so it's worth factoring this into testing processes.
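In line with the coverage-counting approach above, a checker might simply report how many images carry loading="lazy" rather than failing the page. This is an assumed, regex-based sketch:

```javascript
// Illustrative sketch: report how many img tags carry loading="lazy".
function lazyLoadCoverage(html) {
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  const lazy = imgs.filter((tag) => /loading=["']?lazy/i.test(tag));
  return { total: imgs.length, lazy: lazy.length };
}
```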
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.
Description
Ensure that visitors with bandwidth restrictions or slow connections are given a mechanism to access your content when their ability to access critical information may be stressed. Providing such options may include an in-page solution (either by default or by choice), or through the offering of a highly optimized alternative for such requests.
When providing such requests through a secondary stream (such as a low-fidelity option), this channel mustn't become as bloated as the primary channel. As such, guidelines should be drawn up regarding what can and cannot be included to ensure limitations are placed to maintain a basic level of service while not offering the full capability of the primary product.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.
Description
Ensure that you provide details about any image reduction targets and techniques within a publicly available media management and use policy. This will help you to establish the criteria you will attempt to meet across your product or service and allow visitors to identify any failings if they spot them (which can be fixed during the product lifecycle).
Machine testability for such processes can involve first ensuring that the media management and use policy exists, then checking that a section on images exists within it, as a definable way of identifying that the subject is being addressed. Content can be checked for accuracy through auditing processes, and failings flagged as issues requiring resolution to maintain compliance.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.
Description
Ensure that media will only run within a website or application on the command of the visitor and will not begin consuming resources and thrashing hardware immediately upon the page load event. Because this involves HTML (attributes and background media), JavaScript, and images (animated GIFs), a multi-functional approach is required.
Because media can involve audio and video, implementors will need to ensure that both are treated with equal respect as they both consume resources and contribute to emissions (to varying degrees). Additional care will also need to be taken to identify background media and prevent its sudden onset unless the function of that page or application is to show a media file (with nothing else).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.
Description
Ensure that both the formats being used for media have widespread browser compatibility and that the audio and video being served have been compressed using a suitable algorithm to provide the asset at the lowest bandwidth requirements possible to the visitor, thereby improving multiple PPP variables and increasing performance.
Compression tests can be run against every video and audio file to see if improvements can be made without losing so much quality that the artifact becomes degraded beyond usefulness. Embedded audio and video formats with browser-compatibility issues can also be machine-detected, and recommendations for alternatives can be offered (and implemented).
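Format compatibility could be flagged statically as sketched below. The supported-format list is an assumption for illustration; in a browser, HTMLMediaElement.canPlayType() could confirm support at runtime instead.

```javascript
// Illustrative sketch: flag media sources whose extension is not on
// an assumed list of widely supported web formats.
const WIDELY_SUPPORTED = ["mp4", "webm", "mp3", "ogg", "aac", "wav"];

function flagMediaSources(urls) {
  return urls.filter((url) => {
    const ext = (url.split(".").pop() || "").toLowerCase();
    return !WIDELY_SUPPORTED.includes(ext);
  });
}
```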
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.
Description
Provide a mechanism that ensures visitors remain in control of when highly impactful (in terms of emissions) media or third-party materials begin to transmit to their devices. By having a static facade that begins the render upon clicking (rather than simply hiding the video, audio, or interactive element behind the static image), you ensure that the content has a lazy-load effect even if autoplay or the interaction is not active (as pre-buffering may occur during the render process).
Because there is no browser default method for facades (at the time of publication), identification of such events for machine testability will come down to them being clearly labeled using ID or class attributes (or through heuristic testing for an image that is either anchor linked or has a button attached with a JavaScript event handler to process the content switch).
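The heuristic described above might be sketched as follows. The "facade" class name is an assumed labeling convention, not a standard, and the image-plus-button check is deliberately crude:

```javascript
// Illustrative sketch: heuristic facade detection, via an explicit
// class label or an image paired with a click-handling button.
function looksLikeFacade(html) {
  // Assumed convention: authors label facades with a "facade" class.
  if (/class=["'][^"']*facade/i.test(html)) return true;
  return /<img\b[^>]*>/i.test(html) &&
    /<button\b[^>]*>/i.test(html) &&
    /addEventListener\s*\(\s*["']click["']|onclick=/i.test(html);
}
```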
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.
Description
Ensure that videos are not only served at the correct resolution but also that assets are identified which can help visitors decide, before choosing to load the main video, whether the media is right for them. The location of these links should be connected to the embedded media, and they should either replace the original video upon click or be downloadable by the visitor (they should not increase the number of embeds).
JavaScript can be used to detect the resolution of the device accessing the media and serve the correct media size for the visitor requesting it (ensuring smaller devices don't get larger media files). Regarding the types of links to be offered, they should provide the visitor with more choice in terms of the quality of the video being consumed or provide added context to its purpose.
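Resolution-based selection could be sketched as a pure function like the one below; the breakpoints and rendition filenames are assumptions for illustration. In the browser it might be called as pickRendition(window.innerWidth, window.devicePixelRatio).

```javascript
// Illustrative sketch: pick a video rendition from CSS width and
// device pixel ratio. Breakpoints and filenames are assumed.
const RENDITIONS = [
  { maxWidth: 480, file: "video-480p.mp4" },
  { maxWidth: 960, file: "video-720p.mp4" },
  { maxWidth: Infinity, file: "video-1080p.mp4" },
];

function pickRendition(cssWidth, pixelRatio = 1) {
  const effective = cssWidth * pixelRatio;
  return RENDITIONS.find((r) => effective <= r.maxWidth).file;
}
```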
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.
Description
Ensure that you provide details about any audio or video reduction targets and techniques within a publicly available media management and use policy. This will help you to establish the criteria you will attempt to meet across your product or service and allow visitors to identify any failings if they spot them (which can be fixed during the product lifecycle).
Machine testability for such processes can involve first ensuring that the media management and use policy exists, then checking that a section on audio, video, or media exists within it, as a definable way of identifying that the subject is being addressed. Content can be checked for accuracy through auditing processes, and failings flagged as issues requiring resolution to maintain compliance.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.17 Take a More Sustainable Approach to Animation.
Description
Determine whether animation usage within an application or website will diminish the user-experience. Calculate the number of animations that occur, identify whether this number can first be reduced, then run each of them individually to identify whether they cause lagging on lower-specification devices; offering a resolution will help reduce the CPU and GPU burden, which can consume vast visitor resources.
It should be noted that the types of animations used can dramatically affect the rendering process. Certain CSS transitions and animations are more process-efficient than others, and JavaScript animation can have differing impacts compared to CSS, in both the choice of animation and how it has been put together. This should be considered when testing, to offer low-impact animations and effects.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.17 Take a More Sustainable Approach to Animation.
Description
Provide a mechanism to bypass animated effects that may be included within the page of a website or application. The buttons provided must include a stop / opt-out button as a mandatory option but may also include an option to pause and restart play as optional elements. These buttons must be visible at page load and before animation starts thereby giving the visitor time to action them before effects begin.
It is preferable that the buttons remain visible when scrolling the page; however, if the animation is restricted to a certain part of a page, then the buttons can also be restricted to that region and be classed as passing the criteria. There should be one universal set of buttons for all animation rather than individual options for every effect (except media such as video).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.18 Take a More Sustainable Approach to Typefaces.
Description
Avoid rendering issues surrounding emojis and symbol typefaces that arise from the operating system an individual may be using or the localization of their device. These can inherently cause accessibility issues which reduce readability, as well as the aforementioned visual glitches that can potentially have an impact on system hardware.
For machine testability, flagging symbol fonts (and potentially emojis where operating-system compatibility differences may be an issue) will increase readability if resolutions are put in place. One consideration may be to avoid using symbol fonts without an explicit justification and to only use emojis that enjoy widespread operating-system compatibility.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.18 Take a More Sustainable Approach to Typefaces.
Description
Optimize a typeface to ensure that when custom fonts are provided, they are implemented as sustainably as possible, requiring the fewest resources to download and render. By placing restrictions on the number of custom typefaces and using a highly optimized format (ideally subsetted), you will have a highly compressed file.
Multiple variables contribute to the size of a typeface; as such, it is difficult to specify a hard number to aim for. But if (for example) you aren't using international characters and you can subset your font, eliminating such waste from the font's character table could save precious resources and reduce the file size considerably (reducing the system resource load upon rendering).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.
Description
Expand the compatibility of non-native Web documents and reduce the reliance on proprietary formats, which can become an issue for visitors who may not have access to the software required to view the documents in question (whether due to cost, the time expense of installing additional software, or the format no longer being maintained).
The alternative format provided should be in HTML as this can be marked up to be Web accessible (and as an open format it can be maintained to be sustainable to meet PPP targets). The choice of which format to use should be clearly identified both using text and (if possible) using iconography, and if a default format needs to be set, having the open format is best.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.
Description
Ensure cross-platform compatibility when custom fonts are utilized within a product or service. This should be done by providing at least three fallbacks across a variety of different desktop operating systems (Windows, Mac, and Linux), mobile platforms (Android and iOS), and finally offering a generic font family as a last resort to fall back upon.
Machine testing for suitable typefaces will involve a list of web-safe fonts for each platform to ensure a high probability of compatibility. Statistics about the usage of each platform can be gathered from analytics packages and used to determine which operating systems require support; from there, using a list of pre-installed fonts regarded as web-safe will help you define a sustainable stack.
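A stack-verification sketch, assuming simplified CSS parsing, could check that each font-family declaration carries the recommended three fallbacks plus a generic family:

```javascript
// Illustrative sketch: check each font-family declaration for at
// least three fallbacks and a trailing generic family.
const GENERIC = ["serif", "sans-serif", "monospace", "cursive",
  "fantasy", "system-ui"];

function checkFontStacks(css) {
  const decls = css.match(/font-family\s*:[^;}]+/gi) || [];
  return decls.map((decl) => {
    const fonts = decl.split(":")[1].split(",")
      .map((f) => f.trim().replace(/["']/g, ""));
    return {
      stack: fonts,
      enoughFallbacks: fonts.length >= 4, // custom font + 3 fallbacks
      endsWithGeneric:
        GENERIC.includes(fonts[fonts.length - 1].toLowerCase()),
    };
  });
}
```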
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.
Description
Ensure that alternative text is available for descriptive images that are non-decorative and are important to the content of the website or application. Within HTML this can be provided either through the alternative text (alt) attribute or through figure captions (which are associated with an image, providing added context).
Machine testability should flag any img element that has no alternative text unless it exists within a figure element that contains a figcaption element. There may be cases where decorative images do not require alternative text; however, these (arguably) should be identified as such (for example, with an empty alt attribute) so that assistive technologies know they can be skipped rather than simply ignored.
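This rule could be sketched as below; the regex parsing is a stand-in for a proper DOM walk and does not handle nested figures or multi-line edge cases.

```javascript
// Illustrative sketch: flag img elements lacking an alt attribute,
// exempting images inside a figure that carries a figcaption.
function findMissingAltText(html) {
  // Drop figure blocks that contain a figcaption; their images are
  // described by the caption.
  const remaining = html.replace(/<figure[\s\S]*?<\/figure>/gi,
    (block) => (/<figcaption[\s>]/i.test(block) ? "" : block));
  return (remaining.match(/<img\b[^>]*>/gi) || [])
    .filter((tag) => !/\balt\s*=/i.test(tag));
}
```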
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.
Description
Offer a transcript of the content of an audio file or video. This is especially useful within podcasts or shows that follow the linear progress of events that can be organized into chapters. How content is chosen to be written up is at the discretion of the author but to pass the criteria it must be accessible, understandable, and content complete.
As with all generated external HTML documents, for machine testability (this will include other examples such as UX51), the generated content must itself pass the WSG guidelines in being sustainable (meeting the Success Criteria as an HTML document that generates emissions). The document can be tested against the techniques laid down in this reference.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.
Description
Enhance existing media that has been provided to ensure maximum legibility and comprehension. In doing so, the viewer is less likely to rewind content repeatedly in an attempt to understand dialog they had difficulty hearing (rewinding which would in turn cause bursts of CPU and GPU activity during this media manipulation).
Layering additional context upon a video may initially have an additional outlay in terms of emissions (due to the loading of additional files or rendering extra content upon the screen) but because of its social and societal benefits, it meets PPP targets and as such should be prioritized. Testers should therefore seek to identify such features and flag non-availability as an issue.
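One hypothetical check for such features, sketched with Python's standard-library HTML parser, flags video elements that ship without a captions or subtitles track (a real tester would also inspect media loaded via script):

```python
from html.parser import HTMLParser

class CaptionChecker(HTMLParser):
    """Flag <video> elements that contain no captions or subtitles <track>."""
    def __init__(self):
        super().__init__()
        self.in_video = False
        self.has_track = False
        self.current_src = "?"
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "video":
            self.in_video = True
            self.has_track = False
            self.current_src = attrs.get("src", "?")
        elif tag == "track" and self.in_video:
            if attrs.get("kind") in ("captions", "subtitles"):
                self.has_track = True

    def handle_endtag(self, tag):
        if tag == "video":
            if not self.has_track:
                self.flagged.append(self.current_src)
            self.in_video = False

c = CaptionChecker()
c.feed('<video src="talk.mp4"></video>'
       '<video src="demo.mp4"><track kind="captions" src="demo.vtt"></video>')
```

The first video is flagged as lacking the enhancement; the second passes.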
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.20 Provide Accessible, Usable, Minimal Web Forms.
Description
Reduce the amount of interference that occurs when information must be entered into a form control within a user interface. Identify how many elements must be filled out (eliminating those which are not mandatory), clearly label how such components need to be filled out (and how many steps there are), and link to data collection policies.
Such actions can lead to less unnecessary data transmission, which is beneficial in terms of privacy and security (the people and societal component of PPP); having fewer form elements or steps also reduces emissions, as there are fewer individual elements to render to the screen (and the visitor will spend less time on-screen filling the forms in).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.20 Provide Accessible, Usable, Minimal Web Forms.
Description
Provide a mechanism within forms that allows automated tooling to assist with necessary data entry. Such assistance lets visitors spend less time on-screen and as such generates fewer emissions, in addition to less frustration from failed attempts that would otherwise escalate those emissions (while also conserving bandwidth).
For machine testability, the autocomplete attribute should be disabled on all form inputs apart from those likely to be utilized or assisted by a password manager (this is especially true where suggestions come not from the visitor's device but from a third party, and additional bandwidth or rendering costs are likely to be incurred). A list of such inputs can be identified from a common password manager and checked against each form for correctness.
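As an illustrative sketch (the field list below is a small, hypothetical subset of the HTML autofill field names a password manager would assist with), a checker might flag inputs that look autofillable yet carry no autocomplete token:

```python
from html.parser import HTMLParser

# Illustrative subset of fields a browser or password manager can autofill.
AUTOFILL_FIELDS = {"email", "name", "tel", "username", "password", "postal-code"}

class AutocompleteChecker(HTMLParser):
    """Flag inputs that look autofillable but carry no autocomplete attribute."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        if name in AUTOFILL_FIELDS and "autocomplete" not in attrs:
            self.flagged.append(name)

c = AutocompleteChecker()
c.feed('<input name="email"><input name="tel" autocomplete="tel">')
```

The email input is flagged as missing assistance; the telephone input passes.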
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.21 Support Non-Graphic Ways To Interact With Content.
Description
Provide a mechanism for those not using a screen as a primary method of browsing to have an equal browsing experience. This is primarily because screens are a significant source of energy use (and therefore emissions), but with the increase in speech-powered devices, the need for such support mechanisms has also grown in recent years.
Good semantics and high-quality content are the foundation of such support; for machine testability, however, checks such as having a speech stylesheet in place (which can help with issues around pronunciation and tone) and testing projects within text browsers for fundamental mechanical issues are critical to establishing a pass.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.22 Provide Useful Notifications To Improve The Visitor's Journey.
Description
Provide a mechanism to notify the visitor of issues that occur during the form-filling process, so they can be remedied (either before or after submission) before the data reaches its intended destination. Providing error detection and assistance can reduce screen time spent searching for answers.
Machine testability for error detection would require the submission of dummy data (incorrectly) into a form of required elements to test the durability of the process to see if instructional recommendations are given. Such prompts that either correct or guide the visitor can be deemed to pass the Success Criteria. If no guidance is offered or if forms are incorrectly labeled, leading to errors, this would qualify as a failure.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.23 Reduce the Impact of Downloadable or Physical Documents.
Description
Reduce the amount of excessive paper, ink, and mineral resource waste produced from the physical printing of documents by visitors to your products and services. By creating a customized print stylesheet you can optimize the use of these resources and improve the readability of the layout if the document is exported into a static format such as PDF.
This technique is most useful when it covers the many issues that physical formats suffer compared to their digital counterparts (such as the loss of interactivity), the necessity of color usage or content, breaks in flow and pagination, etc. Machine testing can examine the stylesheet for handling of such features as evidence of compliance with the Success Criteria (link expansion, paper-size support, CSS resets, etc.).
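These stylesheet checks could be sketched as simple pattern matches (heuristic only; a real audit would parse the CSS rather than pattern-match it):

```python
import re

def audit_print_css(css: str) -> dict:
    """Rough checks for common print-stylesheet features."""
    return {
        # is there a print media block at all?
        "print_media_block": re.search(r"@media[^{]*\bprint\b", css) is not None,
        # link expansion, e.g. a::after { content: " (" attr(href) ")" }
        "link_expansion": "attr(href)" in css,
        # @page rules controlling margins / paper size
        "page_rules": "@page" in css,
    }

css = '@media print { a::after { content: " (" attr(href) ")"; } } @page { margin: 2cm; }'
report = audit_print_css(css)
```

Each False entry in the report becomes a recommendation for the stylesheet author.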
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.23 Reduce the Impact of Downloadable or Physical Documents.
Description
Ensure that non-web documents are not only served in a range of formats compatible with (suitable for) the requesting device but are also compressed using a suitable algorithm, providing the asset to the visitor at the lowest bandwidth requirement possible, thereby improving multiple PPP variables and increasing performance.
Compression tests can be run against every document format to identify whether improvements can be made without losing so much quality that legibility degrades and recognition becomes an issue (in embedded images and media). The primary format given to visitors should be the one with the widest plugin-free compatibility for viewing within the browser (usually PDF).
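A basic compression test can compare a document's size before and after lossless compression; a minimal saving suggests the format is already internally compressed (true of most PDFs), while a large saving flags an optimization opportunity. A sketch using gzip from the standard library:

```python
import gzip

def compression_saving(payload: bytes) -> float:
    """Fraction of bytes saved if this document were served gzip-compressed.
    A saving near zero suggests the format is already compressed."""
    compressed = gzip.compress(payload, compresslevel=9)
    return 1 - len(compressed) / len(payload)

text_doc = b"lorem ipsum " * 1000      # highly repetitive text: compresses well
saving = compression_saving(text_doc)  # a large fraction of bytes saved
```

A tester could run this per format and recommend serving any document whose saving exceeds a chosen threshold with transfer compression enabled.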
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.23 Reduce the Impact of Downloadable or Physical Documents.
Description
Ensure that any document that is likely to be requested by visitors on multiple occasions is already compiled and generated at the source and available for download rather than generating a brand new asset upon each user request. This will reduce the emissions of having to repeat the same task over and over for unchanging content or materials that have little interaction potential.
For machine testability, identifying that large web assets are available from a static address such as a CDN is one method of meeting the Success Criteria; another is to scan scripts for generative code that creates web assets, to determine whether such content could be better served through precompilation. If this is the case, warnings and remedial action should be issued.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.23 Reduce the Impact of Downloadable or Physical Documents.
Description
Provide a mechanism for individuals to access downloadable or embeddable documents without the material being embedded within the page. Embedded content has an attributable carbon cost due to the rendering of the host application's processes as well as the content itself, and there is a chance visitors clicked a link in error.
Providing structural information about resources, including direct links to files in preference to auto-loading content, keeps the visitor in control and avoids the loading and rendering of unnecessary resources. Machine testing of such components can check for embedded elements that should instead be offered as clearly marked-up document descriptions with direct, user-activated URL links.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.24 Create a Stakeholder-Focused Testing & Prototyping Policy.
Description
Identify the way new features, product ideas, and user-interface components will be tested among various stakeholder groups, such as those with accessibility needs. The main location for such an outline will be a stakeholder-focused testing and prototyping policy; once implemented, a link to the policy should be publicly available.
Machine detection could use heuristics to identify key policies within the text; however, at a basic level, identifying that the policy exists (potentially within the footer of a website or application alongside other policies and legal documents), followed by further analysis of the headline elements of the document's key sections, should be enough to justify a pass.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.25 Conduct Regular Audits, Regression, and Non-Regression Tests.
Description
Produce a comprehensive, maintainable set of quality assurance checklists across a range of categories including, but not limited to, bugs, security, web performance, accessibility, and sustainability. By having these checklists in place, everyone involved in the creation process can closely monitor the application or website against each checkpoint to identify resolutions before, during, or after a product or service is launched.
These checklists should be created during ideation if possible but can also be machine-generated from a pre-determined set of lists relating to these topics, based upon evidence and research. Their content can potentially be further tested through automation; at a bare minimum, however, they must be utilized in-house to enact sustainable change. For machine testability, if the checklists are not publicly visible, internal access will be required to confirm they exist.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.25 Conduct Regular Audits, Regression, and Non-Regression Tests.
Description
Ensure that problems relating to the sustainability of a website can be picked up with more frequency and regularity. Within the scope of website and application testing, it is important to be actively monitoring for common failure points that build over time and upon being notified of said issues provide resolutions within a reasonable timeframe.
Machine testability for this implementation will be based on the mechanism used for testing. For instance, if a product or service provider chooses to simply run routine tests on a scheduled basis, this may be considered a pass as long as scans are frequent enough to be considered active as opposed to occasional (weekly being the widest acceptable margin).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.25 Conduct Regular Audits, Regression, and Non-Regression Tests.
Description
Provide a mechanism for eliminating any potential new flaws that may have been introduced into a website or application upon the release of a new version that will have additional updates or features that could break functionality if not implemented correctly. As such, enforcing a sustainability scan across a series of variables is critical.
For machine testability, an automated scan could be triggered on the publication of each new release; or, for more nuanced control (where active monitoring already exists and non-breaking features are being introduced), a scan could be triggered only when a major release is issued, as breaking changes are more likely to occur then.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.26 Incorporate Performance Testing Into Each Major Release-Cycle.
Description
Encourage a routine to improve web performance within websites and applications due to the established link between web performance optimization and digital sustainability. Running regular benchmarks and working from checklists (either prefabricated or built from scratch) will encourage a schedule to identify potential bottlenecks.
The content of these testing routines can potentially be further tested through automation; at a bare minimum, however, the checklist must be utilized in-house to enact sustainable change. For machine testability, if the checklist is not publicly visible, internal access will be required to confirm that it exists.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.26 Incorporate Performance Testing Into Each Major Release-Cycle.
Description
Provide a mechanism during the development process whereby businesses and individuals creating Web projects can verify that new work meets compliance targets for individual regulations. This technique is most useful when performed with each major milestone, as there is potential for feature-breaking (and compliance-breaking) changes to occur.
Within the scope of sustainability, this is increasingly important as there are explicit laws surrounding the subject in addition to expanded PPP remits such as accessibility, privacy, etc. In terms of machine testability the mechanics of an implementation may be more difficult than utilizing an automated checker, but wizard software can work through questions to help identify key issues.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.29 Incorporate Compatibility Testing Into Each Release-Cycle.
Description
Ensure that visitors are made aware of (or able to find out) the limitations of a product or service before issuing a support request that will involve creating new emission sources. This compatibility policy can be listed amongst other policies on a website, however, it must contain detailed information about any factor that may have reduced capability.
For machine testability, attempt to first establish that the policy exists and then try using heuristics (or identification of the headlines) to locate sections on what is both actively supported (those conditions tested upon), and those that are confirmed as unsupported (those conditions known to be broken but will not be fixed with reasons given). Ensure that a support method is also provided.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.29 Incorporate Compatibility Testing Into Each Release-Cycle.
Description
Provide a mechanism for introducing visitors to new features and capabilities within websites and applications upon a major release being triggered. This would include compatibility changes, any alterations that affect workflow, and bugs that have been resolved. It applies equally to both applications on upgrade paths and website redesigns.
Re-orientating users around breaking changes is crucial, as modifications could otherwise lead to errors in usage, increased technical support, problematic friction in usability or accessibility, or other issues impacting multiple PPP factors; many of these can be resolved through training, answered questions, and well-signposted information architecture.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.29 Incorporate Compatibility Testing Into Each Release-Cycle.
Description
Ensure that compatibility issues resulting from visitor constraints (such as operating system or browser age / connection speed, availability, or cost) are factored into the testing process at each new release. Creators must have processes in place whether through checklists that are pre-created or crafted from scratch to ensure that a regime is in place.
In terms of machine testability, some tools can provide virtualized emulations of certain operating systems and thereby load products and services to undertake tests (or screenshots) to examine compatibility. Data also exists regarding mobile data charges and connection speeds which can be used to emulate or identify the compatibility costs associated with projects.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 2.29 Incorporate Compatibility Testing Into Each Release-Cycle.
Description
Identify an existing web application and determine whether turning it into a progressive web application would be more beneficial than providing a native offering. This would need to take into account current web capabilities (such as Service Workers), methods of delivery (such as WASM), and the human aspect (audience requirements).
Machine testing can first examine the existing state of the application to determine how sustainably it performs and what technology stack was used to develop it; from there, data points can be used to estimate an approximate implementation cost for both a native endpoint and a highly optimized web application, and options can be presented.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.1 Identify Relevant Technical Indicators.
Description
Identify any technical indicators which may be of use within a sustainability budget, or those that can be immediately identified as exceeding a recommended level for an average page size, thereby indicating that the document should be split into two or more pieces to reduce the impact of the experience upon the visitor.
Factors that may be taken into account include an extremely long document requiring excessive scrolling (based on either the visual spacing or the number of DOM elements involved); the number of DOM elements itself, as too many can produce unwarranted rendering loads; and excessive HTTP requests, which can produce a lot of overhead.
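A sketch of such an indicator check follows (the budget thresholds are illustrative, not normative, and would be set by the project's own sustainability budget):

```python
from html.parser import HTMLParser

class PageBudget(HTMLParser):
    """Count DOM elements and external requests against example thresholds."""
    MAX_ELEMENTS = 1500   # illustrative budget values only
    MAX_REQUESTS = 50

    def __init__(self):
        super().__init__()
        self.elements = 0
        self.requests = 0

    def handle_starttag(self, tag, attrs):
        self.elements += 1
        attrs = dict(attrs)
        # crude request count: resources referenced directly in the markup
        if tag in ("img", "script", "iframe", "audio", "video") and attrs.get("src"):
            self.requests += 1
        elif tag == "link" and attrs.get("href"):
            self.requests += 1

    def over_budget(self) -> list:
        issues = []
        if self.elements > self.MAX_ELEMENTS:
            issues.append(f"{self.elements} DOM elements (budget {self.MAX_ELEMENTS})")
        if self.requests > self.MAX_REQUESTS:
            issues.append(f"{self.requests} requests (budget {self.MAX_REQUESTS})")
        return issues

p = PageBudget()
p.feed("<div>" * 2000 + "</div>" * 2000)  # a deliberately oversized page
```

Exceeding either threshold would suggest the document be split or simplified.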
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.1 Identify Relevant Technical Indicators.
Description
Distinguish different technologies and the role they play, identifying which are the most resource-intensive and providing either a mechanism to reduce the intensity of heavy payloads or to load-balance the most demanding aspects of these features. It's important to consider that in terms of rendering, data transfer is not the only consideration.
In terms of machine testability, calculating the percentage of HTML, CSS, JavaScript, images, media, etc, isn't enough. It's critical to weigh and calculate the energy requirements of each aspect of those languages on an atomic level to identify potential rendering issues from the browser as these will impact upon hardware (CPU, GPU, RAM, and other variables) having PPP implications.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.2 Minify Your HTML, CSS, and JavaScript.
Description
Provide a mechanism in which your production-ready source code can have unnecessary data such as code comments, whitespace data, and machine-detectable redundancy removed to deliver the smallest file payload possible to the visitor. This will improve both the speed of your site and lower screen usage (wait) time.
In terms of machine testability, the functions of a minification tool can be replicated for the languages HTML, CSS, and JavaScript using scripts and these can identify improvements to be made. Function names can be uglified (shortened to the smallest value) to reduce the size of the payload further and reporting tools can make recommendations based upon feedback.
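A naive sketch of the comment and whitespace stripping a minifier performs is shown below (production minifiers are parser-based and handle many more cases, safely):

```python
import re

def minify_css(css: str) -> str:
    """Naive CSS minifier sketch: strips comments and collapses whitespace.
    Do not use on production code; real minifiers parse the stylesheet."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)   # remove comments
    css = re.sub(r"\s+", " ", css)                    # collapse whitespace runs
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)      # drop space around punctuation
    return css.strip().replace(";}", "}")             # last semicolon is optional

src = """
/* brand colours */
body {
    color: #222;
    margin: 0;
}
"""
out = minify_css(src)
```

Comparing `len(src)` with `len(out)` gives the saving a reporting tool could surface.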
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.3 Use Code-Splitting Within Projects.
Description
Provide a mechanism where modularization can occur to reduce the overall payload of applications, libraries, and frameworks for the Web. By using code-splitting and modules where isolating code can take place (machine-testability can detect this), large components can be successfully broken down into pieces that will be delivered as required.
This technique is most useful when dealing with significant-sized or complex pieces of production code that may not be required to be delivered in a single volume. If functionality will be used on-demand (for example), the payload to activate and run that functionality could be fetched and run when it is needed instead of the entire application's library being gathered upon page load.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.4 Apply Tree Shaking To Code.
Description
Eliminate redundancy within an application or website that may have been introduced previously or through error. By using browser development tooling such as DevTools coverage, or techniques like tree shaking, any code which is no longer associated with functionality can be identified as potentially fit for removal (always verify this is accurate).
For machine testability, redundant (orphaned / unused) code can be identified by its lack of association with existing components within the web page or application. Consideration will also need to be given to components that may be generated mid or post-render in addition to styles that only trigger when a certain state occurs (such as through hover or the target pseudo selector).
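A simplified sketch of one such check, detecting class selectors with no matching element (runtime-added classes and state-dependent selectors, per the caveats above, need additional care):

```python
import re
from html.parser import HTMLParser

class ClassCollector(HTMLParser):
    """Gather every class name used in the markup."""
    def __init__(self):
        super().__init__()
        self.classes = set()

    def handle_starttag(self, tag, attrs):
        self.classes.update((dict(attrs).get("class") or "").split())

def unused_class_selectors(html: str, css: str) -> set:
    """Class selectors declared in the CSS but used by no element.
    Caveat: classes added by scripts at runtime will be missed."""
    collector = ClassCollector()
    collector.feed(html)
    declared = set(re.findall(r"\.([A-Za-z_][\w-]*)", css))
    return declared - collector.classes

leftover = unused_class_selectors(
    '<div class="card active"></div>',
    ".card{color:red} .active{font-weight:bold} .legacy-banner{display:none}",
)
```

The `.legacy-banner` rule is reported as a removal candidate, to be verified by a human before deletion.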
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.5 Ensure Your Solutions Are Accessible.
Description
Provide a mechanism for ensuring that a website meets a baseline level of accessibility compliance as recommended by the Web Content Accessibility Guidelines (WCAG). Having an inclusive product or service is, at this point, a legal requirement in most places and additionally meets the People and societal factors of PPP, so it becomes a sustainability compliance target.
Machine testability for accessibility already exists to some extent through automated testing tools, and this can potentially be integrated into a custom white-label product (or created from scratch) if required, by identifying the criteria set out in WCAG A-AAA and seeking machine testability for those guidelines and success criteria (as is being done here with the WSGs).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.5 Ensure Your Solutions Are Accessible.
Description
Provide a mechanism to identify, firstly, whether ARIA support is required for assistive devices and, secondly, if it is required, to implement that support if and only if it will enhance the accessibility of the product or service. ARIA should only be used for code enrichment if no native alternative exists.
For machine testability, the use of heuristics to examine the code structure and if certain components require additional semantic clarification is technically possible. Certain HTML elements for example will have built-in semantic value and require no contextual clarifications while custom elements or components of complexity (such as those used in applications) may require enrichment.
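For instance, a heuristic might flag role attributes that merely restate an element's implicit semantics (the role map below is a small, illustrative subset; real implicit-role rules are more nuanced):

```python
from html.parser import HTMLParser

# Illustrative subset: elements whose native semantics already carry the role,
# so restating it adds bytes without adding meaning.
IMPLICIT_ROLES = {"button": "button", "nav": "navigation",
                  "main": "main", "form": "form"}

class RedundantAriaChecker(HTMLParser):
    """Flag role attributes that repeat an element's implicit role."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        role = dict(attrs).get("role")
        if role and IMPLICIT_ROLES.get(tag) == role:
            self.flagged.append(f'<{tag} role="{role}">')

c = RedundantAriaChecker()
c.feed('<button role="button">Save</button><div role="button">Fake</div>')
```

The native button is flagged as redundantly enriched, while the div (which genuinely needs the role if it must act as a button) is not.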
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.6 Avoid Code Duplication.
Description
Provide a mechanism within projects to eliminate redundancy within your coding methodology. There will be occasions where the same solution is required for multiple events and in preference to repeating the code to achieve the same effect, referencing the code to perform on each occasion is more optimal as it reduces duplication and repetition.
In terms of machine testability, identifying a coding methodology within languages like CSS can be easily accomplished by the way naming schemes are formed (such as the BEM pattern). In terms of repeat coding, using DRY, repeated CSS property and value pairs (or in JavaScript, code that repeats the same action) can be identified and flagged for rearrangement.
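The repeated-declaration check could be sketched as follows (a naive pass over declaration blocks; shorthand properties, cascade order, and specificity are ignored):

```python
import re
from collections import defaultdict

def repeated_declarations(css: str, threshold: int = 2) -> dict:
    """Find property:value pairs recurring across rules at least `threshold`
    times; each is a candidate for a shared class or custom property."""
    counts = defaultdict(int)
    for block in re.findall(r"\{([^}]*)\}", css):   # each declaration block
        for decl in block.split(";"):
            decl = re.sub(r"\s+", "", decl)          # normalize whitespace
            if decl:
                counts[decl] += 1
    return {d: n for d, n in counts.items() if n >= threshold}

css = ".a{margin:0;color:red} .b{margin:0} .c{margin:0}"
dupes = repeated_declarations(css)
```

Here `margin:0` appears three times and is flagged for consolidation, in line with the DRY approach described above.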
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.
Description
First identify any third-party components within a website or application, then analyze that feature as if it were a first-party product or service (against the WSGs) for sustainability. If the product or service is determined to be highly impactful in a negative way, the third-party should be replaced or removed; otherwise, it can be optimized, or stay as-is.
As third-party components are hosted externally, machine testing these elements should involve isolating these components and testing them as separate from the product or service. This can be factored into any decision-making regarding inclusion as highly performant and sustainable third-party materials will ultimately be low impact (and the opposite is true for others).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.
Description
Provide a mechanism to delay third-party content from loading to the screen until the visitor has requested it. Because third-party content is sourced from outside the origin domain, the sustainability impacts of the third-party code are outside the control of a project, and thus click-to-load delay screens using the import-on-interaction pattern are critical.
The pattern used to either switch in the third-party content or load it on-demand can be machine-identified and if common third-party library resources are identified as loading (or leaking) upon a visit, this can and should be flagged up as a failure of the success criteria in functioning upon the visitor request (as such external requests should be within the visitor control).
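Detecting scripts that load from a foreign origin in the initial markup can be sketched as follows (dynamically injected scripts would require a rendered page or network log to catch; the domains shown are hypothetical):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyScriptChecker(HTMLParser):
    """List script URLs served from outside the first-party origin; each is a
    candidate for the click-to-load (import-on-interaction) pattern."""
    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).hostname   # None for relative (first-party) URLs
        if host and host != self.first_party:
            self.flagged.append(src)

c = ThirdPartyScriptChecker("example.org")
c.feed('<script src="/app.js"></script>'
       '<script src="https://cdn.widgets.invalid/embed.js"></script>')
```

The external widget script is flagged as loading at visit time rather than upon visitor request.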
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.
Description
Reduce the overhead of libraries and frameworks which may be used by the product or service but could be offered through lighter or less production-heavy alternatives. This can have significant sustainability benefits: replacing a heavyweight framework of which only a small proportion of features are used with one potentially a fifth of the size could not only improve performance but also reduce the rendering load.
This will require testability tooling to have both a library of existing frameworks and libraries - including smaller single-purpose ones (with the functionality they contain) and the ability to identify within code the feature set that is being utilized by a product or service. Through isolating in-use capabilities, better recommendations can be made which could reduce the project payload.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.
Description
Identify the location of website content and determine the most suitable pathway for it to be consumed sustainably. With the rise in content being hosted on third-party blogging platforms, there is a risk of significant content loss if the platform disappears and therefore a need for self-hosting exists (especially to maintain control of sustainability impacts).
For machine testability, identifying the source of the content is a key priority, then determining the impact of any third-party (the risk of content loss along with any sustainability impacts through WSG testing). Finally, this should be weighed up against the impact of self-hosting (and any potential negative consequences such as content moderation requirements that may occur).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.
Description
Provide a mechanism for interacting with third-party products and services from within your own project without relying on a third-party solution, which will inevitably have sustainability impacts bound to that third-party service. Such custom, reusable objects can serve either a single function or multiple functions.
Identification of third-party solutions can be found through heuristics of source code and recommendations can be made to produce custom first-party solutions for sustainably impactful services. Additionally, custom solutions can often be identified based on either the goal they aim to achieve or the label they are given within HTML ID or class names that can be identified.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.
Description
Provide visitors with the optimum level of control over sustainability impacts by not just allowing them to identify and load third-party controls upon click but to also control third-party services, libraries, and tooling used by the product or service (even at load) via a preference panel where individual third-parties (which may be impactful) can be refused access.
This mechanism if using a commonly accepted scheme could be detected by a machine. If the preference is presented upon load and not set by default and if the visitor has the option of selecting individual services (one at a time), accepting all, and denying all, then it can be deemed that the functionality is working as expected (as long as third-party services obey that scheme).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.8 Use HTML Elements Correctly.
Description
Ensure that HTML uses accurate semantics (Semantic HTML). Use the correct semantic element for the correct purpose and avoid common syntax mistakes that could be easily correctable. In terms of web sustainability it is important to have code that remains as likely to stay functional in the future as possible.
For machine testability, examining code for correct element use is a primary step. Other features found in validation services such as ensuring elements are closed correctly and that HTML entities are correctly marked up help avoid rare issues that might occur during the rendering process such as web browsers accidentally interpreting data incorrectly.
Examples
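An illustrative fragment: semantic elements describe the role of each region where generic containers would not.

```html
<!-- Semantic elements convey document structure that generic divs do not -->
<article>
  <header><h2>Post title</h2></header>
  <p>Body copy…</p>
  <footer><time datetime="2024-01-01">1 January 2024</time></footer>
</article>
```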
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.8 Use HTML Elements Correctly.
Description
Provide a mechanism for eliminating further redundancy from source code which may not be strictly required for semantic reasons but can help reduce the amount of data being transferred and thereby improve web performance. This can be done by removing any optional code from a document that will not affect the rendering of the page.
This technique is most useful when subscribing to performance budgets and when reaching the smallest possible file size is critical to meeting PPP targets. Such optional code can be identified by machine, and recommendations for where optimization can occur can be provided, though (as with minification) there may be occasions in non-production environments where retaining the data is preferable.
Examples
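As an illustrative example, the HTML specification allows certain closing tags to be omitted without affecting rendering; both fragments below produce an identical DOM, but the second transfers fewer bytes.

```html
<!-- With optional closing tags -->
<ul>
  <li>One</li>
  <li>Two</li>
</ul>

<!-- Without: </li> is optional per the HTML specification -->
<ul>
  <li>One
  <li>Two
</ul>
```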
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.8 Use HTML Elements Correctly.
Description
Identify and eliminate any non-standard coding practices, including but not limited to proprietary additions included by web browsers to test new functionality before potentially including it fully within the web browser (this isn't so common today, but remnants still exist). Because older code is often less optimized for performance, it can take longer to render and thrash hardware, causing higher emissions, so it is worth resolving.
For machine testability, lists of non-standard syntax are available within common specifications and can be matched against source code. In addition, browser-specific code can be identified by its prefix or via lists of techniques and hacks used to resolve browser bugs. If replacements for these techniques can be offered, provide them; otherwise, recommend removal from your code.
Examples
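A minimal sketch of such detection, assuming (for illustration) that non-standard code is signalled by vendor-prefixed CSS properties:

```javascript
// Hedged sketch: scan a CSS string and report each vendor-prefixed
// property found, as a proxy for non-standard syntax.
const PREFIXED = /(?:^|[{;\s])(-(?:webkit|moz|ms|o)-[a-z-]+)\s*:/g;

function findPrefixedProperties(css) {
  const found = new Set();
  let match;
  while ((match = PREFIXED.exec(css)) !== null) {
    found.add(match[1]); // e.g. "-webkit-transform"
  }
  return [...found];
}
```

A real tool would extend this with per-language lists of deprecated and proprietary syntax drawn from the relevant specifications.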
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.9 Resolve Render Blocking Content.
Description
Provide a mechanism for JavaScript (and, if possible, stylesheets) to be loaded asynchronously or deferred so they do not act as render-blocking events that delay the loading of content. This lessens the initial impact on visitors' hardware and thus can have PPP and web performance benefits which should be taken into account.
This technique is most useful when applied to all materials that run upon page load. In terms of testing, scripts can be identified by the relevant attribute being provided within the HTML code. If the attribute is not provided, guidance can be offered to question whether this was intentional, or machine testing could analyze the code to verify whether an issue will occur.
Examples
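An illustrative fragment (file paths are hypothetical): `defer` downloads in parallel and executes after parsing, `async` executes as soon as the file is fetched, and a `media` attribute keeps a non-matching stylesheet from blocking rendering.

```html
<!-- Neither script blocks HTML parsing -->
<script src="/js/app.js" defer></script>
<script src="/js/analytics.js" async></script>
<!-- Print styles do not block first paint on screen -->
<link rel="stylesheet" href="/css/print.css" media="print">
```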
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.9 Resolve Render Blocking Content.
Description
Provide a mechanism for assets to be loaded at the correct time in the rendering process. With the ability to use preload, prefetch, and preconnect, we can take resources such as web fonts and scripts that are necessary for the product or service to be displayed successfully and ensure they are prioritized over other web assets.
Identifying that the asset chosen for this mechanism is correct, linked to correctly, and has the right mechanism in place is of the highest priority for machine testers, ensuring that the rendering process is not interrupted. If a low-priority object is given high priority, it could delay the website or application from reaching the visitor and increase screen time.
Examples
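An illustrative fragment using resource hints (URLs are hypothetical):

```html
<!-- Warm up the connection to a third-party origin used early in rendering -->
<link rel="preconnect" href="https://fonts.example.com" crossorigin>
<!-- High priority: a font the first paint depends on -->
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
<!-- Low priority: fetched idle, for a likely next navigation -->
<link rel="prefetch" href="/js/next-page.js">
```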
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.10 Provide Code-Based Way-Finding Mechanisms.
Description
Provide a mechanism for social networks and search engines to first identify your website or application and then to be able to showcase it successfully within their products and services. Because each social network and search engine has its own requirements, it will involve a combination of different metadata and semantic markup to achieve results.
For machine testability, toolmakers will need to maintain a list of the most popular products and services with which they wish to maintain compatibility. From there, they will need to work through those providers' requirements to ensure that the expected patterns are included within pages (and that they match the providers' expectations so that results ensure visitor findability).
Examples
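For example (values are illustrative), Open Graph and Twitter Card metadata are two commonly consumed schemes for showcasing pages as link previews:

```html
<meta property="og:title" content="Page title">
<meta property="og:description" content="A short summary of the page.">
<meta property="og:image" content="https://example.com/preview.png">
<meta name="twitter:card" content="summary">
```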
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.10 Provide Code-Based Way-Finding Mechanisms.
Description
Provide a mechanism for maintaining findability within search engines while also attempting (successfully or otherwise) to reduce the amount of traffic from bad actors or unethical / unsustainable products that may impact your wider projects and service users. This is achieved using the robots.txt document.
This technique requires that the robots.txt file be present within the base directory of a website and be formatted according to the commonly agreed-upon Robots Exclusion Standard. For the Success Criteria, listing bad actors and unethical / unsustainable products is considered optional; however, if such a list can be maintained and adhered to, it is worthy of inclusion.
Examples
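An illustrative robots.txt (the blocked user-agent name is hypothetical):

```text
# Allow well-behaved crawlers and point them at the sitemap
User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml

# Optionally refuse a known unwanted crawler
User-agent: BadBot
Disallow: /
```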
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.10 Provide Code-Based Way-Finding Mechanisms.
Description
Provide mechanisms to assist the visitor with finding and navigating through content within the page or application. These can come in the form of links that allow you to bypass blocks of content, which is especially useful when large areas of navigation or other content exist. They could also be keyboard shortcuts that activate certain features within a complex application rather than requiring multiple clicks.
Machine testability for such features can identify events within JavaScript or features that use common code patterns recognized as helpful signposting. If such features are present, the success criteria can be marked as compliant; if none are found, it could be an indicator, especially in complex websites or applications, that such features are needed.
Examples
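An illustrative skip link, one of the common patterns such testing can recognize: it lets keyboard users bypass repeated navigation and jump straight to the main content.

```html
<body>
  <a href="#main" class="skip-link">Skip to main content</a>
  <nav><!-- large block of navigation links --></nav>
  <main id="main"><!-- page content --></main>
</body>
```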
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.11 Validate Form Errors and External Input.
Description
Ensure that input types correctly match the type of content being placed within them, that content types like passwords are handled in a way that visitors can easily reveal them, and that the pattern attribute is used correctly to help reduce errors during data entry. In doing so, problematic friction encountered during form filling can be reduced, as can erroneous submissions.
Machine testability for this criterion will involve analyzing the components of forms to ensure they are well formed and that (for example) any regular expressions used in pattern attributes will not produce erroneous results. It is also important that form functionality performs well on mobile devices as well as desktop, so consideration must be given to the choice of input for each task.
Examples
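An illustrative fragment (field names and the regular expression are hypothetical): matching input types to content triggers appropriate mobile keyboards, and the pattern attribute constrains the accepted format.

```html
<input type="email" name="email" autocomplete="email" required>
<input type="tel" name="phone" autocomplete="tel">
<!-- pattern constrains entry to five digits; inputmode hints a numeric keypad -->
<input type="text" name="postcode" pattern="[0-9]{5}" inputmode="numeric">
```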
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.11 Validate Form Errors and External Input.
Description
Provide a mechanism for accessibility tooling to be able to accurately describe the content of form features and to provide visual aids for visitors aiming to identify what information is required to be entered. This technique requires all interactive elements within the form to have an associated label to describe the purpose and / or role of the item.
Labels should be presented directly beside the element in question to imply association, and if multiple associations are required, a grouping element with a label can be provided. Machines should be able to identify the relationship between the label and the object through the syntax, and if objects without labels exist, a failure can be flagged.
Examples
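An illustrative fragment: every control has a programmatically associated label, and related controls are grouped under a legend.

```html
<fieldset>
  <legend>Delivery address</legend>
  <label for="street">Street</label>
  <input id="street" name="street" type="text">
  <label for="city">City</label>
  <input id="city" name="city" type="text">
</fieldset>
```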
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.11 Validate Form Errors and External Input.
Description
Provide a mechanism in which visitors can easily take content from third-party sources and use it within your product or service. Techniques such as the ability to paste into forms or the ability to drag and drop can act as shortcuts to avoid retyping or recreating (using system resources) which may be energy-intensive or time-intensive for the visitor.
Identifying mechanisms that may prevent the ability to import third-party content such as blocking pasting content or disabling the ability to drag and drop should be detected and flagged (unless a reason for this can be justified within the code). Mechanisms that aid the ability to import third-party content such as import or paste buttons should be actively encouraged.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.12 Use Metadata Correctly.
Description
Ensure that the minimum required features are present to render a website or application correctly. These include a doctype and a series of core HTML elements. While a page can technically render without them, it is considered bad practice not to include these features.
For machine testability, identifying a doctype (and the version of HTML being rendered), plus ensuring that the necessary base HTML elements are present, is script-detectable. This can be done via the validation process, manually or automated, to ensure pages render correctly and avoid triggering quirks mode (which will affect the visual rendering of the website or application).
Examples
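The minimum scaffolding that keeps browsers out of quirks mode, shown as an illustrative fragment:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Page title</title>
  </head>
  <body></body>
</html>
```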
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.12 Use Metadata Correctly.
Description
Provide the necessary metadata within the head of your website or application to ensure that search engines can index your content correctly. You can use several different mechanisms to achieve this as several different formatting schemes have been provided over the years and they have varying levels of support by different search providers.
To enhance the findability of your content (which reduces time wasted by visitors trying to locate you), having a well-formatted series of metadata is critical. As such, testing should focus on determining whether basic meta tags are used or whether another format is being used to serve data. Once detected, identify whether the tags are recognized; if they are not in common use or have been deprecated by a provider, it is worth requesting their removal for the data savings.
Examples
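An illustrative head fragment for search indexing (values are hypothetical):

```html
<meta name="description" content="A short, accurate summary of the page.">
<link rel="canonical" href="https://example.com/page">
<meta name="robots" content="index, follow">
```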
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.12 Use Metadata Correctly.
Description
Provide mechanisms for search engines and social networks (and sometimes even visitors and web browsers!) to take context-rich content from your website or application and re-use it for the benefit of your product or service elsewhere. Structuring your content successfully can take place (like metadata) using one of several formats.
Because microdata uses hooks that attach to existing HTML, making it easy to identify (for search engines and third parties), it is also easy to identify for machine testability. Using a pattern library of these structural features, you can not only show visitors how content could be used but also, using heuristics, identify other content that might benefit from it.
Examples
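An illustrative fragment of schema.org microdata attached to existing markup (values are hypothetical):

```html
<article itemscope itemtype="https://schema.org/Article">
  <h2 itemprop="headline">Article title</h2>
  <span itemprop="author">Jane Doe</span>
  <time itemprop="datePublished" datetime="2024-01-01">1 January 2024</time>
</article>
```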
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.13 Adapt to User Preferences.
Description
Provide mechanisms through CSS that adhere to the visitor's preferences regarding how they may choose to browse a website or application. While some queries that exist in the language will hold little sustainability value, others could have PPP benefits through accessibility (societal factors), or environmental (reducing hardware or data usage).
Each of the preference queries can be machine-identified through scripts and can therefore be tested, firstly for browser support, and secondly to confirm that the project provides some kind of environmental benefit for honoring the query that serves the visitor and / or the ecosystem. The value of applying such queries could be measured by triggering or emulating them.
Examples
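An illustrative CSS sketch of two such preference queries; the second assumes (as is commonly reported) that darker palettes can reduce power draw on OLED screens:

```css
/* Honor a visitor's request for reduced motion */
@media (prefers-reduced-motion: reduce) {
  * { animation: none; transition: none; }
}

/* Honor a dark-mode preference */
@media (prefers-color-scheme: dark) {
  body { background: #111; color: #eee; }
}
```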
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.14 Develop a Mobile-First Layout.
Description
Ensure that your product or service can be classified as mobile-first and responsively designed, to support the widest range of device types and at least some degree of visual compatibility with your website or application. While there are many ways to approach this task, this technique focuses on connection speeds and potential window sizes.
Because connection speeds can vary based on a whole range of factors (and cannot be aligned with averages, due to location, mobile versus home use, connection quality, etc.), multiple speed ranges should be machine-tested against. The same can be said for window sizes: while certain resolutions are common to certain devices, browsers can be resized and unusual device types exist, so CSS fluid scaling and visual breakpoints should be utilized.
Examples
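An illustrative mobile-first sketch: base styles target small screens, and enhancements are layered on at wider breakpoints (the class name and breakpoint value are hypothetical).

```css
/* Base styles serve the smallest screens first */
body { font-size: 1rem; }
.sidebar { display: none; }

/* Enhancements apply only once the viewport is wide enough */
@media (min-width: 48em) {
  .sidebar { display: block; }
}
```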
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.14 Develop a Mobile-First Layout.
Description
Provide features within your product or service only if they are supported and, if not, provide alternatives. This can be done using feature detection and progressive enhancement, which should be prioritized over graceful degradation (as it is better to add useful extras than to patch critical but broken content).
Machine testability for feature testing should identify any code within the product or service that relies upon newer functionality lacking a level of web browser support (either in competing products or older versions); testing should occur and errors should be handled. Furthermore, if technologies are not supported, the fallback mechanism should allow a basic version of the project to run.
Examples
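A minimal sketch of feature detection: check for a capability before using it, and fall back when it is absent. The helper and the browser-side usage comment are illustrative.

```javascript
// Hedged sketch: true only if the named capability exists on the given scope.
function hasFeature(scope, name) {
  return typeof scope === 'object' && scope !== null && name in scope;
}

// Illustrative browser usage:
// if (hasFeature(window, 'IntersectionObserver')) {
//   lazyLoadImages();      // enhancement
// } else {
//   loadAllImages();       // basic fallback still works
// }
```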
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.14 Develop a Mobile-First Layout.
Description
Provide a mechanism for identifying when the project might be at its most resource- or energy-intensive (at the consumer level, the system level, or both), and then decide whether to delay or alter when a heavy script or operation occurs, performing it when there are fewer visitors or when the user is engaged in less hardware-intensive activity.
For machine testability, this could be particularly tricky to implement as it will rely on data to which the project owner will need to gain access such as when visitor numbers are at their lowest (or when the task can be achieved at its quickest). This may require internal access, but if granted the use of such data could allow for redesigning the site to perform better during busy periods.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.14 Develop a Mobile-First Layout.
Description
Provide a mechanism for interacting with a website or application using less conventional methods that often have a reduced overall impact (such as a reduced energy requirement). These indirect methods, such as syndication feeds or the browser's reader view, can even eliminate the heavy rendering impact that can affect hardware.
This technique is most useful when it can be easily recognized by visitors and used instead of visiting the main website or application. In terms of machine testability, if these low-impact techniques use a common pattern that can be easily identified within the source code, then they can naturally be identified as such and marked as meeting the criteria.
Examples
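One such easily recognized pattern is a feed advertised from the document head (the feed path is hypothetical):

```html
<!-- Lets feed readers discover the content without a full page render -->
<link rel="alternate" type="application/rss+xml" title="Site feed" href="/feed.xml">
```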
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.15 Use Beneficial JavaScript and Its APIs.
Description
Improve the quality of JavaScript code by examining the contents and identifying any issues that could be deemed a matter of sustainability which could trigger a large load upon hardware resources (and thus put a strain on battery charge cycles). In addition to this, matters of accessibility and performance with such code can be considered.
To meet the success criteria, rewriting for performance should only be done if the act does not cost more in effort (and creator impact) than it gains in impact. For machine testability, it is critical that tooling uses the resources at its disposal (libraries, patterns, even AI) to heuristically identify any issues that may require resolution within the project's source code.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.16 Ensure Your Scripts Are Secure.
Description
Scan the website or application's code for issues that may otherwise potentially leave a product or service vulnerable to exploitation. This will not only include first-party code that is created by the project owners but third-party tooling imported. Any third-party library or framework found to contain harmful code should be removed before production.
Because this can occur on both the client side and the server side, internal access may be required if server-side scanning is to be included within machine testability. However, within the scope of the success criteria, if internal access is not available or permitted, testing the code using known methods and scanning client-side code for harmful techniques should help with passing.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.17 Manage Dependencies Appropriately.
Description
Provide a mechanism for identifying whether a library or framework is currently in use and, if not, remove said tooling and its unused dependencies from the production code. This will help reduce the PPP burden of the website, as less bloat will reach the visitor, which can have a large impact if they are on a low-powered device or a restricted data plan.
For machine testability, internal access will be required to determine if web developers or creators will require specific tooling within a project. If internal access is given, the package.json file is an ideal place to locate the packages being requested and these can be compared against the public-facing website to identify redundancy in the toolchain process.
Examples
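A minimal sketch of the comparison step, assuming internal access: the declared dependencies (as read from package.json) are checked against the modules actually imported, and anything unreferenced is flagged as a removal candidate. Package names are illustrative.

```javascript
// Hedged sketch: flag declared dependencies that no module actually imports.
// `declared` mirrors the "dependencies" map of a package.json;
// `importedModules` would come from scanning the codebase's import statements.
function findUnusedDependencies(declared, importedModules) {
  return Object.keys(declared).filter(name => !importedModules.includes(name));
}
```

A real tool would also need to account for indirect usage (plugins, CLI-only tools, transitive requirements) before recommending removal.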
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.17 Manage Dependencies Appropriately.
Description
Provide a mechanism for determining how much of a library or framework you require for your website or application to function, and then request just that modularized segment to reduce the overall payload and the load on the visitor's hardware during rendering. This is especially critical for third-party scripting but is also useful for CSS.
Machine testability should attempt to identify when third-party libraries are present within a codebase and whether the framework or library supports loading a lightweight or modular version of its features (thereby only loading what you require, when you require it). If so, ensure that the project in question does this rather than loading a suboptimal "fully loaded" version of the library.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.17 Manage Dependencies Appropriately.
Description
Ensure that the deliverables are current and up-to-date, thereby providing any necessary security patches or bug fixes that are required to remain operational. This is especially true when a website or application is dependent on third-party libraries or frameworks and has a complex toolchain. As such, verifying the maintenance status of work is critical.
To machine-test or verify the dependency chain of a project, the source code should identify (by filename or URL) each dependency and its version. If, during tree shaking or the production process, all indications of what third-party code has been used have been removed, internal access will be required to inspect the package.json (unless heuristic fingerprinting of packages is possible).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.18 Include Files That Are Automatically Expected.
Description
Ensure that the non-HTML files expected to be located within the base directory of a website can firstly be found and secondly are correctly formatted (using the right syntax) and match the expectations of the products or services that require them, benefiting the visitor with a better user experience, increased accessibility, and sustainability.
For machine testability, tooling should aim to identify that the listed files are provided (and if not, flag them, justifying the need for their inclusion). If they are included, they should be examined to ensure they are semantically correct, especially if they are formed using a strict language such as XML. If they fail to validate correctly, the errors should constitute a failure to meet the success criteria.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.19 Use Plaintext Formats When Appropriate.
Description
Provide a mechanism for adding useful contextual information to a website or application within recognized locations in plaintext format (so that it doesn't impact the rest of the product or service). These standardized formats each have a defined beneficial purpose and are considered to be low-impact (sustainably speaking) so are safe to include.
This technique is most useful when the assets are formatted as per their specifications for readability, as this is how individuals and machines will be expected to understand them. For machine testability, using heuristics to scan for recognized features within the text should help you identify the instructions or any features of note that could be weighed in calculations.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.20 Avoid Using Deprecated or Proprietary Code.
Description
Identify any deprecated code that is no longer recommended for use within specifications. Because older code is often less optimized for performance (browser makers often cease to maintain abandoned and deprecated features), it can take longer to render and thrash hardware, causing higher emissions, so, as a general rule, it is worth resolving.
For machine testability, deprecations in languages can also be found within specifications from providers like the W3C. Documentation providers like MDN also provide extremely thorough coverage of syntax that has been deprecated such as within HTML, CSS, and JavaScript. If replacements for any techniques can be offered, provide them, otherwise recommend removal from your code.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.20 Avoid Using Deprecated or Proprietary Code.
Description
Identify technologies and web standards that may be in use but have been superseded by newer technologies and web standards. In certain cases, the standards in use may still be actively supported by web browsers and if there is a sustainability reason to retain the feature, continue. Otherwise, updating the code should be considered.
As with outdated and proprietary code, web browsers tend to stop optimizing for outdated practices; as such, replacing superseded technologies with their newer equivalents may have inherent sustainability benefits. For machine testability, standards providers list current web standards, and these can be matched against in-use technologies (which can often be detected in code).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.21 Align Technical Requirements With Sustainability Goals.
Description
Provide a mechanism of choice when creators decide how to build their product or service. This technique aims to identify how they created their product or service (if possible) and to make recommendations based on the sustainability of such methods. Build steps and tooling add complexity to a project, and this should be weighed against other factors (such as individual / team ability) to enable sustainable creation toolchains.
For machine testability, internal access may be required if no traces of the creation tool have been left in the production code. If the production code does however contain fingerprints of the tool that created it, recommendations can be made to prioritize static over dynamic and flat-static over generated-static as reducing the processing effort on servers and client machines is meaningful.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.21 Align Technical Requirements With Sustainability Goals.
Description
Provide a mechanism for testing third-party plugins, extensions, and themes (if such additions are used within a product or service's creation process) for any sustainability impacts they may have. These are often included within CMS software and are external assets that can add overheads to the website or application being rendered.
For machine testability, many CMS products provide trace details of included features within the source code of a page (as conditional comments) due to these having to be loaded as third-party resources. Internal access may be required to get a full picture of everything being loaded. Third-party resources should be tested against the WSGs separately to identify sustainability issues.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.22 Use the Latest Stable Language Version.
Description
Ensure that the product or service is making use of the latest version of the chosen syntax language. As with keeping dependencies up-to-date, having the latest version of a syntax language can have sustainability benefits in terms of performance and security enhancements as well as useful, optimized new features so it's worthwhile.
For machine testability, it will be difficult to verify that the latest version of a syntax language is being used because, for security reasons (to avoid exploitation), most servers will not give out such information. As such, internal access will be required, or a mixture of feature detection, source code examination, and potentially asking the user of the tooling questions to establish versions.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet Success Criteria within 3.23 Take Advantage of Native Features.
Description
First identify and then determine whether custom JavaScript functions could be replaced with a more optimal, established native API call. This also applies to creating custom elements rather than using established features native to the browser. With the evolution of JavaScript, the opportunity to optimize code with newer, cleaner techniques occurs regularly.
It is preferable to measure both the newer and older methods of implementation to identify whether one provides a more optimized implementation (via performance and sustainability metrics) before replacement. This technique is also most useful when considering whether a custom component provides additional functionality that the native component does not, and whether such features weigh up beneficially against the cleaner native implementation.
Examples
Tests
Procedure
Expected Results
Each of the below can be shown or hidden by clicking on the technique you wish to display.
Applicability
This technique is Advisory to meet 4.1 Choose a Sustainable Hosting Provider.
Description
Ensure that anyone who wishes to compile an impact statement (as a consumer of your services) based on conditions such as PUE, WUE, and CUE can calculate the necessary energy utilization of a provider using the available data and any variables required. Making important metrics publicly visible to both customers and visitors increases awareness, reduces the potential for greenwashing, and allows service providers to report with proof of compliance.
For machine testability, this would require each customer account (of a hosting provider) to be equipped with a publicly visible stats page indicating resource utilization, including useful information on variables like CPU, GPU, RAM, and data usage, plus a calculation on equivalent water or other consumable resource utilization. This can be linked to from within the service status page. In addition, the methodology behind the calculations should also be provided in a centralized location.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.1 Choose a Sustainable Hosting Provider.
Description
Verify that the product or service is being served through a sustainable provider. If the hosting provider (for example) generates its electricity from renewables and can document other intensity reduction techniques (such as using natural cooling and offering auto-scaling packages), this can contribute towards passing the success criteria.
Because many factors can go towards how sustainable a provider is, machine testing will need to be vigilant in accounting for several variables. As such, using an established directory of sustainable hosts could be one way to pass or fail, or running validation checks against factors that affect the sustainability of services could be another (this may require internal access or knowledge).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.2 Optimize Browser Caching.
Description
Provide a mechanism for websites and applications to ensure that content is cached for the correct length of time, with the intention of reusability. Because certain types of content will change and need to be reloaded more frequently than others, it makes sense to cache those that change the least for the longest period (to reduce data transfer rates).
For machine testability, identifying cache response times for various file formats can be done by checking the Cache-Control headers of files as they are requested from the server. If the length of time the content is held does not appear to be long enough (or is too long for dynamic content), recommendations can be made to improve the formula used.
Examples
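An illustrative server configuration sketch (nginx syntax, values hypothetical): long lifetimes for static assets that rarely change, a short lifetime for HTML that changes frequently.

```nginx
# Static assets: cache for a year (safe when filenames are fingerprinted)
location ~* \.(css|js|woff2|png|svg)$ {
  add_header Cache-Control "public, max-age=31536000, immutable";
}

# HTML: revalidate on every request
location ~* \.html$ {
  add_header Cache-Control "no-cache";
}
```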
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.2 Optimize Browser Caching.
Description
Identify if techniques could be used to increase the performance within the user-experience. Examples of this could include identifying if cookies could (and should) be used within an interface or if the website or application could be transformed into a progressive web application to provide advantages (project-wide) such as offline availability.
Machine testability for identifying existing features is relatively straightforward, as it simply requires identifying through the source code where cookies, local database requests, service workers, or web workers are implemented and ensuring that they are both proportionate and correctly marked up. If these don't exist, it will require analyzing the page (or components) for potential opportunities for use.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.3 Compress Your Files.
Description
Provide a mechanism to serve content correctly using a recognized encoding method to reduce the payload size of websites and applications. While this means there will be an additional rendering effort on the client-side (decoding the file), the increased speed in loading such files has a PPP benefit that justifies the added effort.
For machine testability, identifying content encoding methods for file formats can be done by checking the content-encoding headers for various files as they are requested from the server. This can be achieved on an individual basis or through server configuration files. If the content is not being encoded (unless it has been encoded at the source and no further optimization can be made), recommendations can be made to use a compatible compression technique.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.3 Compress Your Files.
Description
Provide additional mechanisms for non-technical users so that the potential for compression is not only checked once files are in place within a website or application's structure (uploaded), but also during the upload process, thereby taking the opportunity to meet sustainability criteria on-the-fly as part of the content management system.
To pass this success criterion, a CMS would need to take assets that are uploaded and identify compression that could be applied either by changing file formats or by applying algorithms to it (or both). If improvements can be made, these should replace the original files by default. Machines can verify this is occurring by identifying if the best format is used or if further compression could be applied (this could form part of a CMS sustainability rating).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.4 Use Error Pages and Redirects Carefully.
Description
Provide a mechanism for when visitors land on a region of your website or application that either does not exist or appears to be broken. Such encounters can be disorienting, and without useful signposting and interactions to get visitors out of such events, additional unnecessary page loads or support requests can occur, leading to emissions.
Machine testing for errors can simply involve triggering the events and identifying whether the page loaded in response is server-generated or has been customized by the website or application. If it has been customized, does it provide useful resources to help resolve the problem? This could be measured using analytics (which may require internal access).
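One hypothetical heuristic for classifying the response to a deliberately broken URL is sketched below. The default-page markers and "helpful" signals are assumptions a real tool would need to tune:

```python
import re

# Markers commonly found in bare server-generated error pages
# (illustrative heuristics, not a definitive list).
DEFAULT_MARKERS = ("<center>nginx</center>", "Apache Server at",
                   "IIS detailed error")

# Signals that a page offers the visitor a way forward.
HELPFUL_SIGNALS = (r"<form[^>]*search", r'href="/"', r"<nav\b")

def classify_error_page(status: int, body: str) -> str:
    """Return 'not-an-error', 'default-page', 'custom-helpful', or
    'custom-unhelpful' for a fetched page."""
    if status < 400:
        return "not-an-error"
    lowered = body.lower()
    if any(marker.lower() in lowered for marker in DEFAULT_MARKERS):
        return "default-page"
    helpful = any(re.search(pattern, lowered)
                  for pattern in HELPFUL_SIGNALS)
    return "custom-helpful" if helpful else "custom-unhelpful"
```

A `"default-page"` or `"custom-unhelpful"` result would prompt a recommendation to add signposting (a search form, a link home) to the error page.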
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.4 Use Error Pages and Redirects Carefully.
Description
Reduce the amount of wasted rendering effort on the part of visitors and users of a website or application by ensuring that products and services regularly and routinely check that all of the links within pages are correct and do not need to be updated. This can be a common issue amongst large sites or those with older content (as the material can be moved or disappear online, so links must be updated to reflect this).
Regarding machine testability, verifying that all links within domains and subdomains of a product or service are correctly linked to and don't result in redirect notices or server / missing (not found) errors is essential for meeting this success criterion, as it showcases the currency of the content. It should also be noted that links apply equally to images, media, HTML head references, and other materials referenced within the page, not just content anchor links.
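As a sketch of the first half of such a checker, the collector below gathers every outbound reference from a page (anchors, images, stylesheets, scripts, frames) using only the standard library; a crawler would then request each URL and flag redirects and missing resources. The attribute map is an illustrative subset:

```python
from html.parser import HTMLParser

# Attributes that reference other resources; anchors are not the only
# links that can go stale (images, stylesheets, scripts, media too).
LINK_ATTRS = {
    "a": "href", "link": "href", "img": "src",
    "script": "src", "source": "src", "iframe": "src",
}

class LinkCollector(HTMLParser):
    """Collect every outbound reference in a page, not just <a> tags."""

    def __init__(self):
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        wanted = LINK_ATTRS.get(tag)
        for name, value in attrs:
            if name == wanted and value:
                self.links.append(value)

def extract_links(html: str) -> list[str]:
    collector = LinkCollector()
    collector.feed(html)
    return collector.links
```

Each collected URL would then be fetched (a HEAD request usually suffices) and any 3xx, 4xx, or 5xx response reported for correction.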
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.6 Automate To Fit the Needs.
Description
Provide a mechanism for identifying compromised content within a page of a website or application. As having a secure product or service is critical to meeting the societal (people) aspect of PPP and compromised websites can place unnecessary burdens on hardware in certain circumstances, being able to identify and remove such materials is essential.
Machine testability for these events will require regular monitoring of pages for common features of hijacking events such as critical links suddenly redirecting outside of the primary domain, pages being redesigned with hacking notices, or simply being taken down and replaced with other data. Maintaining a library of established dangerous links, patterns, and features can help flag such issues.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.7 Maintain a Relevant Refresh Frequency.
Description
Ensure that mechanisms are in place to hold data for only as long as it is required. This will reduce redundancy from things like stale cookies, and it will also assist older devices that may have lower capacity disk space allowances by reducing the overall size of caches which will free up important space (that otherwise reduces performance).
Identifying technologies for stashing data for a set time such as cookies, local databases, or other methods can be machine-tested. It's also worth identifying techniques that may be slower performing and making recommendations based on such implementations. Otherwise, verify that an expiry date does exist and if time has elapsed (or no date exists), flag the issue.
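For the cookie case specifically, a check could be sketched as below; the one-year ceiling is an illustrative assumption, not a limit this specification defines:

```python
from datetime import datetime, timedelta, timezone
from http.cookies import SimpleCookie

# Illustrative ceiling: cookies held longer than a year are flagged.
MAX_LIFETIME = timedelta(days=365)

def check_cookie_lifetime(set_cookie: str, now: datetime) -> str:
    """Return 'session', 'pass', 'expired', or 'too-long' for one
    Set-Cookie header value."""
    cookie = SimpleCookie(set_cookie)
    for morsel in cookie.values():
        if morsel["max-age"]:
            lifetime = timedelta(seconds=int(morsel["max-age"]))
        else:
            # No explicit lifetime: held only for the browsing session.
            return "session"
        if lifetime <= timedelta(0):
            return "expired"
        if lifetime > MAX_LIFETIME:
            return "too-long"
        return "pass"
    return "session"
```

A `"too-long"` or `"expired"` result would be flagged for review, while `"session"` cookies impose no lasting storage burden on the device.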
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.9 Enable Asynchronous Processing and Communication.
Description
Guarantee that visitors who experience a website or application will have their experience delivered using a secure delivery route. Transmitting data using non-secure means (now that SSL certificates can be obtained free of charge) should never occur as there are inherent risks that can otherwise be avoided during the browsing session.
Machine testing for protocols will involve verifying that the page is being served over HTTPS as opposed to HTTP (there should not be an occasion where both are available, especially if dynamic or interactive content exists). Furthermore, insecure protocols for non-browsing usage should be tested to verify they are disabled and replaced (if required) with more secure options.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.10 Consider CDNs and Edge Caching.
Description
Offer a mechanism for visitors to gain access to static website or application assets in a location that is closer to them than the origin host. By utilizing a content delivery network, large media files, images, fonts, and other static assets can be distributed to regional locations so that visitors can access them more quickly (as the route to the file is shorter).
Machine testing for content delivery networks should take into account whether adding assets to multiple locations (for performance) incurs more of a sustainability cost than the faster loading saves. If the value in faster loading is greater, and a CDN is in use, this qualifies as a pass. If no CDN is being used, or one is in use but the added value is neutral or negative, question its impact and whether the material should instead be hosted onsite.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.10 Consider CDNs and Edge Caching.
Description
Test the sustainability of the CDN, as a third-party service provider, against the WSGs, in addition to any sustainability criteria and best practices that exist externally for infrastructure and hosting. This applies not only to the provider itself but to all of the nodes it uses to supply distribution, and any third parties it uses as part of its extended network. As such, calculating and vetting providers can be complex.
For machine testability, providers should have information about sustainability offered within a sustainability statement along with details of compliance being met plus any sustainability features they provide that enhance their service. If this information is publicly available (and uses recognized sources), it could be parsed by machine, otherwise, a sustainable provider list should be sourced.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.10 Consider CDNs and Edge Caching.
Description
Provide a mechanism for mirroring the content (website or application) as close to the visitor as possible to reduce loading times as effectively as possible. This can be done using CDNs or by analyzing metrics data and deciding, based on the locale of visitors, where the origin host would be best placed (both can be useful methods).
Machine testing for this requires internal access to analytics logs to determine where visitor locations are and from this identify the best location to place information (if a CDN is used, this may be automatically done on your behalf meeting compliance requirements). If no internal access or data is available, general statistics about Internet usage could help identify trends.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.10 Consider CDNs and Edge Caching.
Description
Prevent non-first-party dynamic resources from being loaded through a content delivery network. The justification is that, due to browser mechanisms such as CORS and cache partitioning, unfavorable things can occur if you try to load dynamic resources from third-party websites (this is a security measure to prevent malicious code injection).
As this issue doesn't affect static resources such as HTML, CSS, or JSON, these types of resources can be excluded when testing against the criteria. For more dynamic resources like JavaScript or server-side code, if the host URL differs from your own (this includes things like frameworks), flag an error and recommend self-hosting or integration with existing tooling.
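The script-host comparison could be sketched as below, using only the standard library; relative URLs are treated as first-party:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptHostChecker(HTMLParser):
    """Flag <script src> references served from a host other than the
    page's own (static assets such as CSS are ignored here)."""

    def __init__(self, first_party_host: str):
        super().__init__()
        self.first_party_host = first_party_host
        self.third_party: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).netloc
        # Relative URLs have no netloc and are first-party by definition.
        if host and host != self.first_party_host:
            self.third_party.append(src)

def find_third_party_scripts(html: str, host: str) -> list[str]:
    checker = ScriptHostChecker(host)
    checker.feed(html)
    return checker.third_party
```

Each flagged URL would produce a recommendation to self-host the resource or integrate it into existing first-party tooling.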
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.12 Store Data According to Visitor Needs.
Description
Provide a mechanism that ensures all content within a website or application gets regularly reviewed and if necessary refreshed, updated, or if it has reached the end-of-life, deleted or archived. As such, publicly providing a date when events will occur should offer an incentive for providers to action these labels (and visitors can note outdated content).
Machine testability should offer a grace period for content that has passed the date when a review should take place (as circumstances can lead to dates being missed), however, failings should still be flagged as potential issues that need to be resolved. Labels within the content should be detectable by scripts that issue such dates, and if they don't exist, recommendations can be issued.
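As a sketch, assuming review dates are exposed via a hypothetical `<meta name="review-date">` label (the label format and the 30-day grace period are both assumptions, not requirements of this specification):

```python
import re
from datetime import date, timedelta

# Hypothetical grace period; the technique only says one should exist.
GRACE = timedelta(days=30)

# Assumed label format, e.g. <meta name="review-date" content="2024-06-01">
REVIEW_RE = re.compile(
    r'<meta\s+name="review-date"\s+content="(\d{4})-(\d{2})-(\d{2})"')

def check_review_label(html: str, today: date) -> str:
    """Return 'no-label', 'current', 'in-grace', or 'overdue'."""
    match = REVIEW_RE.search(html)
    if not match:
        return "no-label"
    review = date(*map(int, match.groups()))
    if today <= review:
        return "current"
    if today <= review + GRACE:
        return "in-grace"
    return "overdue"
```

`"no-label"` would trigger a recommendation to add review dates; `"overdue"` would be flagged as a potential issue to resolve.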
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.12 Store Data According to Visitor Needs.
Description
Increase findability within content by offering mechanisms within the page for visitors to filter the results of potential content they are seeking by category, tag, or other variables. How this feature is presented to visitors can differ between implementations as there are several well-defined patterns, but they can reduce problematic friction in the user-experience.
Machine testability for such features will involve examining the page's source code to identify repeating patterns across several pages (if correctly labeled, patterns such as tags or categories may be semantically easy to find). The sustainability of such implementations must weigh the impact they have upon rendering across each page against the benefit they have to the visitor experience.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 4.12 Store Data According to Visitor Needs.
Description
Ensure that any third-party backup service provider is examined for sustainability impacts. This will involve examining the impact they have simply in hosting your data, but also how sustainable their platform is, as well as their business. If their model meets any obligations laid out and aligns well against the WSGs, it may meet success criteria.
To identify how sustainable a service is, a list of recommended sustainable backup providers would be an ideal source to utilize (assuming that one exists and it is itself reputable). If no list exists, checking providers' claims, sustainability statements, and measuring their service against criteria such as the WSGs can be helpful to some extent, though utilizing established standards like the GRI and recommendations around infrastructure will also help.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.1 Have an Ethical and Sustainability Product Strategy.
Description
Ensure that the product or service in question has the documentation to back up any claims relating to both ethical policy decisions and sustainability, offline and digital, in its practices. This can include anything within the scope of PPP and provide coverage of compliance with relevant legislation, standards, and best practices.
Using heuristics, machine testing can identify key passages within the text (potentially by the headlines) to check for various sections that should be present within a sustainability statement or code of ethics policy document. In addition, it's critical that the document exists and is easily found, so verify its location and check that it is referenced appropriately within the document.
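The headline check could be sketched as below; the list of expected sections is an illustrative checklist, not headings mandated by this specification:

```python
import re

# Sections a sustainability statement might be expected to cover;
# an illustrative checklist, not mandated headings.
EXPECTED_SECTIONS = ("scope", "goals", "emissions", "reporting", "contact")

def missing_sections(html: str) -> list[str]:
    """Return the expected sections that have no matching heading
    anywhere in the page."""
    headings = [h.lower() for h in
                re.findall(r"<h[1-6][^>]*>(.*?)</h[1-6]>", html, re.S)]
    return [section for section in EXPECTED_SECTIONS
            if not any(section in heading for heading in headings)]
```

Any returned section names would be reported as gaps in the statement's coverage for the author to address.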
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.1 Have an Ethical and Sustainability Product Strategy.
Description
Provide a mechanism within a sustainability statement to identify actions that a product or service may have undertaken that go beyond the WSGs to become sustainable. These may include work from other specifications, supplementary materials to the WSGs, or third-party tooling that has been created by the organization to improve sustainability.
Machine testability for such features would include identifying references to the documentation within a sustainability statement. Additionally, if tooling itself is aware of anything beyond the scope of the WSGs that warrants inclusion for improving sustainability for people and the planet, this could also be identified within the tooling and calculated into the overall scoring metrics.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.1 Have an Ethical and Sustainability Product Strategy.
Description
Go beyond extensive documentation for a website or application by offering optional interactive tuition for your product or service. This can take the form of guided tours, in-app assistance, or even video tutorials. The important thing is that visitors can acquaint themselves with the environment you provide and reduce problematic friction. Learning how to use a project quickly can also reduce wasted hardware utilization.
Unlike documentation, instructional material that is integrated within a product or service will be harder to machine test against (as it will be tightly merged into a codebase), however, being able to identify the settings to enable such features should be achievable (as input boxes can be read), or if such instruction is provided on a dedicated subdomain, locating that resource.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.1 Have an Ethical and Sustainability Product Strategy.
Description
Provide a mechanism for visitors to a website or application to validate claims of the use of terms such as green, sustainable, or eco-friendly within the product or service. This can be inclusive of when a website claims to be powered by renewable energy or claims to be more sustainable than others based on testing tools that exist on the market.
For machine testability, metrics data could be used to validate such claims if the research has been provided in the public domain. If a testing tool has been used to make the claim, this can be validated by the accuracy of said tool and its methodology. If it is based on just first-party information, scan for artifacts that may help verify sources, such as research, carbon.txt files, etc.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.2 Assign a Sustainability Representative.
Description
Verify that the website or application service provider has an individual who is responsible for ensuring the sustainability of the product or service. While they may have other responsibilities at the business, or have to answer to those with more power or influence than them, it's still an important role that needs at least one officer.
For machine testability, check the sustainability statement for contact details to verify that the individual responsible for managing such statements, coverage, and reporting is the lead on sustainability. If no details are available, there may be hints in staff pages for agencies or companies that can be scanned; otherwise, flag this as a potential failure point. In the case of individuals, this can be ignored as they will be responsible for all aspects of a project.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.3 Raise Awareness and Inform.
Description
Provide a mechanism for visitors of a product or service to gain greater awareness not only of digital sustainability but of the website or application's journey to become more sustainable. This can involve content creation, use of social media, or other documentation that will engage their particular audience and showcase improvements over time.
Machine testing for awareness and informing can be identified by using search terms (for example "sustainability", "PPP", or "carbon"), and identifying the postings and density of those occurrences. The more content that relates to relevant subject matter (and how current it is), the more likely campaigns to increase awareness are taking place internally and externally in that community.
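A simple density measure could be sketched as below; the term list is illustrative (taken from the examples above) and a real tool would extend it:

```python
import re
from collections import Counter

# Illustrative search terms drawn from the technique text.
TERMS = ("sustainability", "sustainable", "carbon", "ppp")

def term_density(text: str) -> dict[str, float]:
    """Count occurrences of each sustainability-related term per
    thousand words of page text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    per_thousand = 1000 / max(len(words), 1)
    return {term: counts[term] * per_thousand for term in TERMS}
```

Tracking these densities across pages (and over time) gives a rough signal of whether awareness content is being published and kept current.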
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.5 Estimate a Product or Service's Environmental Impact.
Description
Provide a sustainability benchmark of competitors to ensure that you maintain a regular schedule of improvements to your own work. This is something that is observed in many other aspects of business, but it is a great principle for keeping an edge over others who may want to lead the change on being environmentally friendly.
It's important to consider any third-party service in isolation when analyzing services that are independent of your own. This means that you should run that product or service against the WSGs without consideration of your own results and only compare them once testing is complete. For machine testing, competitors could be provided by the user after the website or application has been analyzed and scanned, and those sites could then be tested and compared.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.6 Define Clear Organizational Sustainability Goals and Metrics.
Description
Provide a driving force for sustainable change laid out in the sustainability statement. This can contain as little or as much information as you wish but it should aim to contain realistic timescales and targets aligned with roadmapped features for users to be able to realistically identify if goals are being met over the lifespan of the project.
Machine testing such criteria should at least start with the existence of the sustainability statement, and the existence of a section on sustainability goals. If these exist then heuristic testing can identify key passages within the text relating to timeframes and potential mapped features (if these are listed), or if not, then at least bullet-pointing the number of goals being provided.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.7 Verify Your Efforts Using Established Third-Party Business Certifications.
Description
Provide a mechanism for product or service owners who have achieved certifications either as an individual or as a business in a related sustainability field (with a recognized achievement or certification) to have that accounted for in the calculation of their sustainability journey. Any such certifications should be included within a statement.
For machine testability, this would require having a compiled list of recognized certifications and achievements (that are recognized as sustainable), which could then be identified by embedded links such as images for that scheme. Testing would also need to verify members' achievements. This can be used in weighted scoring to give additional marks for going beyond the WSG criteria.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.9 Support Mandatory Disclosures and Reporting.
Description
Provide a simple mechanism to identify what reporting scheme a business may operate under (such as GRI), or if they have any available guidance regarding disclosing and reporting sustainability outside of their sustainability statement. This may come under some publicly available policy documents, or it may require internal access to locate the data.
The important step in this process is to scan through the pages of the site to identify every policy page and flag any that relate to disclosures or reporting (that mention sustainability). These are key references that, without good information architecture, could be hard for the visitor to find. If such policies exist, ensure they are referenced within the sustainability statement.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.9 Support Mandatory Disclosures and Reporting.
Description
Ensure that policy decisions match up to regulatory and compliance targets for businesses and individuals aiming to become sustainable. By ensuring that you not only report your existing carbon emissions but also how you plan to reduce any remaining emissions (and be transparent with the public on this matter), you encourage trust in your brand.
For machine testability, these reports should be linked to from within the sustainability statement on your public-facing website, and if possible in an open format such as HTML (otherwise, an accessible PDF format should be used). Remember that such documents should be regularly maintained if new information that is important to your sustainability journey should come to light.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.9 Support Mandatory Disclosures and Reporting.
Description
Encourage transparency and visibility around the shift to sustainability for products and services. By actively seeking reports that have to be submitted for compliance reasons (and verifying the data if using an open standard), we can more easily monitor how well a business or individual is meeting their obligations and reaching any targets.
Because certain content within disclosure documentation may be confidential, internal access for tooling may be required if the organization in question is unwilling to provide a publicly visible anonymized version that will still offer the same useful public data (for those interested). If the data is available, this can be used within monitoring tools to track compliance checkpoints over time.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.11 Follow a Product Management and Maintenance Strategy.
Description
Provide a location where visitors and customers of your website or application can easily identify to what extent you will maintain and continue to provide updates for the product or service you offer. This agreement should include details not just for the product you offer (paid or otherwise), but for the website hosting the materials, and its assets.
For machine testability, first examine that the agreement exists and that it can be easily understood (many policy documents are written in legalese, so having a plain English version should be a priority). Testing tools can also identify key passages that could be highlighted as requirements for a user's particular needs (such as timeframes for coverage or exclusionary features).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.12 Implement Continuous Improvement Procedures.
Description
Verify that the product or service has a proven track record of iterating over its lifetime. Naturally, the quality of the iteration (whether those changes have been improvements) will be more difficult to quantify, however, the fact that a product or service is active and being maintained is usually a good indicator of sustainable development.
Machine testing for the iteration record is easier with open source projects, as a timestamped record is usually available through the repository provider, where you can analyze the code for impactful issues. If such access is not possible, archival or caching tools for websites can help provide a useful record of iteration over time and may (using a DIFF tool) showcase iterative alterations.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.12 Implement Continuous Improvement Procedures.
Description
Provide a mechanism for users or visitors of a website or application to firstly identify when changes have taken place on a product or service and secondly be able to learn about the new features in more detail (if provided) so that they can quickly adjust themselves to changes that have been made for sustainability or other reasons (reducing confusion).
Each product or service should have an associated set of release notes that is regularly maintained; as such, version numbers within those notes should match the version numbers of the available versions of the product or service. In addition, release notes should be detailed and sectioned based on criteria such as additions, removals, changes, and fixes - not just a single bullet that offers little context. Links and images with more detail can also be helpful.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.14 Establish if a Digital Product or Service Is Necessary.
Description
Provide additional compliance criteria for testing tools to ensure that sustainability goals are being met. As such, testing tools can analyze websites to vet each product or service against the United Nations Sustainable Development Goals to decide if enough evidence exists within WSG compliance (and other criteria) to pass or fail applicable categories.
Because the SDGs may have scope beyond the WSGs, both individuals and testing tools must develop criteria that meet the strict requirements of each category to be considered a compliance pass or fail. As such, it may require additional input from the tester, internal access, or additional measurements and evidence gathering undertaken (with scores adjusted).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.16 Create a Supplier Standards of Practice.
Description
Increase transparency within an organization and ensure that visitors and customers can see how suppliers and potential suppliers in the supply chain are vetted to confirm that they meet sustainable guidance set up by the organization. This can relate to the things purchased, or even the business and the way it operates.
Machine testing a supplier policy first requires detection that the policy exists in a public area of a website or application. Once detected, the list of suppliers can be gathered and unless the provider has already been validated by a verified third-party, conditions for testing can be identified and listed by the tool. If not, consider testing that supplier as a first-party against the WSGs (if a digital service is given), otherwise, test against other compliance targets.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.16 Create a Supplier Standards of Practice.
Description
Provide a mechanism where members of the public who view a supplier policy document within your website can easily view the sustainability impact that working with each supplier will have on your product or service. This can be measured in many ways, but a detailed report of any benefits, issues, and solutions over time is helpful.
Identifying that a list of partners exists is key. If one exists, attempt first to verify the impact of those services from information on the website itself. If the information is not found, flagging the issue and attempting to test the partners individually against the WSGs would be an ideal next step (in a similar fashion to competitor analysis) to identify problematic relationships.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.17 Share Economic Benefits.
Description
Provide a mechanism to ensure that all employees are paid a living wage. The mechanism for including this within a product or service for a business could be as simple as a living wage badge in their sustainability statement (or website footer), or for additional points, publishing transparent pay grades across the business on the website.
Heuristic testing, by examining the source code of a document on employee benefits (pay grades) on a website, can identify whether that business is a living wage employer, as can comparing any existing job openings with pay grades against living wage rates. If these are not available, a living wage badge may be an indicator, but evidence will need to back the claim.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.17 Share Economic Benefits.
Description
Verify that a business offers a variety of benefits to its employees to make their working life better. This has coverage in the people aspect of PPP and therefore is important for sustainability. These benefits do not have to be limited to in-work additions but could be anything that employees utilize and find beneficial in or out of the workplace.
Machine testing for such benefits can be difficult as there isn't a universal list of what can be offered, however, there is a commonality between the types of things that workplaces often provide. As such, analyzing words in job openings or culture / workplace pages on websites can identify these benefits and weigh the value (potentially) based on their impact (sustainably or otherwise).
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.19 Use Justice, Equity, Diversity, Inclusion (JEDI) Practices.
Description
Ensure that visitors are made aware of (or able to find out) the lengths that a product or service has gone to, ensuring that individuals with accessibility needs have been included in the design and development of the product or service. It should also aim to describe any limitations of the product or service and how to submit a support request.
For machine testability, attempt to first establish that the accessibility statement exists and then try to identify the headlines within the document to establish its content. From there you can use heuristic testing of the source code to work out what accessible features have been provided, and what potential limitations may exist for inclusive design and accessibility needs.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.20 Promote Responsible Data Practices.
Description
Ensure that visitors can access a publicly visible privacy policy and that the content is human-readable, containing references to legislation that the website or application meets for compliance purposes (and how this is achieved). As privacy falls under the people aspect of PPP conformance, this has sustainability implications and should be followed.
If the product or service is accessible from or operates within multiple jurisdictions, testers should take this into account when identifying relevant legislation to include within the policy document. The document should be visible within the footer of a website, or in a section alongside other relevant policy files and documents such as the accessibility and sustainability statements.
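One way to automate the visibility check is to look for a privacy policy link in the page markup. This sketch matches on link text only (an assumption for illustration); localized sites would need their own terms, and a robust tool would use a proper HTML parser.

```python
import re

# Matches an anchor whose visible text mentions "privacy", case-insensitively.
POLICY_PATTERN = re.compile(r"<a\b[^>]*>\s*([^<]*privacy[^<]*)</a>", re.I)

def has_privacy_policy_link(html):
    """True when at least one link's text references a privacy policy."""
    return bool(POLICY_PATTERN.search(html))
```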
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.21 Implement Appropriate Data Management Procedures.
Description
Provide a mechanism for retrieving old information that is no longer current but may still have practical use as a reference for individuals seeking information that had prior use or relevance for a product or service. This technique is most useful when the information is content-focused or showcases iterative change.
For machine testability, several criteria should be taken into account. First, the outdated material should be isolated on either a subdomain or a subfolder to avoid clashing with existing content. This material should also be reduced to a minimalist form (content only) to reduce sustainability impacts. Furthermore, all links should be checked to ensure that they resolve to a working reference.
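The isolation criterion could be checked as in the sketch below; the `archive.` subdomain and `/archive/` subfolder conventions are assumptions for illustration (the link-resolution check requires network access and is omitted).

```python
from urllib.parse import urlparse

def is_isolated_archive(url):
    """True when a URL sits on a dedicated archive subdomain or subfolder."""
    parts = urlparse(url)
    host = parts.hostname or ""
    return host.startswith("archive.") or parts.path.startswith("/archive/")
```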
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.21 Implement Appropriate Data Management Procedures.
Description
Offer a mechanism that allows users of a product or service to manage any information (data) or accounts (subscription or otherwise) they may hold. Best practices for such systems include allowing the deletion of accounts and / or data without human intervention, and providing a single step (with confirmation) to achieve this.
Machine testing such features will undoubtedly require internal access to a product or service; however, if account creation can be automated or otherwise undertaken, tooling can verify the source code of the account management system. Where internal access is required, gain permission, then check for the ability to delete accounts (and verify that this action works); if it does, pass the check.
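The deletion check might be simulated as below. `AccountSystem` is a hypothetical stand-in for the real service, which would be driven through its UI or API once permission is granted; the probe creates a throwaway account, deletes it with confirmation, and verifies it is gone.

```python
class AccountSystem:
    """Hypothetical stand-in for a real account management backend."""
    def __init__(self):
        self.accounts = set()
    def create(self, user):
        self.accounts.add(user)
    def delete(self, user, confirmed=False):
        # Single-step deletion, gated on an explicit confirmation.
        if confirmed and user in self.accounts:
            self.accounts.remove(user)
            return True
        return False

def check_deletion_works(system):
    """Pass only when a created probe account can be confirmed-deleted."""
    system.create("probe-user")
    deleted = system.delete("probe-user", confirmed=True)
    return deleted and "probe-user" not in system.accounts
```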
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.22 Promote and Implement Responsible Emerging Technology Practices.
Description
Offer a document that describes in detail any mechanisms that a product or service has in place regarding emerging technologies such as artificial intelligence, cryptocurrencies, or other (sustainably speaking) impactful aspects of the web. If none of these features are likely to be used by you or your suppliers, state so; otherwise, provide details.
For machine testability, it's important to ensure that such a document exists, as emerging technologies are having an increasingly significant effect on the sustainability of the Internet. If no statement exists, flag this concern. If one does exist, check which technologies are being utilized and to what extent, and weigh these in subsequent calculations (raising issues as appropriate).
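A sketch of such a check follows: if no statement text is found, the concern is flagged (here by returning `None`); otherwise the technologies mentioned are listed for later weighting. The term list is an illustrative assumption.

```python
# Hypothetical terms a scanner might look for in an emerging-tech statement.
EMERGING_TECH = ("artificial intelligence", "machine learning",
                 "cryptocurrency", "blockchain", "nft")

def flag_emerging_tech(statement_text):
    """None signals a missing statement; otherwise list the terms found."""
    if not statement_text.strip():
        return None
    text = statement_text.lower()
    return [t for t in EMERGING_TECH if t in text]
```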
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.23 Include Responsible Financial Policies.
Description
Ensure that if an individual or organization is endorsed or financed by an external group or organization, any influence that group exerts is sustainably minded (and will not divert the product or service away from such activities). Ensuring all endorsements are sustainable can also help service providers avoid claims of greenwashing.
Machine testing such variables will first require that all partnerships and sponsors be correctly labeled as such. If they are labeled within the product or service, the tester can evaluate each of these services against the WSGs as a first party (an isolated service) and verify how sustainable they are in comparison. If they score poorly, a warning about conflict-of-interest issues can be provided.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.24 Include Organizational Philanthropy Policies.
Description
Identify any third-party projects connected with the product or service that can be verified as sustainability causes or schemes of value, which may either benefit the business on its sustainability journey or directly lead to improvements for people or the planet. Schemes can include reinvesting in the ecosystem or community, or reducing emissions.
For machine testability, this would require a compiled list of schemes recognized as sustainable, which could then be identified via embedded links (such as images) for each scheme. Testing could also check schemes that support verifying members as required. This could be used in weighted scoring to award additional marks for going beyond our guidelines.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.27 Define Performance and Environmental Budgets.
Description
Provide a clearly defined set of targets to achieve when working to make an existing project more sustainable or to create a new, sustainable website or application. Just as with performance budgets, you can define the variables that are most achievable and iterate, with more variables (and improvements) included over a project's lifecycle.
For machine testability, the budget (often provided in a format like JSON) will either need to be submitted to the testing agent or be declared in some other manner to ensure that any results can be compared against the budget for alignment. If the budget is met, a pass can be given; if not, a warning can be issued with recommendations for potential improvements.
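The comparison step could be sketched as below. The budget keys and units are assumptions modelled on typical performance-budget files, not a defined STAR format.

```python
import json

# Hypothetical declared budget, as it might be submitted to a testing agent.
BUDGET_JSON = """{
  "page_weight_kb": 500,
  "requests": 30,
  "co2_per_view_g": 0.5
}"""

def check_budget(budget_json, measured):
    """Return ('pass', {}) or ('warn', {metric: (measured, budget)})."""
    budget = json.loads(budget_json)
    over = {k: (measured[k], v) for k, v in budget.items()
            if measured.get(k, 0) > v}
    return ("pass", {}) if not over else ("warn", over)
```

A warning result carries the offending metrics alongside their budgeted values, which a tool could translate into concrete recommendations.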
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.27 Define Performance and Environmental Budgets.
Description
Provide a clearly defined set of targets to achieve when working to make an existing project more performant or to create a new website or application that is fast and optimized. As with a sustainability budget, you should define the variables that are most relevant to your project and consider targets that will make a sustainable difference to your visitors.
For machine testability, the budget (often provided in a format like JSON) will either need to be submitted to the testing agent or be declared in some other manner to ensure that any results can be compared against the budget for alignment. If the budget is met, a pass can be given; if not, a warning can be issued with recommendations for potential improvements.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.27 Define Performance and Environmental Budgets.
Description
Provide a mechanism for service providers to monitor the sustainability or performance improvements that occur over time as they revise their budgets to meet stricter targets after existing ones are met. This will require constant monitoring of existing products or services to ensure that goals are being met and that budgets continue to improve.
As with all budget-related guidance, the budget (often provided in a format like JSON) will either need to be submitted to the testing agent or be declared in some other manner so that previous and current targets can be reviewed over time. In addition, monitoring tools should note when targets are reached and indicate the potential to set new goals that further improve the work.
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.28 Use Open Source Tools.
Description
Consider the place that open source has in a product or service. If open source code (for example) is used within libraries such as frameworks, these should be identified and the license type verified to ensure compliance (for instance, with attribution requirements). If the business in question is supportive of open source, this should also be noted publicly.
Identifying an open source license should be straightforward, especially if the product or service utilizes a public repository system like GitHub. In addition, source code can be scanned for required attribution references and if an open source policy exists within either the license agreement on the website or in its own document, this can be highlighted as working towards compliance.
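A toy version of license identification by phrase matching follows. The phrase list is a small illustrative subset; real detectors (such as SPDX full-text matching) are far more thorough.

```python
# Distinctive phrases from well-known licenses, keyed by SPDX identifier.
LICENSE_PHRASES = {
    "MIT": "permission is hereby granted, free of charge",
    "Apache-2.0": "licensed under the apache license, version 2.0",
    "GPL-3.0": "gnu general public license",
}

def identify_license(license_text):
    """Return an SPDX identifier when a known phrase is found, else None."""
    text = license_text.lower()
    for spdx_id, phrase in LICENSE_PHRASES.items():
        if phrase in text:
            return spdx_id
    return None
```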
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.28 Use Open Source Tools.
Description
Provide a mechanism to identify any open source work an individual or business may have produced that gives back to the community. Open source and collaborative projects are a sustainable way of helping to evolve knowledge around digital issues and, as such, should be referenced when meeting sustainability objectives (especially high-profile work).
Machine testing open source will first require identifying a public repository associated with the website or project (usually indicated by a social link button). Once this has been established, the projects that the group or its members originated or contribute to (including activity levels) can be identified and weighed to determine compliance with the success criteria.
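The repository-discovery step could be sketched as below; the host list is an illustrative assumption, and a robust tool would parse the markup rather than use a regular expression.

```python
import re

# Hypothetical set of public repository hosts to look for in page links.
REPO_HOSTS = ("github.com", "gitlab.com", "codeberg.org")

def find_repo_links(html):
    """Collect hrefs pointing at known public repository hosts."""
    hrefs = re.findall(r'href="([^"]+)"', html)
    return [h for h in hrefs if any(host in h for host in REPO_HOSTS)]
```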
Examples
Tests
Procedure
Expected Results
Applicability
This technique is Advisory to meet 5.29 Create a Business Continuity and Disaster Recovery Plan.
Description
During downtime or when issues occur, it's critical that visitors or users can understand what is happening to the product or service, why it is happening (if known), and when the service is likely to be restored. This helps ensure continuity of service and, if a backup or alternative can be offered, provides a way of reducing the resources wasted by repeatedly reloading the non-functioning feature.
Machine testing how visitors are kept informed and provided with alternative methods of interacting with the product or service could look for a third-party provider that takes up the slack when the first party becomes unavailable. The use of archived material (identified by its content) could also be an indicator, as could content provided on some kind of automatically updating channel that lists ongoing issues.
Examples
Tests
Procedure
Expected Results
This section is non-normative.
Interoperability is important to web professionals. Better interoperability among implementations means that web professionals can create websites, applications, and tooling designed to be sustainable, and ensure that their work is successfully repeatable (and testable) in several environments. It means reducing the potential for Web sustainability issues to occur in complex projects, and reducing implementation time where automation and tooling can assist during the creation, development, and maintenance process. Writing tests in a way that allows them to be run in all browsers gives implementors confidence that they are shipping software that is compatible and consistent with other implementations.
Good test suites drive interoperability. They are a key part of making sure web standards are implemented correctly and consistently. More tests encourage more interoperability, but incorrect tests drive interoperability toward incorrect behavior. As such, Web sustainability needs good test suites. It's an evolving field and most of the test suites are still works in progress, so they may contain errors.
The primary focus of this test suite is to provide interoperability for tool makers (in terms of automation and compliance with the WSGs). In addition, we also aim to provide meaningful testable metrics that can be measured and help identify the true impact of the Success Criteria within the WSGs, and therefore better understand the impact that digital has on the ecosystem.
Implementors could use the dataset provided to expand upon the results and offer more nuanced research. Additionally, there is the potential for toolmakers to create more accurate products that measure the carbon impact of products. While the scope of such matters may stretch beyond this group's remit, the results could feed back into, and impact, further iterations of our work.
The table below contains links to our test suite results, generated using a cross-section of machine-readable techniques from the previous section.
The tests themselves (along with any corresponding reports generated) are stored on GitHub in our public repository under the test-suite folder.
UX | WebDev | Hosting | Business | |||||||||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Guideline | SC | 1 | 2 | 3 | 4 | 5 | 1 | 2 | 3 | 4 | 5 | 6 | 1 | 2 | 3 | 4 | 5 | 6 | 1 | 2 | 3 | 4 | 5 | 6 |
1 | PASS | PASS | PASS | PASS | FAIL | FAIL | PASS | FAIL | PASS | PASS | FAIL | FAIL | PASS | PASS | ||||||||||
2 | PASS | PASS | PASS | FAIL | FAIL | PASS | PASS | PASS | PASS | |||||||||||||||
3 | PASS | PASS | PASS | PASS | FAIL | PASS | ||||||||||||||||||
4 | FAIL | FAIL | PASS | PASS | PASS | FAIL | ||||||||||||||||||
5 | FAIL | FAIL | PASS | PASS | FAIL | FAIL | FAIL | PASS | FAIL | |||||||||||||||
6 | PASS | FAIL | PASS | PASS | PASS | FAIL | FAIL | PASS | FAIL | FAIL | FAIL | PASS | PASS | |||||||||||
7 | PASS | PASS | PASS | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | ||||||||||||||
8 | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | FAIL | FAIL | FAIL | FAIL | |||||||||||||
9 | PASS | PASS | PASS | PASS | PASS | FAIL | PASS | FAIL | PASS | PASS | PASS | FAIL | ||||||||||||
10 | PASS | PASS | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | FAIL | ||||||||||||||
11 | PASS | FAIL | PASS | PASS | PASS | PASS | PASS | FAIL | PASS | FAIL | FAIL | FAIL | ||||||||||||
12 | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | PASS | PASS | FAIL | PASS | FAIL | FAIL | FAIL | PASS | FAIL | PASS | FAIL | ||||||
13 | PASS | PASS | FAIL | |||||||||||||||||||||
14 | PASS | PASS | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | FAIL | FAIL | |||||||||||||
15 | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | FAIL | ||||||||||||||||
16 | PASS | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | PASS | |||||||||||||||
17 | FAIL | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | PASS | FAIL | ||||||||||||||
18 | PASS | PASS | PASS | FAIL | ||||||||||||||||||||
19 | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | PASS | FAIL | FAIL | FAIL | |||||||||||||
20 | PASS | PASS | PASS | PASS | PASS | FAIL | FAIL | |||||||||||||||||
21 | PASS | PASS | FAIL | FAIL | PASS | FAIL | PASS | PASS | ||||||||||||||||
22 | FAIL | FAIL | PASS | PASS | FAIL | PASS | FAIL | FAIL | FAIL | FAIL | ||||||||||||||
23 | PASS | PASS | PASS | PASS | PASS | PASS | FAIL | |||||||||||||||||
24 | PASS | FAIL | FAIL | FAIL | FAIL | PASS | FAIL | |||||||||||||||||
25 | PASS | PASS | PASS | FAIL | ||||||||||||||||||||
26 | PASS | PASS | FAIL | FAIL | FAIL | FAIL | ||||||||||||||||||
27 | FAIL | PASS | PASS | FAIL | PASS | FAIL | ||||||||||||||||||
28 | FAIL | PASS | FAIL | PASS | ||||||||||||||||||||
29 | PASS | PASS | PASS | FAIL | PASS | FAIL | PASS |
If attempting to create a test to include within or expand the test suite, remember to follow these review guidelines and be aware that our tests follow the formatting structure of tests created for the CSS Working Group (for interoperability as well as convenience). For an indication of how tests should be structured, use our documented template test as a starting point.
Also, remember that qualitative and quantitative tests are equally valuable as long as they can be tested by machine (and thus automated in some way). As tests provide a means of identifying whether Success Criteria can offer Automated testing over Manual interventions, notifications of this potential attribute (and methods) will be referenced within the main specification.
This section is non-normative.
The person, team of people, organization, in-house department, or other entity that commissioned the evaluation. In many cases the evaluation commissioner may be the website owner or website developer; in other cases it may be another entity.
Satisfying all the requirements of a given standard, guideline or specification.
The person, team of people, organization, in-house department, or other entity responsible for carrying out the evaluation.
Interoperability is the ability of two or more systems or components to exchange information and to use the information that has been exchanged.
A non-embedded resource obtained from a single URI using HTTP plus any other resources that are used in the rendering or intended to be rendered together with it by a user agent.
Dynamically generated web pages sometimes provide significantly different content, functionality, and appearance depending on the user, interaction, device, and other parameters. In the context of this methodology such web page states can be treated as ancillary to web pages (recorded as an additional state of a web page in a web page sample) or as individual web pages.
A coherent collection of one or more related web pages that together provide common use or functionality. It includes static web pages, dynamically generated web pages, and mobile websites and applications.
The person, team of people, organization, in-house department, or other entity that is involved in the website development process including but not limited to content authors, designers, front-end developers, back-end programmers, quality assurance testers, and project managers.
The person, team of people, organization, in-house department, or other entity that is responsible for the website.
Additional information about participation in the Sustainable Web Design Community Group (SWD-CG) can be found within the wiki of the community group.
Alexander Dawson, Andy Blum, Francesco Fullone, Ian Jacobs, Laurent Devernay, Len Dierickx, Łukasz Mastalerz, Mike Gifford, Morgan Murrah, Thibaud Colas, Tim Frick, Tzviya Siegman, Zoe Lopez-Latorre