Sustainable Tooling And Reporting (STAR) 1.0 provides information, examples, and metrics data to complement the Web Sustainability Guidelines (WSG) 1.0 specification. Within this supplementary document, you will find implementation advice (for external groups wishing to incorporate sustainability in their work), an evaluation methodology (for testing conformance), a categorized series of techniques (potential implementations and case studies), and a test suite (metrics data on impact and machine automation capability).

As with the WSGs, these features have been inspired by the work of the Web Accessibility Initiative ([[WAI]]). They have also been curated with a Web sustainability methodology, with the goal of better understanding the digital landscape's role in reducing harm to the wider ecosystem (regarding people and the planet).

For the normative technical specification, see https://w3c.github.io/sustyweb/.

Help improve this page by sharing your ideas, suggestions, or comments via GitHub issues.

Implementation Advice

The following section provides advisory guidance for other specification writers and external groups wishing to incorporate digital sustainability within their work (such as W3C Working and Community Groups). We are primarily a group dedicated to fostering sustainable change in Web technologies associated with the creation of websites and applications. While it is outside our group's scope to conduct horizontal reviews of other groups' specifications, we may request, or accept requests for, collaboration and coordination with other groups to implement sustainable change.

Considerations

The WSGs have a broad appeal, designed to impact a wide range of Web technologies and related infrastructure as appropriate. With this in mind, when first starting to incorporate sustainability within a body of work, you may find that certain guidelines apply to your practices more than others.

When designing any fledgling body of work it's worth considering the following:

  • When creating specifications or industry-specific documents, cross-reference the specific WSG guidelines appropriate for the body of work, connecting elements of content to applicable sustainability goals.

  • Create new sustainability guidelines explicitly targeted at a technology (one that may be too niche to be included within the WSGs). This would provide additional Success Criteria for an audience to meet.

If you do wish to create additional targeted guidelines, please first consult the SWD-CG as we may be able to include them within the main WSGs or provide guidance to avoid conflicts with other existing guidelines.

Methodology

With the above considerations taken into account, a good next step is to consider sustainability within content and treat sustainability as any other impact target (such as accessibility or performance) by trying to drive change through metrics data. If research exists to back a particular technique as more performant, it is likely to be more sustainable. There will often be cases where evidence does not exist, as Web sustainability is an evolving field; where metrics data cannot identify the most sustainable option, a common-sense approach (considering the variables that can impact people and the planet) is the best method.

Every body of work will have its own approach to this, and a progress-over-perfection methodology is preferred (as doing something is ultimately better than waiting for an ideal fix). As a general consideration for those creating documents relied upon by large numbers of individuals, the PPP model, which accounts not just for emissions and environmental impact but also for human factors and issues surrounding good governance, is an ideal template to work from. In the context of the Internet, this accounts for everyday (but important) factors such as performance, accessibility, privacy by design, security, and reducing waste, all of which have an environmental impact as a by-product.

It's also important to note that different bodies will have their own sustainability challenges. For example, those working on native APIs are more likely to encounter hardware resource consumption (energy usage), whereas language standards will be more concerned with implementers, accessibility, and improving developer workflows. As such, it is worth coordinating with other groups that have aligned goals, discussing their sustainability approaches to align work and help improve theirs while accounting for different variables.

Sustainability guidance can be presented within work in many ways. One option is to integrate it within existing guidance or specifications (amending the content). Another is to provide notes or dedicated in-page sections covering sustainability. Or, to gently guide individuals into the subject, a dedicated supplement could be created and then adapted or merged into the specification at the next major version (giving individuals time to adapt).

Evaluation Methodology

Evaluating the extent to which a website implements the Web Sustainability Guidelines (WSG) is a process involving several steps. The activities carried out within these steps are influenced by many aspects such as the type of website (e.g. static, dynamic, responsive, mobile, application, etc.); its size, complexity, and the technologies used to create the website (e.g. HTML, CSS, JS, PDF, etc.); how much knowledge the evaluators have about the process used to design and develop the website; and the main purpose for the evaluation (e.g. to issue a sustainability statement, to plan a redesign process, to perform research, etc.).

This methodology describes the steps that are common to a comprehensive evaluation of the extent to which websites and applications implement WSG 1.0. It highlights considerations for evaluators to apply these steps in the context of a particular product or service. It does not replace the need for quality assurance measures that are implemented throughout the design, development, and maintenance of websites or applications to ensure their sustainability conformance. Following this methodology will help evaluators apply good practice, avoid commonly made mistakes, and achieve more comparable results. However, in the majority of situations, using this methodology alone, without additional quality assurance measures, does not directly result in a sustainable product or service that meets the WSG 1.0 success criteria and guidelines.

This methodology does not in any way add to or change the requirements defined by the normative WSG 1.0 specification, nor does it provide instructions on feature-by-feature evaluation of web content. The methodology can be used in conjunction with techniques and examples for meeting WSG 1.0 success criteria, such as the techniques documented within this STAR supplement, but does not require this or any other specific set of techniques.

This methodology is intended for people who are experienced in evaluating Web sustainability using WSG 1.0 and its supporting resources. It provides guidance on good practice in defining the evaluation scope, exploring the target website, selecting representative samples from websites where it is not feasible to evaluate all content, auditing the selected samples, reporting the evaluation findings, and, if necessary, implementing sustainable change. It does not specify particular web technologies, evaluation tools, web browsers, or other software to use for evaluation. It is also suitable for use in different evaluation contexts, including self-assessment and third-party evaluation.

Purposes for this Methodology

In many situations it is necessary to evaluate the sustainability of a website or application, for example before releasing, acquiring, or redesigning the product or service, and for monitoring the sustainability of a website or application over time. This methodology is designed for anyone who wants to follow a common approach for evaluating the compliance of websites to WSG 1.0.

Usage and Scope

This methodology is used to perform thorough evaluations of websites and applications using WSG 1.0. Before this, it may be useful to undertake a preliminary evaluation to identify obvious sustainability issues and to take a progress-over-perfection approach to tackling guidelines holistically (though note that such evaluations won't be as robust as a thorough evaluation).

Evaluators and Usage

Different expertise may be required to evaluate a website or application. As such, one of the roles below, or a combination of them at different stages of the project, may assist with understanding and evaluating the sustainability of a website or application.

  • Required Expertise Users of this methodology are assumed to have a solid understanding of how to evaluate web content using WSG 1.0, along with a reasonable understanding of sustainable web design and development, how web sustainability affects Internet infrastructure and business practices, and how sustainability issues affect both people and the planet. This includes a reasonable understanding of existing web technologies, and of evaluation techniques, tools, and methods to identify sources of emissions, measure them accurately, and remedy issues that occur. In particular, it is also assumed that users of this methodology are deeply familiar with the WSG 1.0 supplementary documents.
  • Combined Expertise (Optional) This methodology can be carried out by an individual evaluator with the required expertise, or a team of evaluators with collective expertise. Using the combined expertise of different evaluators may sometimes be necessary or beneficial when one evaluator alone does not possess all of the required expertise.
  • Involving Users (Optional) Involving people who are affected by real-world complications of sustainability may help identify digital issues that are not easily discovered by expert evaluation alone. While not required for using this methodology, it may sometimes be necessary for evaluators to involve real people from a wide variety of backgrounds during the evaluation process.
  • Evaluation Tools (Optional) This methodology is independent of any particular web sustainability evaluation tool, web browser (user-agent), or other software tool. While some sustainability checks in the WSGs are not fully machine-testable, evaluation tools can significantly assist evaluators during the evaluation process and contribute to more effective evaluation. For example, some web sustainability evaluation tools can scan entire products and services to help identify relevant pages for manual evaluation, as well as identify those issues that fall under testability criteria. Tools can also be used to assist during manual (human) evaluation of sustainability checks, including those that aren't machine-testable.

This methodology is designed for evaluating both websites and applications, including those for organizations, entities, persons, events, products, and services. Websites and applications can include publicly available or internal websites; applications, intranets, online shops, dedicated mobile websites, isolated sections of a larger website, or internationalization pages (on a subdomain, for example). This methodology can apply equally to any collection of materials, regardless of whether it is part of a larger project or a dedicated entity of its own.

Principle of Website Enclosure

When a target website or application is defined for evaluation, all pages, states, and functionality within the scope of this definition must be considered for evaluation. Excluding such aspects of a website from the scope of evaluation would likely conflict with the WSG 1.0 success criteria and conformance requirements or otherwise distort the evaluation results.

Example of Website Enclosure

  • index.html
  • news.html
  • services.html
  • portfolio.html
  • about.html

In the above example, if a personal portfolio website in its entirety is defined as the target for evaluation, then all of the depicted areas are within the scope of the evaluation. This includes any aggregated and embedded content such as images of work undertaken, assets that are considered a part of the website, interactive materials created, and maps to an office (if one exists), including when such parts originate from third-party sources. If only a specific website area is defined as the target for evaluation, then all the parts of this area are within the scope of the evaluation. One example could be to evaluate all of the portfolio items, as well as the individual web pages that are common to the work undertaken by the practitioner.

Particular Types of Websites

This methodology applies to a broad variety of website and application types. The following provides considerations for particular situations, noting that products and services may combine several aspects. Thus the following list is non-exclusive and non-exhaustive:

  • Small Websites: Small websites, being comprised of fewer pages, concentrate their sustainability impact on those pages. Some examples include personal portfolios, blogs, fan sites, and project websites. Small websites may be connected to a larger website, but treated as an individual and separable entity for evaluation.
  • Web Applications: Web applications are generally composed of dynamically generated content and functionality (see web page states). Web applications tend to be more complex and interactive. Some examples of web applications include webmail clients, document editors, and online shops. Web applications may be part of a larger website but can also constitute a website of their own in the context of this methodology. That is an individual and separable entity for evaluation. Due to the many possibilities of generating content and functionality in web applications, it is sometimes not feasible to exhaustively identify every possible web page, web page state, and functionality. Web applications will typically require more time and effort to evaluate, and they will typically need larger web page samples to reflect the different types of content, functionality, and processes.
  • Website with Separable Areas: In some cases websites may have clearly separable areas where using one area does not require or depend on using another area of the website. For example, an organization might provide an extranet for its employees only that is linked from the public website but is otherwise separate, or it might have sub-sites for individual departments of the organization that are each clearly distinct from one another. Such separable areas can be considered as individual websites for evaluation. Some websites also provide additional or different content and functionality depending on the user (typically after a log-in). This additional content and functionality is generally part of the essential purpose and functionality of the website and is thus not considered to be a separable website area.
  • Website in Multiple Versions: Some websites are available in multiple versions that are independent of one another in use, that is, using one version does not require or depend on using another version of the website. For example, a website may have versions of a website in different languages that meet this characteristic. Usually, each such website version has a different set of URIs. Such website versions can be considered as individual websites for evaluation. Websites using responsive design techniques (i.e. adapting the presentation according to user hardware, software, and preferences) as opposed to redirecting the user to a different location are not considered to be independent website versions.

Note: Responsive design techniques adjust the order, flow, and sometimes behavior of the content to best suit the device on which it is used. For example, to adjust the content and functionality according to the size of the viewport, screen resolution, orientation of the screen, and other aspects of a mobile device and the context in which it is being used. In this methodology, such changes to the content, functionality, appearance, and behavior are not considered to be independent website versions but rather web page states that need to be included in the evaluation scope. As such, considerations for mobile devices, operating systems, and assistive technologies need to be made for websites using responsive design techniques during the evaluation process.

Situations and Contexts

This methodology is designed to be flexible to facilitate its applicability in different situations and contexts. The following considerations apply to particular situations and contexts for an evaluation:

  • Self-Assessment: In-house evaluators and evaluators who are part of the development process have the benefit of internal access to the team, environments, documentation, and codebases.
  • Third-Parties: Independent external evaluators will have less access to internal processes and information, but can come with expertise and experience that may not exist within internal teams.
  • Evaluating During Development: While evaluation must take place during both ideation and implementation, it's important to be aware that such evaluations can become obsolete; constant reevaluation should occur during the development process to keep results current.
  • Third-Party Content: Third-party content is not under the control of the website or web service providers. WSG 1.0 provides specific considerations for the conformance of such types of content. In such cases, evaluators will need to determine whether such content is regularly monitored and repaired.
  • Re-Running Website Evaluation: Website evaluation, according to this methodology, may be re-run after a short period; for example, when issues are identified and repaired by the website owner or website developer, or periodically to monitor progress.
  • Large-Scale Evaluation: Mass evaluation of many websites or applications, for example for national or international surveys, is typically carried out primarily using automated evaluation tools, with relatively few web pages undergoing full manual inspection. Such evaluations do not usually provide the qualitative depth of conformance review per product or service for which this methodology is designed.

Evaluation Procedure

This section describes the stages and activities of an evaluation procedure. The stages are not necessarily sequential; the exact sequence of the activities carried out during the evaluation stages depends on the type of application or website, the purpose of the evaluation, and the process used by the evaluator. Some of the activities can overlap or may be carried out in parallel.

There are five sequential steps defined in this section:

  1. Define the Evaluation Scope;
  2. Explore the Target Website;
  3. Test or Sample the Website;
  4. Audit the Selected Pages;
  5. Report the Evaluation Findings.

Evaluators should proceed from one step to the next, and may return to any preceding step in the process as new information is revealed to them during the evaluation process.

Internal Knowledge

Due to the requirements of sustainability reporting and the success criteria contained within the WSGs, conforming to certain aspects of the specification may require access to internal knowledge of the website or application, or of the business behind the product or service. If such access is possible, evaluators should use this knowledge in accordance with any policies in place regarding data use. If access is not available or cannot be provided, publicly available data may serve as a general, comparable estimate, though the accuracy of the results may be in question. In cases where no replacement data can be found, evaluators should aim to identify other methods of meeting the success criteria, or should identify the lack of knowledge as a failure point (until such time as the organization can disclose the information publicly).

Machine Testability

While many of the success criteria in the WSGs can be machine-tested (automated) without human intervention, others cannot be verified without manual examination. In such cases, rather than sampling a subset of the website or application (which could bias the results by missing important pages), building a semi-automated structure around human testing, with tooling such as a wizard interface to assist with a large volume of pages, may help reduce pinch points in the evaluation process.
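One way to sketch such a semi-automated structure (assuming Python; the check names and the automated function below are hypothetical, not drawn from WSG 1.0) is to run machine-testable checks unattended and queue the rest for a human evaluator, for example via a wizard interface:

```python
def page_weight_under_budget(page):
    # Hypothetical machine-testable check: 500 KB transfer budget.
    return page.get("bytes", 0) <= 500_000

CHECKS = [
    {"id": "transfer-weight", "automated": page_weight_under_budget},
    {"id": "content-is-relevant", "automated": None},  # needs human judgement
]

def run_checks(page):
    """Run automated checks; queue the rest for manual (wizard) review."""
    results, manual_queue = {}, []
    for check in CHECKS:
        if check["automated"] is not None:
            results[check["id"]] = check["automated"](page)
        else:
            manual_queue.append(check["id"])
    return results, manual_queue

results, pending = run_checks({"url": "https://example.org/", "bytes": 120_000})
```

Structuring the manual queue this way lets tooling present one human-testable question at a time, rather than forcing evaluators to sample pages and risk missing important ones.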

Step 1: Define the Evaluation Scope

During this step the overall scope of the evaluation is defined. It is an important step, as an initial exploration of the target application or website may be necessary to better understand the specifics of the product or service (and to ensure common expectations) and the required evaluation.

  1. Define the Scope of the Website: During this step the web pages (and states) that are in scope of the evaluation are defined. To avoid later mismatches of expectations between the evaluator, commissioner, and readers of the resulting evaluation report, it is important to define unambiguously what aspects of the website or application are within its scope. Documentation of URIs is recommended where possible, listing aspects of the website that support its identification, such as third-party content and services, or content that may have a different web address but is still considered part of the target website.
  2. Define the Target: Because the WSGs favor a progress-over-perfection approach to conformance, they do not use strict levels of compliance for meeting the guidelines. As such, during the evaluation process, evaluators should set defined targets at a level they believe is achievable between the client and the workers involved in making the website or application sustainable.
  3. Define a Sustainability Support Baseline: Depending on the type of website or application being developed, it may not always be possible to implement every aspect of the WSGs (or they may not always be applicable to every situation). WSG 1.0 does not pre-define which combinations of features and technologies must be supported, as this depends on the particular context of the website or application, the web technologies that are used to create the content, and the user agents currently available. During this step the evaluator, in consultation with the website owner (and/or commissioner) and developer, determines the minimum level of sustainability implementations or improvements to be established over a given period (or as a starting point for first evaluations). It should be noted that setting a baseline doesn't imply that such expectations cannot be exceeded with new goals and further targets to reach.
  4. Define Additional Evaluation Requirements (Optional): An evaluator or commissioner may be interested in additional information beyond what is needed to determine the extent of compliance to WSG 1.0. This may include submitted reports of issues, analysis of use-cases or interactions, descriptions of solutions beyond the scope of the evaluation, or reporting templates. Such additional requirements should be clarified early on, documented, and reported as such in any resulting report that is produced.
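The scoping decisions above can be captured in a simple record so they remain unambiguous and reportable later. The sketch below assumes Python; the field names and example values are purely illustrative, not mandated by WSG 1.0.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationScope:
    base_uris: list                      # URIs that unambiguously define the website
    third_party_content: list = field(default_factory=list)
    target: str = ""                     # agreed progress-over-perfection target
    baseline: str = ""                   # minimum sustainability support baseline
    additional_requirements: list = field(default_factory=list)

# Example scope for a hypothetical evaluation.
scope = EvaluationScope(
    base_uris=["https://example.org/"],
    third_party_content=["https://cdn.example.net/"],
    target="Address all applicable UX guidelines within six months",
    baseline="No page over 500 KB of transfer per load",
)
```

Recording the scope as structured data also makes it easy to carry forward into the evaluation report produced in Step 5.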

Step 2: Explore the Target Website

During this step the evaluator explores the target website or application to be evaluated, to develop an initial understanding of the product or service and its use, purpose, and functionality. Much of this will not be immediately apparent to evaluators, in particular to those from outside the development team. In some cases it is also not possible to exhaustively identify and list all functionality, types of web pages, and technologies used to realize the website and its applications. The initial exploration carried out in this step is typically refined in the later steps as the evaluator learns more about the target website. Involvement of website owners and website developers can help evaluators make their explorations more effective.

Carrying out initial cursory checks during this step helps identify web pages that are relevant for more detailed evaluation later on. For example, an evaluator may identify web pages that seem to be lacking sustainable features and note them down for more detailed evaluation later on.

To carry out this step it is also critical that the evaluator has access to all the relevant parts of the website. For example, it may be necessary to create accounts or otherwise provide access to restricted areas of a website that are part of the evaluation. Granting evaluators such internal access may require particular security and privacy precautions.

  1. Identify Web Pages of the Website: Explore the target website to identify its web pages, which will also include any states in web applications. The outcome of this step is a list of all web pages of the target website or application.
  2. Identify the Variety of Web Page Types: Web pages and web page states with varying styles, layouts, structures, and functionality often have varying support for sustainability. They are often generated by different templates and scripts, or authored by different people. They may appear differently, behave differently, and contain different content depending on the particular website user and context. During this step the evaluator explores the target website to identify the different types of web pages and web page states. The outcome of this step is a list of noteworthy variables that may have a bearing on the results of any sustainability auditing, which can be used when evaluating web pages.
  3. Identify Web Technologies Relied Upon: During this step, the web technologies relied upon within the website's or application's technology stack should be identified. This includes base web technologies such as HTML and CSS, auxiliary web technologies such as JavaScript and WAI-ARIA, as well as specific web technologies such as SVG and PDF. The outcome of this step is a list of technologies that may contain sustainability issues of their own, or contribute towards overarching goals. Where possible, any content management system, libraries, components, or frameworks should also be mentioned, as these can have sustainability implications that may need to be addressed during the evaluation process.
  4. Identify Other Relevant Web Pages: Some web pages and web page states include features that are specifically relevant to improving the sustainability of a website or application. The outcome of this step is a list (with relevant explanations and information) regarding how such implementations benefit a product or service's visitors and users.
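As a rough illustration of the exploration above, the sketch below (assuming Python and its standard-library `html.parser`) collects linked pages and notes some technologies a page relies upon. A real crawl would fetch pages over HTTP, follow links recursively, and account for web page states as well.

```python
from html.parser import HTMLParser

class PageExplorer(HTMLParser):
    """Collect outgoing links and note technologies a page relies upon."""

    def __init__(self):
        super().__init__()
        self.links, self.technologies = set(), set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.add(attrs["href"])
            if attrs["href"].endswith(".pdf"):
                self.technologies.add("PDF")
        elif tag == "script":
            self.technologies.add("JavaScript")
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.technologies.add("CSS")
        elif tag == "svg":
            self.technologies.add("SVG")

explorer = PageExplorer()
explorer.feed('<link rel="stylesheet" href="a.css">'
              '<script src="app.js"></script>'
              '<a href="report.pdf">Report</a><a href="/news.html">News</a>')
```

The resulting `explorer.links` and `explorer.technologies` sets feed directly into the page list (step 1) and technology list (step 3) described above.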

Step 3: Test or Sample the Website

In cases where it is feasible to evaluate all web pages and web page states of a website (highly recommended for websites under 1,000 pages), the "selected sample" in the remaining steps of this evaluation process is the entire website. In some cases, such as for small websites, the sampling procedure below may likewise result in selecting all web pages and web page states of a website.

In cases where over 1,000 pages exist and the number of pages exceeds the ability to physically test every instance (or in cases where increased complexity reduces the capability to audit pages effectively), the evaluator selects a sample of web pages and web page states that is representative of the target website or application to be evaluated. The purpose of this selection is to ensure that the evaluation results reflect the sustainability of the product or service with reasonable confidence.

  • Each sample must be structured to include all assets and content relating to the selected web pages or web page states.
  • The sample should be randomly chosen (using a blind method such as script-based selection) to avoid bias, and should comprise 10-20% of the total website or application structure (for example, a website comprising 2,000 pages could test 400 pages).
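The blind, script-based selection suggested above can be sketched as follows (assuming Python; the `select_sample` helper and the page URLs are illustrative):

```python
import math
import random

def select_sample(pages, fraction=0.2, seed=None):
    """Randomly select a representative sample of web pages.

    fraction should fall within the suggested 10-20% range; a fixed
    seed makes the otherwise blind selection reproducible for reports.
    """
    if not 0.1 <= fraction <= 0.2:
        raise ValueError("fraction should be between 0.10 and 0.20")
    rng = random.Random(seed)
    sample_size = math.ceil(len(pages) * fraction)
    return rng.sample(pages, sample_size)

# A site of 2,000 pages sampled at 20% yields 400 pages.
site = [f"https://example.org/page-{i}" for i in range(2000)]
sample = select_sample(site, fraction=0.2, seed=42)
```

Recording the seed alongside the sample makes the selection verifiable by a third party without reintroducing evaluator bias.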

Step 4: Audit the Selected Pages

During this step the evaluator audits (performs a detailed evaluation of) all of the web pages and web page states selected. Carrying out this step requires the expertise described in section Required Expertise.

  1. Check All Initial Web Pages: For each web page and web page state, check its conformance with each of the success criteria in WSG 1.0 defined in Step 1. This includes all components of the web page or web page state, including any functionality and interactions that may affect the sustainability or state of the web page. Remember to include third-party content where necessary, and to account for infrastructure and internal access requirements when processing success criteria that require identifying sustainability solutions.
  2. Non-Interfering Verification: Ensure that any existing implementations or solutions being utilized are non-interfering. This requires that remedies do not cause compatibility issues or problems with other critical aspects such as accessibility that may otherwise prevent the page from loading or being able to be understood.

There are typically several ways to determine whether WSG Success Criteria have been met. One example is the set of (non-normative) Techniques for WSG provided within STAR, which documents ways of meeting particular WSG Success Criteria using testable statements that will be either true or false when applied to specific web content. While evaluators can use such documented guidance to check whether particular web content meets or fails to meet WSG Success Criteria (and include this within reports), it is not required, and evaluators may use other approaches to evaluate conformance.
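A testable statement can be expressed as a predicate that is simply true or false for given web content. The hypothetical criterion below (images declare explicit dimensions, avoiding layout shifts and wasted rendering work) is an illustrative sketch, not an official STAR technique.

```python
import re

def images_declare_dimensions(html):
    """True if every <img> tag carries width and height attributes."""
    for img in re.findall(r"<img\b[^>]*>", html, flags=re.IGNORECASE):
        if "width=" not in img or "height=" not in img:
            return False
    return True

passes = images_declare_dimensions('<img src="a.png" width="100" height="80">')
fails = images_declare_dimensions('<img src="b.png">')
```

Because each statement yields an unambiguous true/false outcome, results can be aggregated per page and reported consistently across evaluators.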

Step 5: Report the Evaluation Findings

While evaluation findings are reported at the end of the process, documenting them is carried out throughout the evaluation process to ensure verifiable outcomes. The documentation typically has varying levels of confidentiality. For example, documenting the specific methods used to evaluate individual requirements might remain limited to the evaluator while reports about the outcomes from these checks are typically made available to the evaluation commissioner. Website or application owners might further choose to make public statements about the outcomes from evaluation according to this methodology.

  1. Document the Outcomes of Each Step: Documenting the outcomes for each of the previous steps (including all sub-sections) is essential to ensure transparency of the evaluation process, replicability of the evaluation results, and justification for any statements made based on this evaluation. Evaluation reports should include the names of evaluators, the dates of the evaluation, the scope and sustainability requirements, the pages and states tested along with any technologies identified and relied upon, plus the results of sustainability issues identified and any key factors that have contributed to meeting (or failing to meet) targets in the WSG success criteria. In certain instances, it may be useful to offer a list of every failure occurrence for every web page and web page state for transparency reporting, along with the causes and repair suggestions to remedy the failures.
  2. Record the Evaluation Specifics (Optional): While optional, it is good practice for evaluators to keep a record of the evaluation specifics, for example to support conflict resolution in the case of dispute. This includes archiving the web pages and web page states audited, and recording the evaluation tools, web browsers, other software, and methods used to audit them. A table or grid may be useful to record what was used for the different web pages and web page states audited. When such records contain sensitive information, they need particular security and privacy precautions. Records could include copies of files, assets, and resources of the web page states, screenshots (screen grabs), descriptions of the path to locate a page or state during a process (or information to use when testing interactive components), names and versions of tooling, or methods, procedures, and techniques used to evaluate conformance to WSG 1.0.
  3. Provide an Evaluation Statement (Optional): In the majority of situations, using this methodology alone does not result in WSG 1.0 conformance claims for the target websites. Website owners may wish to make public statements about the outcomes from evaluations following this methodology. If they wish to do so, they should follow the advisory guidance provided in the WSG conformance section in order to provide a statement relating to the level of sustainability they have achieved.

WSG Techniques

Techniques Overview

WSG 1.0 guidelines and success criteria are designed to be broadly applicable to current and future web technologies, including dynamic applications, mobile, digital television, etc.

STAR techniques, which include code examples, resources, and tests, guide web content authors and evaluators on meeting WSG success criteria and guidelines. Techniques are updated periodically to cover additional current best practices and changes in technologies and tools.

The three types of techniques for STAR 1.0 are explained below:

Techniques are Informative

Techniques are informative, meaning they are not required. The basis for determining conformance to WSG is the success criteria from guidelines within the specification and not the techniques themselves. We also caution against requiring specific techniques or tests mentioned within this document. The only thing that should be required is meeting the WSG success criteria.

Technique Types

Sufficient techniques are reliable ways to meet the success criteria.

  • From an author or evaluator's perspective: If you use sufficient techniques for a given criterion correctly and it is sustainable within the context of its wider application and usage, you can be confident that you met the success criterion.

Advisory techniques are suggested ways to improve sustainability. They are often very helpful and may be a significant way of reducing emissions or meeting primary PPP objectives.

Advisory techniques are not designated as sufficient techniques for various reasons such as:

  • They may not be sufficient to meet the full requirements of the success criteria;
  • They may be based on technology that is not yet stable;
  • They may not be testable.

Authors are encouraged to apply all of the techniques where appropriate to best address the widest range of sustainability benefits.

Failures are things that cause sustainability issues and fail specific success criteria. The documented failures are useful for:

  • Authors to know what to avoid,
  • Evaluators to use for checking if the content does not meet WSG success criteria.

Content that has a failure does not meet WSG success criteria unless an alternate version is provided without the failure.

Scope and Limitations

In addition to the techniques, there are other ways to meet WSG success criteria. STAR techniques are not comprehensive and may not cover newer technologies and situations. Web content does not have to use STAR techniques to conform to WSG. Content authors can develop different techniques. For example, an author could develop a technique for an existing, or other new technology. Other organizations may develop sets of techniques to meet WSG success criteria. Any techniques can be sufficient if they satisfy the success criterion.

Publication of techniques for a specific technology does not imply that the technology can be used in all situations to create content that meets WSG success criteria and conformance requirements. Developers need to be aware of the limitations of specific technologies and provide content in a way that is sustainable on multiple levels.

The Sustainable Web Design Community Group (SWD-CG) encourages people to submit new techniques so that they can be considered for inclusion in updates of the STAR 1.0 document. Please submit techniques for consideration using GitHub issues.

Testing Techniques

Each technique has tests that help:

  • Authors verify that they implemented the technique properly, and
  • Evaluators determine if web content meets the technique.

The tests are only for a technique; they are not tests for conformance to WSG success criteria.

  • Failing a technique test does not necessarily mean failing WSG, because the techniques are discrete (that is, they address one specific point) and they are not required.
  • Content can meet WSG success criteria in different ways other than STAR published sufficient techniques.

While the techniques are useful for evaluating the content, evaluations must go beyond just checking the sufficient technique tests to evaluate how content conforms to WSG success criteria (considerations such as accessibility, privacy, security, etc).

Failures are particularly useful for evaluations because they do indicate non-conformance (unless an alternate version is provided without the failure).

Using the Techniques

Techniques for WSG are not intended to be used as a standalone document. Instead, it is expected that content authors will usually use our quick reference to read the WSG success criteria and follow links from there to specific guidelines within the specification (including examples and techniques).

Some techniques may describe how to provide alternate ways for visitors to get content. Alternative content, files, and formats must also conform to WSG and meet relevant success criteria, thereby becoming sustainable.

The code examples in the techniques are intended to demonstrate only the specific point discussed in the technique. They might not demonstrate best practices for other aspects of sustainability, accessibility, usability, or coding not related to the technique. They are not intended to be copied and used as the basis for developing web content.

Many techniques point to "working examples" that are more robust and may be appropriate for copying and integrating into web content.

User-Experience Design

Each of the below can be shown or hidden by clicking on the technique you wish to display.

  1. UX01-1: Produce a List of Variables To Monitor for Sustainability Impacts

    Applicability

    This technique is Advisory to meet Success Criteria within 2.1 Undertake Systemic Impacts Mapping.

    Description

    Create a maintainable list of different variables that may impact the sustainability of a product or service over time. By having this list in place, everyone in the creation process can closely monitor the application or website against each variable to determine if a PPP variable present on the list will require resolution before, during, or after a product or service is launched.

    This list should be created during ideation if possible but can also be machine-generated from a pre-determined list of known PPP factors using evidence and research. The content of this material could be further tested through automation if such variables can be aligned with product capabilities; however, at a bare minimum, this list must be publicly visible (such as within a sustainability statement) and utilized in-house to enact sustainable change.
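As a minimal sketch of how such a list could be machine-generated, the snippet below merges a pre-determined set of known PPP factors with project-specific entries. The factor names and fields are illustrative assumptions, not a normative taxonomy:

```javascript
// Sketch: build a monitorable list of sustainability variables from a
// pre-determined set of known PPP (People, Planet, Prosperity) factors.
// The factor ids and categories below are illustrative, not normative.
const KNOWN_PPP_FACTORS = [
  { id: "hosting-energy", category: "planet", machineTestable: true },
  { id: "page-weight", category: "planet", machineTestable: true },
  { id: "accessibility", category: "people", machineTestable: false },
  { id: "third-party-cost", category: "prosperity", machineTestable: false },
];

// Merge project-specific variables with the known factors, de-duplicating
// by id so the list stays maintainable over time.
function buildVariableList(projectVariables) {
  const seen = new Map();
  for (const v of [...KNOWN_PPP_FACTORS, ...projectVariables]) {
    seen.set(v.id, v); // later (project-specific) entries override defaults
  }
  return [...seen.values()];
}
```

The resulting array could then be published within a sustainability statement and re-checked at each release.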

    Examples

    1. The Variables of Website Sustainability provides a comprehensive map of the various carbon traps that exist on the Internet. From business and external elements to development and rendering, it provides a visual representation of the energy usage that can trigger emissions.
    2. The (Proposed) Technology Carbon Standard is an approach to classify an organization's technology carbon footprint. It's based on the Greenhouse Gas Protocol (GHG) and aligns with Scope 1, 2, and 3 emissions.

    Tests

    Procedure

    1. Identify all machine-generated variables (hardware / software, etc).
    2. Identify all human-generated variables (business / user, etc).
    3. Create a checklist to be published within sustainability processes.
    4. Circulate among employees as part of the testing workflow.
    5. Check that all variables have been accounted for before publication.
    6. Report the findings within your sustainability statement.

    Expected Results

    1. All checks above are true.
  2. UX02-1: Use Quantitative Research To Measure the Needs of Visitors and Affected Communities

    Applicability

    This technique is Advisory to meet Success Criteria within 2.2 Assess and Research Visitor Needs.

    Description

    Provide a mechanism for application and website owners to better curate their products and services around the needs of their visitors and of those affected by what has been created, and in doing so reduce the PPP burden that can impact the sustainability of the website, especially around social, people-centered (and user-experience) impacts.

    It should be noted that for machine automation, only quantitative feedback can be processed (and is therefore useful), and all information gathering must be done sustainably and ethically. It should also be noted that because information is being requested, internal access may be required to formally identify certain characteristics necessary for processing.
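The quantitative side of this technique lends itself to simple aggregation. The sketch below assumes a numeric sample set (for example, transfer sizes per visit collected via analytics or the browser's Resource Timing API) and an illustrative budget threshold, and reports how much traffic exceeds budget:

```javascript
// Sketch: summarize quantitative samples so outliers can be flagged for
// human follow-up. The threshold passed in is an illustrative budget,
// not a WSG-mandated value.
function summarize(samples, threshold) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const overBudget = samples.filter((s) => s > threshold).length;
  return { mean, overBudget, share: overBudget / samples.length };
}
```

For example, `summarize([100, 200, 300, 400], 250)` reports a mean of 250 with half the samples over budget, a signal worth validating with qualitative follow-up.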

    Examples

    1. Quantitative data analysis allows you to identify how your website or application is doing, find patterns in visitor behavior, and make decisions that can benefit your visitors and optimize your product or service to reduce emissions over time.
    2. Feedback forms are a great way to get qualitative measurements to reinforce the numbers to ensure that any changes you are considering are wanted, satisfy the needs of visitors, and meet the sustainability challenge.

    Tests

    Procedure

    1. Check to see if deliverables require internal access (if so, permission is given).
    2. Gather metrics data through browser APIs, analytics, or third-party software.
    3. Identify resolvable variables that are causing PPP issues.
    4. Once validated as accurate, test further using non-machine testable qualitative methods such as A / B measurement or feedback forms.

    Expected Results

    1. Checks #1, #2, and #3 above are true.
  3. UX02-2: Identify Visitor Constraints Using Browser Detection and the User-Agent

    Applicability

    This technique is Advisory to meet Success Criteria within 2.2 Assess and Research Visitor Needs.

    Description

    Provide a mechanism so that websites and applications can offer contingency processes for visitors who have constraints such as an older device, an out-of-date operating system, an unusual or out-of-date browser, or a slow Internet connection. Other contingencies exist (such as geo-blocking or mobile data costs) and these can also be taken into account if detection is possible. Each of these issues can burden the user-experience and cause added conflict along the pipeline in terms of emissions.

    If detection is not available (such as in cases where the Internet is unavailable), non-digital investigation should be conducted where possible (such as through the use of mail or telephone feedback). Once the constraint has been detected, it will often be up to the developer to create a solution that involves compatibility features or reducing the load on the visitor's device to increase ease of access. If there are questions regarding the potential compatibility or availability of services due to a certain configuration, seek the manufacturer's advice for further guidance.
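As a rough sketch of such detection, the function below classifies a User-Agent string against a couple of constraint heuristics. User-Agent sniffing is fragile and the version cut-offs here are illustrative assumptions, so treat the result as a hint rather than a guarantee:

```javascript
// Sketch: coarse constraint detection from a User-Agent string.
// The patterns and version cut-offs are illustrative assumptions.
function detectConstraints(userAgent) {
  const constraints = [];
  // Internet Explorer (old and new token styles) flags a legacy browser.
  if (/MSIE (\d+)|Trident\/.*rv:(\d+)/.test(userAgent)) {
    constraints.push("legacy-browser");
  }
  // An old Android major version suggests a constrained device / OS.
  const android = /Android (\d+)/.exec(userAgent);
  if (android && Number(android[1]) < 8) constraints.push("old-os");
  return constraints;
}
```

In a live audit, these hints would be cross-checked against feature detection (and, where supported, User-Agent Client Hints) before a contingency experience is served.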

    Examples

    1. Using the User-Agent string to identify visitor characteristics that might impact their experience when using your product or service, such as an issue of compatibility or performance.
    2. Certain political situations may result in the inability for visitors to access your product or service hassle-free such as geo-blocking restrictions, the cost of bandwidth, or government-enacted firewalls, filters, or authentication requirements.

    Tests

    Procedure

    1. Check that all visitor constraints have been detected.
    2. Ensure that a suitable solution is in place to handle each such event.
    3. Check other contingencies that may affect the availability of your product or service.
    4. Check if a solution can be offered for contingencies, otherwise, try working with partner groups to find solutions.
    5. Check that visitors can trigger the constrained environment manually.

    Expected Results

    1. All checks above are true.
  4. UX02-3: Test Against Specific Disability Profiles for More Calibrated Accessibility

    Applicability

    This technique is Advisory to meet Success Criteria within 2.2 Assess and Research Visitor Needs.

    Description

    Go beyond the remit of WCAG and accessibility by default by encouraging designers and developers of products to test their creations against a range of specific named types of disabilities, both to better understand the conditions and to test more structurally and sustainably for the unique issues each disability brings to technology in terms of adaptation.

    It is strongly encouraged that teams work with individuals with disabilities when attempting this task, as they will have the lived experience to help you better adapt your products and services to their needs. If this isn't possible, or you wish to theorize against certain pre-built profiles, you can refer to established medical texts to identify potential symptoms, map these against the issues they may cause with technology, and build solutions and / or use simulators (for example, for color blindness) to help test for issues throughout the creation process.
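One machine-testable check within such a profile is color contrast, which is where low-vision and color-vision issues often surface. The sketch below computes the WCAG 2.x contrast ratio using the standard relative luminance formula; the sample colors in the test are illustrative:

```javascript
// Sketch: WCAG 2.x contrast ratio between two sRGB colors, usable as
// one automated check within a disability-profile test suite.
function luminance([r, g, b]) {
  const [rs, gs, bs] = [r, g, b].map((c) => {
    const s = c / 255; // linearize each sRGB channel
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * rs + 0.7152 * gs + 0.0722 * bs;
}

function contrastRatio(fg, bg) {
  const [l1, l2] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05); // lighter over darker
}
```

Black on white yields the maximum ratio of 21:1; WCAG AA requires at least 4.5:1 for normal-size text.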

    Examples

    1. The use of a color-blind filter such as the one built into browser DevTools or another third-party solution potentially helps to identify contrast issues that might not otherwise be detected within an existing website.

    Tests

    Procedure

    1. Identify visitors or individuals with disabilities who can help you become more accessible.
    2. Create testable profiles from symptom / issue lists that cause issues.
    3. Check that potential solutions don't interfere with existing functionality.
    4. Create and implement newly calibrated accessibility features.
    5. Check that the features work as implemented against all existing functionality.
    6. Report the findings within your accessibility statement.

    Expected Results

    1. All checks above are true.
  5. UX03-1: Use Third-Party APIs To Measure Any Passive External Impact

    Applicability

    This technique is Advisory to meet Success Criteria within 2.3 Research Non-Visitor's Needs.

    Description

    Utilize the tooling and resources provided by external parties to help you identify any efficiency and sustainability savings when having to use such services, or when non-digital forces come into play, such as if your product or service involves the physical delivery of goods.

    The PPP impact of such forces can be difficult to track, especially as third parties may omit data or not provide a complete picture of their scope emissions. Therefore, care must be taken when choosing providers from the outset, and consideration must also be given to the impact of using the API to gather such data together.
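Where a third-party API is incomplete or unavailable, a local estimate can act as a fallback. The sketch below multiplies tonne-kilometres by a per-mode emission factor; the factors shown are illustrative placeholders and should be replaced with audited figures from your chosen provider:

```javascript
// Sketch: fallback logistics emissions estimate when third-party data
// is missing. Factors are kg CO2e per tonne-kilometre and are
// illustrative assumptions only; real factors vary by mode and provider.
const FACTORS_KG_PER_TONNE_KM = { road: 0.105, rail: 0.027, air: 1.13 };

function estimateShippingCO2e(weightKg, distanceKm, mode) {
  const factor = FACTORS_KG_PER_TONNE_KM[mode];
  if (factor === undefined) throw new Error(`unknown transport mode: ${mode}`);
  return (weightKg / 1000) * distanceKm * factor; // tonne-km x factor
}
```

For example, one tonne shipped 100 km by road under these assumed factors comes to roughly 10.5 kg CO2e.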

    Examples

    1. Using third-party tooling you can calculate the carbon emissions from logistics and delivery of physical shipping which can be integrated into your calculations. They also have a public API you can use as well.
    2. Non-users who are affected by a service can be identified by technical support teams and responses required are added to a list for the developers to machine test for compliance at the next release version.

    Tests

    Procedure

    1. Identify providers who will offer transparent PPP measurements.
    2. Check the impact of utilization of third-party APIs (if possible).
    3. Check the impact of non-users or passive users affected by your service.
    4. Check that impacted visitors are accounted for in compliance checks.
    5. Report the findings within your sustainability statement.

    Expected Results

    1. All checks above are true.
  6. UX06-1: Measure Interaction Cost To Reach Every Page in the Information Architecture

    Applicability

    This technique is Advisory to meet Success Criteria within 2.6 Create a Lightweight Experience by Default.

    Description

    Reduce the cognitive load for visitors between the initial visit to a website or application and reaching their final destination where the information they are seeking is located. The path to locating such information can be routed through multiple interactions such as a search mechanism, hyperlinks, form controls (where appropriate), and progressive disclosure features (reducing the complexity involved in reaching a destination will reduce the rendering load leading to sustainability benefits).

    While there is no hard and fast rule regarding the number of clicks required to meet a visitor's expectations, clear way-finding helps (breadcrumbs can be machine-identified, as can landing page regions that guide visitors along the path). It is therefore a sensible precaution to ensure that the steps required are well documented to avoid overwhelming a visitor. If required, click-through testing can be measured to identify bottlenecks in complex applications.
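Interaction cost can be approximated as the minimum number of clicks from the home page to each page. The sketch below runs a breadth-first search over a link graph, assumed here to be a plain object extracted from a crawl or sitemap:

```javascript
// Sketch: minimum click depth from a start page to every reachable page,
// via breadth-first search over the link graph.
function clickDepths(links, start) {
  // links: { page: [linkedPage, ...] }
  const depth = { [start]: 0 };
  const queue = [start];
  while (queue.length) {
    const page = queue.shift();
    for (const next of links[page] || []) {
      if (!(next in depth)) {
        depth[next] = depth[page] + 1;
        queue.push(next);
      }
    }
  }
  return depth; // pages missing from the result are unreachable
}
```

Pages with unusually large depths (or missing entirely) are candidates for shortcuts, breadcrumbs, or restructured navigation.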

    Examples

    1. While the three-click rule is a false heuristic, employing common navigation patterns within a header or footer can help visitors traverse a product when multiple click-throughs are necessary to reach a destination faster.
    2. In complex websites, where multiple levels of documents and potentially thousands of pages may exist, having breadcrumbs so that visitors can track their progress through the layers of a system (and even having dropdown navigation menus through each layer) can help with navigating the information architecture of a product or service with fewer barriers to access.

    Tests

    Procedure

    1. Check that the information architecture has been designed with clarity in mind.
    2. Check the page for identifiable way-finding processes along a route such as breadcrumbs and landing regions / page patterns.
    3. Check that the number of choices for each action doesn't exceed a set amount.
    4. Check that every action is well documented and that the expected action occurs.
    5. Check the visitor flow path for achieving set goals and optimize with shortcuts if necessary.

    Expected Results

    1. All checks above are true.
  7. UX06-3: Check for Content Obscuring Materials That Occur Upon Page Load

    Applicability

    This technique is Advisory to meet Success Criteria within 2.6 Create a Lightweight Experience by Default.

    Description

    Ensure that visitors can read the materials produced for your website or application and click on interactive content (such as links or buttons) that would otherwise be impaired by other content whose positioning obscures the material and prevents the page from functioning correctly (wasted clicks, especially if JavaScript functions monitor them, can lead to emission costs, so it's best to avoid unnecessary actions).

    There will be occasions when content should be obscured to progressively disclose additional content; but for unintentional overlapping, deliberate obscuring for attention (which is unnecessary), and all cases where the visitor did not ask to be impacted, the visibility of the content should be assured (either manually or mechanically, by identifying object locations and dimensions and, where conflicts arise, issuing remedial advice).
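The mechanical half of this check reduces to rectangle intersection. The sketch below tests whether two bounding boxes, shaped like the output of getBoundingClientRect() in a real browser run, overlap:

```javascript
// Sketch: axis-aligned bounding-box overlap test for flagging elements
// that may obscure content. Rectangles use { x, y, width, height }.
function rectsOverlap(a, b) {
  return (
    a.x < b.x + b.width &&
    b.x < a.x + a.width &&
    a.y < b.y + b.height &&
    b.y < a.y + a.height
  );
}
```

An audit script would compare the boxes of overlays (banners, ads, consent managers) against those of interactive elements and report unexpected intersections.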

    Examples

    1. Advertising is often a cause of content that temporarily overlaps content with the intent of requesting clicks or attention-grabbing to monetize the material on the website or application. While such actions may be financially reasoned, they are rarely sustainably minded.
    2. Consent managers are an example of obscuring materials for the good as they ensure that visitors to the page authorize potentially invasive technologies before they become active. An example of this is the cookie consent banner that asks for their allowance before they can be used to store sensitive information on the visitor's machine.
    3. Scrollable content in which the scrollbar has been obscured, replaced (with a non-functional, obscured, or hidden replacement), or hidden entirely, making it impossible to detect where additional content may occur beyond the window (such as in overflowing boxes), will be unavailable to visitors, making its use of hardware to render a waste of resources.

    Tests

    Procedure

    1. Check the page's content to ensure none is clipped by other materials or an overflow issue (with a scrollbar lacking).
    2. Check that all links and buttons are functioning.
    3. Check that progressive disclosing content only appears when requested.
    4. Check that necessary disclosing materials that obscure content can be easily removed.

    Expected Results

    1. All checks above are true.
  8. UX06-4: Break Large Pieces of Content Down Using Progressive Disclosure

    Applicability

    This technique is Advisory to meet Success Criteria within 2.6 Create a Lightweight Experience by Default.

    Description

    Help increase the readability of content within a website or application by breaking down larger pieces of content into smaller constituent pieces. In the same way that a book is broken down into chapters, using progressive disclosure you can reveal sections when the reader is ready, avoiding information overload (the same can be done with long tasks).

    For machine testability, identification of such progressive disclosure markers can be found using the HTML details or dialog element, the CSS target selector (and its accompanying HTML ID attributes), the CSS checkbox hack, the use of JavaScript HashBangs, and also the use of state (and content) changes through various frameworks. Each of these can build a picture of how content displays on-screen during the user-experience, and some can load content only when it is requested which can reduce data transfer and rendering requests leading to sustainability improvements.
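A first-pass audit can count these markers with a simple scan. The sketch below uses regular expressions over raw HTML and CSS strings, which is a rough heuristic; a real audit would parse the DOM and stylesheet object model instead:

```javascript
// Sketch: count progressive-disclosure markers in HTML and CSS sources.
// Regex scanning is a heuristic first pass, not a full parser.
function disclosureMarkers(html, css) {
  return {
    details: (html.match(/<details\b/g) || []).length,
    dialog: (html.match(/<dialog\b/g) || []).length,
    targetSelectors: (css.match(/:target\b/g) || []).length,
  };
}
```

A non-zero count suggests the page already uses native disclosure mechanisms, which tend to be lighter than framework-driven state changes.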

    Examples

    1. Tabs can be a simple way in which you can showcase different pieces of information to an audience. By using a mechanism such as the CSS target selector you can reveal information based on its related HTML ID.
    2. Dropdown menus such as the one you will find on many websites disclose from a single link several additional roadmapped locations which are grouped children in the information architecture of the product or service.

    Tests

    Procedure

    1. Check that progressive disclosure is used appropriately with your content or tasks to perform.
    2. Check that progress can be made from one disclosure to another.
    3. Check that each disclosure element can be closed successfully.
    4. Check that the disclosure method you use is compatible with visitor's browsers.

    Expected Results

    1. All checks above are true.
  9. UX06-5: Remove Any Non-User Initiated Pop-Up or Modal Windows

    Applicability

    This technique is Advisory to meet Success Criteria within 2.6 Create a Lightweight Experience by Default.

    Description

    Reduce problematic friction within the user-experience and help reduce the number of dark patterns which might occur onLoad and during the interaction process as these can cause considerable sustainability issues as a result of wasted emissions. By ensuring that the visitor remains in control of the interface and that websites and applications work as expected, a more ethical and optimized product or service is likely to result.

    Machine testing can analyze JavaScript for the use of popup events, the appearance of "_blank" within hyperlinks or frames, or other functionality that may produce overlays. It's important to question the acceptability of such usage; for example, opening links in new windows may be acceptable for non-web formats like PDF, but otherwise it's best to advise against its use.
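A static scan along these lines might look like the sketch below. The regexes are simplified assumptions, and links that open PDFs are reported separately rather than flagged outright, reflecting the acceptability question above:

```javascript
// Sketch: static scan for popup triggers in HTML and JavaScript sources.
// The patterns are simplified; a production audit would use a parser.
function scanForPopups(html, js) {
  const blankLinks = html.match(/<a\b[^>]*target="_blank"[^>]*>/g) || [];
  return {
    windowOpenCalls: (js.match(/window\.open\s*\(/g) || []).length,
    blankLinks: blankLinks.length,
    // Links to PDFs may be an acceptable use of new windows.
    blankLinksToPdf: blankLinks.filter((a) => /\.pdf/i.test(a)).length,
  };
}
```

Any window.open call or non-PDF "_blank" link found this way would then be reviewed to confirm it is only ever triggered by an explicit visitor action.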

    Examples

    1. While not as common today, popup advertisements which appear in a new window or in a separate frame used to be a fixture of the early web and required heavy visitor management to close multiple banners that would trigger.

    Tests

    Procedure

    1. Check for CSS and JavaScript overlays that may occur onLoad and remove when identified.
    2. Check that any popup events can only be initiated with the visitor's consent.
    3. Check all hyperlinks for new window opening triggers (and action as appropriate).

    Expected Results

    1. All checks above are true.
  10. UX07-1: Decorative Design Elements Can Be Disabled if They Are Not Done So by Default

    Applicability

    This technique is Advisory to meet Success Criteria within 2.7 Avoid Unnecessary or an Overabundance of Assets.

    Description

    Provide a mechanism by which visitors can reduce their impact by choosing to have a more basic experience (the extent of which is up to the developer), downloading fewer external resources, scripts, and other assets to their machine to be processed and rendered. With this action, a website or application that is already optimized for sustainability can go one step further in providing a barebones format for those wishing to prioritize lowering their footprint over having added functionality.

    This action could be performed by defining what assets have been added to the product or service to enhance the experience but are not of critical importance (such as background images, or decorative flourishes that can be machine identified). Other forms of decoration can also be machine-identified such as CSS flourishes to content and images, background sounds, custom cursors, custom scrollbars, etc.
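One way to sketch such a mechanism is to partition the page's asset manifest into critical and decorative groups so a "lite" toggle can skip the latter. The manifest shape and the decorative flag below are illustrative assumptions:

```javascript
// Sketch: split an asset manifest into critical vs. decorative assets so
// a "lite mode" can skip the decorative group. The manifest format is an
// illustrative assumption, not a standard.
function partitionAssets(assets) {
  const critical = [];
  const decorative = [];
  for (const asset of assets) {
    (asset.decorative ? decorative : critical).push(asset.url);
  }
  return { critical, decorative };
}
```

A loader honoring the visitor's preference would fetch only the critical list when lite mode is enabled.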

    Examples

    1. Web browsers can change CSS cursors, and while there may be functional reasons to do so under certain circumstances, such as to indicate loading progress, zoom, or click-ability, in most circumstances, you should leave it at the system default as this is the expected behavior.
    2. CSS scrollbars can be customized or replaced, but this can be problematic for visitors. Changes in scroll speed or mechanics can impact readability and the way things render, and color changes can impact visibility and affect overflow regions within elements of the page. In such cases, the system default always functions better than a custom solution.

    Tests

    Procedure

    1. Check decorative images are clearly labeled using ARIA so assistive technologies can skip them.
    2. Check that CSS decoration can be disabled if it may interfere with readability.
    3. Check that custom scrollbars and cursors can be disabled as they may interfere with a visitor's ability to interact with the screen.
    4. Check that background sounds can be disabled and do not auto-load or auto-play.
    5. Check that decorative features can be disabled at the visitor's request.

    Expected Results

    1. All checks above are true.
  11. UX08-1: Making Your Navigation Menu Accessible and Well-Structured

    Applicability

    This technique is Advisory to meet Success Criteria within 2.8 Ensure Navigation and Way-Finding Are Well-Structured.

    Description

    Verify that your navigation menu is crafted in such a manner that it is both accessible to visitors and user-agents and functions correctly so that when implemented it doesn't lead to issues when attempting to browse the information architecture of a website or application. If the information architecture were to fail, this would lead to PPP failings as your visitors would no longer be able to use the product or service without encountering barriers to access and risk further emissions attempting to solve the issue.

    Machine testing the structure of a navigation menu will usually involve the nav, ul, or ol elements within the header of a product or service, which repeat across pages. Links should function as expected, click ratios (sizes) should be large enough on both desktop and mobile platforms, and any search functionality will also need to be tested to ensure that results are provided upon submission.
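The click-size portion of this check can be sketched as below, assuming anchor dimensions collected via getBoundingClientRect() during a live audit. The 44px minimum follows common touch-target guidance and is an assumption, not a WSG requirement:

```javascript
// Sketch: flag navigation anchors below a minimum click size.
// 44px is a common touch-target recommendation, used here as a default.
function undersizedTargets(anchors, minPx = 44) {
  // anchors: [{ href, width, height }, ...] from a live measurement pass
  return anchors.filter((a) => a.width < minPx || a.height < minPx);
}
```

Anchors flagged by this check would be enlarged (or given more padding) before re-testing on both mobile and desktop viewports.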

    Examples

    1. The hamburger design is often considered the stereotype of mobile functionality, a three-lined icon that represents a menu upon clicking it reveals a full-screen list of options. Employing good patterns is key to reducing confusion and abandonment from visitors.

    Tests

    Procedure

    1. Check that the structure of the navigation menu is semantically correct.
    2. Check that the structure of the navigation menu is repeated across all pages.
    3. Check that the click sizes / ratios of anchors in navigation menus are large enough on both mobile and desktop to be comfortable.
    4. Check that the URLs provided within links do not lead to errors.
    5. Check that any search functionality provides results (not errors) upon submission.

    Expected Results

    1. All checks above are true.
  12. UX08-2: Provide a Human-Readable Sitemap To Improve the Information Architecture

    Applicability

    This technique is Advisory to meet Success Criteria within 2.8 Ensure Navigation and Way-Finding Are Well-Structured.

    Description

    Increase the findability of the content within your website among search engines and social networks. By creating an XML sitemap in the base directory of your website, and potentially having a human-readable HTML sitemap to supplement it, you provide an index of all the publicly available pages. This acts as a potential signpost for individuals who may find themselves seeking information but not knowing exactly where it is stored.

    When creating an HTML sitemap it will be important to categorize pages into lists based on what section of the website they appear in (for human readability) rather than providing everything in a single long list. Structurally, you could use lists within lists to provide this distinction or use subheadings with individual lists. Both methods should be semantically correct and accessible to meet the human requirements of PPP targets.
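The nested-list structure described above can be generated mechanically. The sketch below renders a tree of { title, url, children } nodes as nested unordered lists; HTML escaping is omitted for brevity and must be added before production use:

```javascript
// Sketch: render a categorized HTML sitemap as nested lists from a simple
// page tree. Escaping is omitted for brevity; escape titles and URLs in
// any production version.
function renderSitemap(nodes) {
  if (!nodes.length) return "";
  const items = nodes
    .map(
      (n) =>
        `<li><a href="${n.url}">${n.title}</a>${renderSitemap(n.children || [])}</li>`
    )
    .join("");
  return `<ul>${items}</ul>`;
}
```

Because each category becomes its own nested list, the output keeps the section-by-section grouping that makes an HTML sitemap readable.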

    Examples

    1. The Sitemaps protocol showcases a basic XML sitemap format that can be used within the base directory of a website. It is supported by search engines like Google and can be submitted for indexing, which can increase findability and potentially help visitors find your product or service more quickly.
    2. An HTML sitemap is a more visually organized showcase of the pages publicly available within your product or service. The idea behind it is to be as comprehensive as possible but not to overwhelm the visitor. This might be the ideal place to use a system like progressive disclosure so that you can guide individuals through the materials you provide interactively.

    Tests

    Procedure

    1. Check for a sitemap.xml file in the base directory of a website.
    2. Check that the markup of the file is semantically correct.
    3. Check that all of the links held within don't produce any errors.
    4. Check for the existence of a publicly visible HTML sitemap.
    5. Check that the file exists alongside a sitemap.xml format.
    6. Check that the HTML sitemap is semantically correct.

    Expected Results

    1. All checks above are true.
  13. UX08-3: Provide a Method of Keeping Up-to-Date With Product or Service Updates

    Applicability

    This technique is Advisory to meet Success Criteria within 2.8 Ensure Navigation and Way-Finding Are Well-Structured.

    Description

    Ensure that visitors and users of an application or website can monitor changes and events that occur over time. The need to be able to track such events is critical as customers become reliant on products and services. This reliance on such features means that time will have been invested into learning how to set up and use the product, which if it fails could have a sustainability impact not just for the product or service owner, but for those reliant on its ability to function.

    Every product or service will have its own update schedule and there will be no hard and fast rule in terms of how often a website or application should be offering new releases. That being said, there should be a mechanism in place to describe news events both in a syndicated format and on the website, and a system status mechanism to identify any current issues with the product or service.

    Examples

    1. Apple has a newsroom that is very minimalistic yet contains all you would expect from the news section of a major website. It has grouped stories with useful headlines, dates, categories, and a picture. Having something similar for your website can help visitors find new material quickly and reduce the churn of digging through archives.
    2. Syndication feeds come in multiple formats, but the most common is RSS. They are compatible with readers on every platform and can be a useful way to get news to subscribers quickly. While readers make regular requests to websites, they request less data and result in visitors spending less time browsing overall, which is a net positive for emissions.

    Tests

    Procedure

    1. Check for a news page or category of a website.
    2. Check the news section has been updated within 12 months.
    3. Check for a syndication feed in RSS, Atom, or JSON.
    4. Check the syndication feed has been updated within 12 months.
    5. Check for other syndicated events (podcasts, OPML, etc).
    6. Check the syndication feed is semantically correct.

    Expected Results

    1. All checks above are true.
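    The feed-freshness checks (steps 3-4) can be sketched with the standard library alone. This example assumes an RSS 2.0 feed with RFC 822 pubDate values; the twelve-month window is the threshold named in the procedure, approximated here as 30-day months.

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime
from xml.etree import ElementTree

def latest_rss_entry(xml_text):
    """Return the most recent pubDate found in an RSS feed, or None."""
    root = ElementTree.fromstring(xml_text)
    dates = [parsedate_to_datetime(el.text) for el in root.iter("pubDate")]
    return max(dates, default=None)

def feed_is_current(xml_text, months=12):
    """Check the feed has been updated within the given number of months."""
    latest = latest_rss_entry(xml_text)
    if latest is None:
        return False
    return datetime.now(timezone.utc) - latest < timedelta(days=months * 30)
```

    Atom and JSON Feed formats would need their own date extraction, but the freshness comparison is identical.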
  14. UX09-1: Request Permission for Invasive Access To Interact With Content

    Applicability

    This technique is Advisory to meet Success Criteria within 2.9 Respect the Visitor's Attention.

    Description

    Use the least amount of permission requests required to achieve a given task. Allowing the visitor to control when they receive information is critically important but it's also important not to burden them with requests to access hardware or do things that might prove invasive (such as triggering notifications or a pop-up window).

    For mechanical testability, JavaScript APIs can be checked using methods such as requestPermission() to identify whether a website or application has been granted access to various hardware or potentially abusable features within the web browser. Unsubscribe and delete / freeze account links could also be identified (where internal access has been granted).

    Examples

    1. Cookie notification messages are required by law (the EU Cookie Directive) and could be identified by a script. These are heavily featured within websites and, assuming they are implemented correctly, allow visitors to prioritize privacy over the availability of functionality.
    2. Customizing or removing functionality within applications can help simplify the workflow of a visitor and increase the productivity within an interface (which will have sustainability benefits as it could reduce the weight from unnecessary features loading and visitors spending less time at the screen).

    Tests

    Procedure

    1. Check which permissions from the available APIs are being granted.
    2. Check if products or services have unsubscribe / account freezing and removal options.
    3. Check if cookies are used and if so, if notifications are triggered appropriately (with the option to disable all).
    4. Check if applications provide plug-in functionality and allow for customization.

    Expected Results

    1. All checks above are true.
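    On the client, the Permissions API (navigator.permissions.query) is the authoritative way to run check 1. As a server-side fallback, a tool could scan shipped script source for calls that gate invasive capabilities. The pattern list below is an illustrative assumption, not a complete inventory of abusable APIs.

```python
import re

# Browser calls that gate invasive capabilities; flagging their presence
# in shipped JavaScript is a first-pass heuristic, not proof of misuse.
INVASIVE_CALLS = [
    r"Notification\.requestPermission",
    r"getUserMedia",                       # camera / microphone
    r"geolocation\.getCurrentPosition",
    r"geolocation\.watchPosition",
    r"window\.open",                       # pop-up windows
]

def find_permission_requests(source):
    """Return the invasive browser calls referenced in a page's scripts."""
    return [p for p in INVASIVE_CALLS if re.search(p, source)]
```

    A match only shows the capability is requested somewhere; whether the request is justified still needs human review.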
  15. UX09-2: Encourage Features That Help Visitors Achieve Tasks

    Applicability

    This technique is Advisory to meet Success Criteria within 2.9 Respect the Visitor's Attention.

    Description

    Lower the barriers to entry with content. When dealing with complex, multi-page websites, it becomes increasingly important to have features in place that allow visitors to bypass repeated elements they have already filled in so they can achieve goals more quickly (this is a key variable for sites with large membership numbers).

    There are functional ways to increase the flow through a website and allow visitors to accomplish a task that can also be machine-testable. This technique is most useful when dealing with complex pieces of content, multi-step products, or services that have a lot of functionality that may require shortcuts to allow faster decision-making under certain conditions.

    Examples

    1. Read-It-Later software may be provided by third-parties but they allow for both offline access to content on a variety of platforms and customization of the materials visually based on the visitor's requirements.
    2. Quick purchase features (such as Amazon's famous "Buy Now" button) allow for the immediate purchase of a product, skipping past the shopping basket steps and using prior purchase knowledge to reduce friction.
    3. Shortcut links and patterns can be used to skip ahead or over information and / or pages when it is recognized that the information may not be relevant or required in a multi-step process (such as form filling).

    Tests

    Procedure

    1. Check for read-it-later compatibility within articles.
    2. Check for purchasable products and the option to quickly purchase (bonus points if no login is required).
    3. Check if the HTML head contains landmark links of relevance to be identified.
    4. Check for recognized landmark shortcuts (skip links, etc) and that they are functional.

    Expected Results

    1. All checks above are true.
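    Check 4 (skip links exist and are functional) can be sketched by pairing every in-page anchor with the ids defined on the page, using only html.parser. The class and function names are illustrative.

```python
from html.parser import HTMLParser

class SkipLinkFinder(HTMLParser):
    """Collect in-page anchor targets and element ids so skip links
    (e.g. "skip to content") can be verified as functional."""
    def __init__(self):
        super().__init__()
        self.targets = []   # fragment hrefs of in-page links
        self.ids = set()    # ids defined anywhere on the page
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get("id"):
            self.ids.add(attrs["id"])
        if tag == "a" and attrs.get("href", "").startswith("#"):
            self.targets.append(attrs["href"][1:])

def broken_skip_links(html_text):
    """Return fragment targets that point at no element on the page."""
    finder = SkipLinkFinder()
    finder.feed(html_text)
    return [t for t in finder.targets if t and t not in finder.ids]
```

    An empty return value means every landmark shortcut resolves; checks 1-3 (read-it-later, quick purchase, head links) need their own detectors.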
  16. UX09-3: Avoid Mechanisms That Promote Excessive Screen Time

    Applicability

    This technique is Advisory to meet Success Criteria within 2.9 Respect the Visitor's Attention.

    Description

    Allow the visitor to achieve the task they initially set out to do, and do not trigger unnecessary issues in the user experience that would undermine their ability to complete that task. Issues such as infinite scrolling can promote continued browsing (which can load excessive data and have a sustainability impact on rendering), and loading overlays that divert visitors from their path will have wider societal impacts on PPP targets which need to be considered.

    Within this technique, machine testability can be achieved by examining whether common pagination landmark links (previous, next, <numbers>, etc.) exist within category listings and whether overlays or attention-keeping features are detected via common patterns (such as those mentioned in the examples below). If screen addiction mechanisms are determined to be in use within a website, this should be considered a failing mark against the related success criteria.

    Examples

    1. Sale promotional overlays are a common pattern that appears upon loading a page and requests that the visitor take notice before dismissing it to view the content they want to examine.
    2. Information request overlays are another modal window that often appears during a browsing session, such as a request for your email address for a newsletter or for your details to encourage sign-up after a timed period.

    Tests

    Procedure

    1. Check if infinite scrolling is being used through pagination landmark links.
    2. Check for recognized attention-keeping patterns that obscure content.

    Expected Results

    1. All checks above are true.
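    Check 1 can be approximated by looking for rel="next" / rel="prev" landmarks on anchor and link elements: their presence on a category listing suggests pagination rather than infinite scrolling. This is a heuristic sketch; a listing can of course paginate without advertising rel values.

```python
from html.parser import HTMLParser

class PaginationCheck(HTMLParser):
    """Collect rel values from a and link elements; rel="next" /
    rel="prev" landmarks indicate paginated listings."""
    def __init__(self):
        super().__init__()
        self.rels = set()
    def handle_starttag(self, tag, attrs):
        if tag in ("a", "link"):
            rel = dict(attrs).get("rel") or ""
            self.rels.update(rel.lower().split())

def has_pagination_landmarks(html_text):
    checker = PaginationCheck()
    checker.feed(html_text)
    return bool({"next", "prev", "previous"} & checker.rels)
```

    Check 2 (attention-keeping overlays) is harder to automate and would typically pair a headless browser with a library of known overlay patterns.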
  17. UX10-1: Build Using Established Design Patterns That Use Known Conventions

    Applicability

    This technique is Advisory to meet Success Criteria within 2.10 Use Recognized Design Patterns.

    Description

    Use repeating patterns that visitors are likely to recognize from visits to other websites to build a sense of comfort and identification with your products and services. By using established design patterns that are repeatable, recognizable, and testable, you can reduce the amount of confusion likely to occur within a user interface and help visitors adapt more quickly (which has sustainability benefits).

    It should be noted that for machine testability, heuristic recognition (identification of patterns in code) will likely allow product creators to recognize the implementation of certain patterns, especially when libraries or design systems are utilized as the backbone of a product or service. Through this, recommendations can be made to improve layouts.

    Examples

    1. A website layout could follow a conventional design pattern aimed at a recognized reading flow such as the F or Z pattern. They are based on established scientific studies of how people first identify and then read content on a screen. The mechanics behind such layouts are designed using CSS and can be identified based on the grid or flexbox pattern used to reduce the time visitors spend scanning the screen.
    2. Individual components that aim to be repeatable conventions across products and services can take many forms (some are often provided as third-party services due to the build complexity involved) and should aim to be visually recognizable even if stylistic differences occur for branding which will help reduce wasted screen time.

    Tests

    Procedure

    1. Check for common in-page design patterns or libraries.
    2. Verify page components and overall design against established conventions.
    3. Check for and implement solutions for layout flow issues that could be optimized using an established pattern.

    Expected Results

    1. All checks above are true.
  18. UX11-1: Eliminate Deceptive Design Patterns or Manipulative Techniques

    Applicability

    This technique is Advisory to meet Success Criteria within 2.11 Avoid Manipulative Patterns.

    Description

    Prevent the visitor from being manipulated by a recognized dark pattern that falls under the umbrella of deceptive design. While there are many ways to code ethically, using any one of these named practices would constitute a failure against the WSG guidelines; as such, ensuring that these practices are not used is worthy of inclusion.

    Machine testability for deceptive patterns will vary based on the type in use; however, artificial intelligence could be integrated into tooling to assist with detecting more subtle uses of manipulation. More obvious issues derived from code injection can be flagged and reported as problematic.

    Examples

    1. Disabling interface functionality such as the ability to highlight / select the text, right-click on content, copy the content to the clipboard, print, or paste passwords from management software can cause significant problematic friction.

    Tests

    Procedure

    1. Check for bad practices such as those mentioned in the examples.
    2. Check against a library of deceptive design patterns.

    Expected Results

    1. All checks above are true.
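    The interface restrictions named in the example have recognizable source-level signatures. The sketch below flags a few of them; the pattern list is illustrative and deliberately small, not the "library of deceptive design patterns" the procedure calls for.

```python
import re

# Source-level signatures of the interface restrictions named above;
# an illustrative sample, not a complete deceptive-pattern library.
RESTRICTION_PATTERNS = {
    "context menu disabled": r"oncontextmenu\s*=\s*[\"']?\s*return false",
    "text selection disabled": r"user-select\s*:\s*none",
    "copy blocked": r"oncopy\s*=\s*[\"']?\s*return false",
    "paste blocked": r"onpaste\s*=\s*[\"']?\s*return false",
}

def find_interface_restrictions(source):
    """Return the names of restriction patterns present in page source."""
    return [name for name, pat in RESTRICTION_PATTERNS.items()
            if re.search(pat, source, re.IGNORECASE)]
```

    Note that user-select: none has legitimate uses on purely decorative elements, so matches should be reviewed in context rather than failed automatically.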
  19. UX11-3: Determine if Analytics Can Be Sustainably Improved or Removed

    Applicability

    This technique is Advisory to meet Success Criteria within 2.11 Avoid Manipulative Patterns.

    Description

    Eliminate the human-facing PPP issues (third-party emissions, privacy, and performance) that relate to the use of analytics software. While this technique doesn't advocate removing such software in all instances, poor-performing options should be flagged and more optimized solutions recommended (or removal requested).

    This should take into consideration the need for analytics in other success criteria (for research metrics) before recommending removal and identifying low-carbon options for third-party solutions (where data exists). Additional criteria based on PPP values (such as privacy and security) should also be considered.

    Examples

    1. Some analytics providers have made steps to become more sustainable which would potentially make them a better replacement for bulkier, more intrusive packages.

    Tests

    Procedure

    1. Check to determine if an analytics script is being used within the page.
    2. Check against a list of suppliers of software to determine viable alternatives.
    3. Check to determine if data being gathered complies with GDPR and other privacy legislation.
    4. If no viable alternative exists and it still performs poorly, recommend removal.

    Expected Results

    1. All checks above are true.
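    Check 1 can be sketched by matching script sources against known analytics hostnames. The host list below is a small illustrative sample; a real tool would maintain a fuller supplier list (for check 2) with sustainability and privacy ratings per vendor.

```python
import re

# Hostnames of widely deployed analytics scripts; extend as needed.
ANALYTICS_HOSTS = [
    "google-analytics.com",
    "googletagmanager.com",
    "hotjar.com",
    "segment.com",
]

def detect_analytics(html_text):
    """Return the known analytics hosts referenced by script tags."""
    srcs = re.findall(r'<script[^>]+src=["\']([^"\']+)', html_text, re.I)
    return sorted({host for host in ANALYTICS_HOSTS
                   for src in srcs if host in src})
```

    First-party or inlined analytics snippets would evade this check, which is why the procedure also includes a privacy-compliance review of the data actually gathered.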
  20. UX11-4: Eliminate Any Manipulative SEO Techniques From Your Source Code

    Applicability

    This technique is Advisory to meet Success Criteria within 2.11 Avoid Manipulative Patterns.

    Description

    Proofread your code and content for practices that detract from the user experience but are done purely in an attempt to gain higher rankings within search engines. While such a position would be beneficial, using these techniques will often tarnish your site's reputation on those services, and your rank will suffer significantly.

    Your visibility in search engines and social media matters. Visibility is how people find you on the web. If people cannot find you or sections of your website or application, they will consume resources in that effort (or in trying to find a replacement). Additionally, bad SEO practices targeting only machines consume visitor resources for rendering and produce excess emissions.

    Examples

    1. Keyword stuffing is a practice in which the terms you want to be recognized for get repeated throughout your documents. While mentioning it occasionally is good, going over the top can hurt readability and bloat pages further, which is why it isn't considered sustainable.
    2. Hiding content, for example, using small font sizes, making the text the same color as the background, hiding text behind images, or generating CSS or JavaScript content purely for search engines is considered bad practice.

    Tests

    Procedure

    1. Check for the keywords META tag and recommend its removal.
    2. Check for the refresh META tag and recommend replacement with a page redirect header.
    3. Check for hidden content as identified in the above example and promote visibility or removal.
    4. Check for duplicate content and flag it for removal.

    Expected Results

    1. All checks above are true.
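    Steps 1 and 2 of the procedure are directly machine-testable. A minimal sketch with html.parser, under the assumption that a flag message per finding is the desired output:

```python
from html.parser import HTMLParser

class MetaAudit(HTMLParser):
    """Flag the keywords META tag (recommend removal) and the refresh
    META tag (recommend an HTTP redirect instead)."""
    def __init__(self):
        super().__init__()
        self.flags = []
    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "keywords":
            self.flags.append("remove keywords META tag")
        if attrs.get("http-equiv", "").lower() == "refresh":
            self.flags.append("replace refresh META tag with a redirect header")

def audit_meta(html_text):
    audit = MetaAudit()
    audit.feed(html_text)
    return audit.flags
```

    Hidden-content and duplicate-content detection (steps 3-4) need rendered-style analysis and cross-page comparison respectively, so they are left out of this sketch.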
  21. UX12-1: Provide Deliverables in Reusable, Self-Isolated Components

    Applicability

    This technique is Advisory to meet Success Criteria within 2.12 Document and Share Project Outputs.

    Description

    Allow for code that has been produced for an individual project to be reused and reapplied across future work to reduce redundancy. This is most commonly seen in frameworks and libraries but isn't limited to them; it can also apply to documentation and snippets.

    This technique is most useful when it is housed within an open source location to foster a culture of contribution, remixing, and improving of the work. While machine testing of code to determine functionality based on its apparent need would prove problematic, the ability to use importable code can be verified.

    Examples

    1. A library like Bootstrap can be imported as a whole into a project, but it can also be stripped back, eliminating all but the components that are in use by the website or application powering its functionality.

    Tests

    Procedure

    1. Check to see if deliverables require internal access (if so, permission is given).
    2. Check to determine if these deliverables are housed in an open source environment.
    3. Check projects for multiple CSS files (or use of @imports).
    4. Check JavaScript for the use of imports or self-isolating functions.
    5. Check documentation for section separation as markdown files.

    Expected Results

    1. All checks above are true.
  22. UX12-2: Provide Documentation To Guide the Deliverables

    Applicability

    This technique is Advisory to meet Success Criteria within 2.12 Document and Share Project Outputs.

    Description

    Guarantee that those without a working knowledge of an aspect of design or development will be provided with the information they require so that they can safely work with the files being presented to them during the creation and maintenance process.

    Because emissions do not start or stop at website usage, and emissions are created during the ideation and creation process, it is crucial that all involved with the project, even clients who may not be used to the types of technology to which this specification refers, can optimize their ability to produce high-quality output, as this will be reflected in sustainable websites and applications.

    Examples

    1. Many large projects have manuals or documentation that describe in detail how to work with the creations of a group of people either during the handoff (from one department to another) stage or during the completion (from the creators to the clients) stage.

    Tests

    Procedure

    1. Check to see if deliverables require internal access (if so, permission is given).
    2. Check that the instructions, manual, or documentation exist.
    3. If the documentation does not exist, flag a request to produce materials.
    4. Check for multiple format types such as HTML, PDF, checklists, slideshow, and video.

    Expected Results

    1. All checks above are true.
  23. UX12-3: Create Code Comments To Describe the Functionality of Features

    Applicability

    This technique is Advisory to meet Success Criteria within 2.12 Document and Share Project Outputs.

    Description

    Provide internal or external developers with the information they require to understand the justification behind certain coding choices, and what individual pieces of code exist for. While such features can be itemized within the documentation, code comments are useful for providing a portable library of notes with the source code itself.

    Every language has its own descriptive mechanism for producing code comments, and developers should follow such best practices. Conventions could be utilized within comments, such as including links to provide added context. Comments can be machine-detected by their opening and closing statements and paired with the code that follows them.

    Examples

    1. In the source code for Semantic Web Hook, code comments are available for the un-minimized version to help developers identify the purpose behind each function (so they can decide if they require it).

    Tests

    Procedure

    1. Check for code comments within the source code of pages.
    2. Check the formatting of comments for conventions such as links, images, and bullets.
    3. If comments are publicly facing, flag them for removal; if internal access is required or it is a developer release, keep them in place.

    Expected Results

    1. All checks above are true.
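    Detecting comments by their opening and closing statements, as the description suggests, is straightforward for HTML. A sketch of checks 1 and 3, with the internal_release flag standing in for the "developer release" distinction:

```python
import re

def html_comments(source):
    """Return the HTML comments present in a page's markup."""
    return re.findall(r"<!--(.*?)-->", source, re.DOTALL)

def flag_public_comments(source, internal_release=False):
    """Per the procedure above: keep comments in internal or developer
    builds, flag them for removal from publicly facing pages."""
    if internal_release:
        return []
    return [c.strip() for c in html_comments(source)]
```

    JavaScript and CSS comments would need their own delimiters (// and /* ... */), and minified production builds should normally strip them anyway.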
  24. UX13-1: Implement an Up-to-Date Design System

    Applicability

    This technique is Advisory to meet Success Criteria within 2.13 Use a Design System To Prioritize Interface Consistency.

    Description

    Ensure that stylistic choices made by the design and development team are based upon prescribed defaults. This will encourage consistency not just within the layout but also in terms of writing style, accessibility, sustainability, and other variables being monitored.

    There are already several established design systems, which provide a good baseline for what should be included. Either deploy an existing open source solution or craft a custom model that meets the needs of your product or service. Machine testability will rely on identifying the design system and then verifying the materials are being used in the wild.

    Examples

    1. Atlassian Design System breaks its library down visually on the main page to give new readers a great roadmap for where reusable components can be located and how everything should be represented.
    2. Carbon Design System provides dropdown menus of its entire library of components to allow visitors to quickly and easily access the elements they require, with a firm focus on being an open source project.

    Tests

    Procedure

    1. Check for the implementation of a design system.
    2. Check the number of categories being monitored.
    3. Check if the patterns and tokens are being employed.
    4. Check when the design system was last updated.

    Expected Results

    1. All checks above are true.
  25. UX14-1: Increase the Readability of Content Using Common Metrics

    Applicability

    This technique is Advisory to meet Success Criteria within 2.14 Write With Purpose, in an Accessible, Easy To Understand Format.

    Description

    Make the content within your website or application more legible to individuals who may struggle with more technical language. Techniques such as lowering the reading age, removing industry terms (or clearly defining them), and having translations for international audiences can help get your message across.

    This technique is most useful when dealing with a large body of content that the visitor has to wade through to complete a task. Being able to reduce screen time by improving the ease at which visitors can comprehend a topic will ultimately allow for an experience that feels faster and discriminates less against individuals who may struggle with highly technical content.

    Examples

    1. Transport for London has an editorial style guide that sets out to use Plain English throughout their website. The guide itself serves as a useful tool that others could adapt (including the terms in machine testability).
    2. There are lots of technical terms used in (for example) the Web industry; Jens Oliver Meiert has created the Web Glossary as a dictionary of terminology which can help clarify confusing terms.
    3. Readability Formulas have an assessment tool that can use one of several different models to identify issues with content. As there is no agreed-upon assessment model, this can be particularly helpful.

    Tests

    Procedure

    1. Check against a list of terms for the use of Plain English.
    2. Check that terminology is inclusive and non-discriminatory.
    3. Check that technical terms are identified and well-defined.
    4. Check that the content has an appropriate reading age.
    5. Check for internationalization within content, HTML, and CSS.

    Expected Results

    1. All checks above are true.
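    As the Readability Formulas example notes, there is no single agreed-upon assessment model. The sketch below implements one common choice, the Flesch Reading Ease score, with a deliberately rough syllable counter; treat the numbers as trends rather than exact values.

```python
import re

def count_syllables(word):
    """Rough vowel-group syllable count; adequate for scoring trends."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores read more easily; roughly 60-70 corresponds to
    plain English for a general audience."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```

    Checks 1-3 and 5 (term lists, inclusive language, internationalization) are lookup-driven rather than formula-driven and would sit alongside this score in a fuller tool.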
  26. UX14-2: Break Long-Form Content Down Into a Simpler Structure

    Applicability

    This technique is Advisory to meet Success Criteria within 2.14 Write With Purpose, in an Accessible, Easy To Understand Format.

    Description

    Avoid visitors being hit with a wall of text, which risks abandonment of the page (and a wasted session with emissions attached). By taking large pieces of information and structuring them into cleaner, smaller chunks, and utilizing friendlier features such as lists and tables when appropriate, the information will look less intimidating.

    For machine testability, identifying clear headings, visual hierarchy, spacing, line breaks, lists, use of images, and other features of HTML can help calculate the relative density of the content and whether it could be better presented not just for readers but to (if used with progressive disclosure) reduce the initial impact on rendering engines which can have a sustainable impact on hardware.

    Examples

    1. This article from Nielsen Norman Group provides some great examples and ideas around the subject of dealing with long-form content on the Web, each of which can be both implemented and tested against.

    Tests

    Procedure

    1. Check that the heading hierarchy structure is correct (H1 through H6, with no skipped levels).
    2. Check that paragraphs don't exceed a set word length.
    3. Check the spacing between elements isn't too crowded.

    Expected Results

    1. All checks above are true.
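    The procedure above maps neatly onto a single parser pass. This sketch flags skipped heading levels and over-long paragraphs; the 120-word budget is an illustrative assumption, not a value the guidelines prescribe.

```python
import re
from html.parser import HTMLParser

class StructureAudit(HTMLParser):
    """Walk headings and paragraphs to spot skipped heading levels
    and paragraphs over a configurable word budget."""
    def __init__(self, max_words=120):
        super().__init__()
        self.max_words = max_words
        self.issues = []
        self.last_level = 0
        self.in_paragraph = False
        self.words = 0
    def handle_starttag(self, tag, attrs):
        if re.fullmatch(r"h[1-6]", tag):
            level = int(tag[1])
            if self.last_level and level > self.last_level + 1:
                self.issues.append(f"heading jumps h{self.last_level} to h{level}")
            self.last_level = level
        elif tag == "p":
            self.in_paragraph, self.words = True, 0
    def handle_data(self, data):
        if self.in_paragraph:
            self.words += len(data.split())
    def handle_endtag(self, tag):
        if tag == "p":
            if self.words > self.max_words:
                self.issues.append(f"paragraph of {self.words} words")
            self.in_paragraph = False

def audit_structure(html_text, max_words=120):
    audit = StructureAudit(max_words)
    audit.feed(html_text)
    return audit.issues
```

    Spacing (check 3) is a rendered-style property and would need computed CSS rather than markup analysis.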
  27. UX14-3: Produce a List of SEO Considerations To Improve Your Findability

    Applicability

    This technique is Advisory to meet Success Criteria within 2.14 Write With Purpose, in an Accessible, Easy To Understand Format.

    Description

    Reduce the amount of time visitors and potential visitors spend churning through data attempting to locate you (or pages relating to you) through search engines or social media. These types of requests have an emissions impact from the hardware used to deal with the requests to the rendering of the results, therefore getting the right information to visitors as quickly as possible is critical to having a sustainable product.

    This list should be created as early as possible but could be machine-generated from a pre-determined list of SEO variables. The content of this material could be machine-tested against using automation if the variables can be aligned with what has been implemented. Additionally, publicly-facing social media handles that are identified within a product or service website can be detected.

    Examples

    1. SEMRush has produced a checklist of SEO considerations based on their years of industry experience. While not everything can be tested against, it's worth considering to improve overall visibility.
    2. AHREFs have created a checklist that contains a lot of advice and tips on SEO. This list also has some components which aren't testable but the advice could also lead to sustainability benefits as a side-effect.

    Tests

    Procedure

    1. Check against an SEO checklist for the state of a product or service's visibility (internal access may be required).
    2. Check for the website or application's position in major search engines.
    3. Check for social media handles and when they were last used.

    Expected Results

    1. All checks above are true.
  28. UX15-1: Implement a Range of Images of the Appropriate Format and Size

    Applicability

    This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.

    Description

    Identify whether the images you are using are firstly necessary (which may be difficult to machine test), and secondly in a format that is appropriate for the web. While these considerations may seem small, they can lead to dramatic performance benefits that reduce loading times and page bloat (an increasing issue on today's Internet).

    Implementations should also consider the necessity of the total image count (too many over a certain ratio could be problematic for those with bandwidth limits). Machine testability for web graphics can easily detect older formats that should be replaced immediately, formats that could be changed for optimization purposes, and formats that could become vector graphics.

    Examples

    1. Certain image formats perform better than others, AVIF and WebP will (in most cases) have better optimization rates than the original three classics for the web (PNG, JPEG, and GIF) and should be considered by default.
    2. Different devices have different breakpoints, but these shouldn't be used as hard and fast rules for every situation as there are far too many devices out there to be compatible with. It makes better sense (if serving a single image) to default to the mid-range of the devices your visitors typically use.

    Tests

    Procedure

    1. Check the image density of the page and flag pages over a certain image count.
    2. Check for older image format usage (BMP / TIFF) and require replacement.
    3. Check for PNG and JPEG images and flag potential upgrades to AVIF or WebP.
    4. Provide fallbacks using the picture element on newer image formats to ensure compatibility.
    5. Check for GIF and flag static image upgrade to AVIF or WebP, or animated images to mp4.

    Expected Results

    1. All checks above are true.
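    Checks 1-3 and 5 reduce to classifying image sources by extension and counting them. In this sketch, the 30-image budget is an illustrative threshold, not a standardized value, and extension-based detection will miss images served without file extensions.

```python
import re

LEGACY = {"bmp", "tiff", "tif"}              # require replacement
UPGRADEABLE = {"png", "jpg", "jpeg", "gif"}  # suggest AVIF / WebP

def audit_image_formats(html_text, max_images=30):
    """Audit img sources against the format checks above."""
    srcs = re.findall(r'<img[^>]+src=["\']([^"\']+)', html_text, re.I)
    report = []
    if len(srcs) > max_images:
        report.append(f"{len(srcs)} images exceeds budget of {max_images}")
    for src in srcs:
        ext = src.rsplit(".", 1)[-1].lower()
        if ext in LEGACY:
            report.append(f"replace legacy format: {src}")
        elif ext in UPGRADEABLE:
            report.append(f"consider AVIF/WebP for: {src}")
    return report
```

    Check 4 (picture-element fallbacks for newer formats) is a remediation step rather than a detection step, so it sits outside this audit.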
  29. UX15-2: Optimize All Image Assets for a Variety of Different Resolutions

    Applicability

    This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.

    Description

    Ensure that images are not only served at the correct resolution and size for the device requesting them but are also compressed using a suitable algorithm, so the asset is delivered at the lowest bandwidth requirement possible, thereby improving multiple PPP variables and increasing performance.

    Compression tests can be run against every image to see if improvements can be made (above and beyond changing formats) without losing so much image quality that visibility becomes degraded and recognition suffers. The use of the sizes attribute can also help provide alternative images for different resolutions as required by device requirements (though the default image should be set to a median value to balance size and weight).

    Examples

    1. DebugBear showcases how responsive images can be used to provide not just different optimized formats (for better compression) but different sizes to reduce the performance load on visitor's machines.
    2. MDN has a section on responsive images that goes into detail (with examples) showcasing how they can be created to work well across a variety of different platforms using the HTML IMG element.

    Tests

    Procedure

    1. Check for compression potential against every available image asset.
    2. Check CSS media or container queries to recreate different breakpoints in the design to identify and create the sizes of images required.
    3. Check for the use of the sizes attribute to provide responsive images.
    4. Provide optimized solutions if image problems have been detected.
    5. Check that the default image value is set to a median value.

    Expected Results

    1. All checks above are true.
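    Check 3 (use of the sizes attribute) can be sketched by flagging img elements that ship without srcset / sizes, which suggests a single fixed asset is served to every resolution. The helper names are illustrative assumptions.

```python
from html.parser import HTMLParser

class ResponsiveImageCheck(HTMLParser):
    """Flag img elements lacking srcset or sizes attributes."""
    def __init__(self):
        super().__init__()
        self.missing = []
    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        if "srcset" not in attrs or "sizes" not in attrs:
            self.missing.append(attrs.get("src", "(no src)"))

def images_without_responsive_attrs(html_text):
    check = ResponsiveImageCheck()
    check.feed(html_text)
    return check.missing
```

    Compression potential (check 1) needs the image bytes themselves, so it would be run as a separate pass over each downloaded asset.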
  30. UX15-3: Include Lazy-Loading With Images That Load Below-the-Fold

    Applicability

    This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.

    Description

    Prevent image assets included within a website or application that are either visually below the fold or hidden due to progressive disclosure from being loaded until they appear in view of the visitor's screen. This can save bandwidth and the resulting processing of the image at the client-side which otherwise would occur and may not even be used if the visitor chooses to click elsewhere or close before reaching it.

    While it is preferable that only images below the fold have the HTML lazy-loading attribute attached, there does not appear to be a penalty for including it on all images, so if machine testing cannot differentiate, this shouldn't be counted as a failing point. Additionally, the position of the fold can change based on the type of device being used, so it's worth factoring this into processes.

    Examples

    1. Mathias Bynens created a lazy-loading demonstration that showcases the technique being used in the wild. As you scroll down the page of the website, more of the images will be loaded on demand.

    Tests

    Procedure

    1. Check for the position of the fold on the smallest screen size visitors use.
    2. Check for HTML lazy loading attributes against all images falling outside these dimensions.
    3. Check for HTML lazy loading attributes against all images in progressive disclosure boxes.

    Expected Results

    1. All checks above are true.
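    Since the description above allows expecting loading="lazy" on every image when the fold cannot be located, a minimal static check can simply list eagerly loaded images. Fold detection (check 1) would need a headless browser with per-device viewports.

```python
from html.parser import HTMLParser

class LazyLoadCheck(HTMLParser):
    """List img elements missing loading="lazy"."""
    def __init__(self):
        super().__init__()
        self.eager = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if attrs.get("loading", "").lower() != "lazy":
                self.eager.append(attrs.get("src", "(no src)"))

def images_not_lazy(html_text):
    check = LazyLoadCheck()
    check.feed(html_text)
    return check.eager
```

    Results should be interpreted per the fold: an eager hero image above the fold is correct, so matches above the fold are not failures.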
  31. UX15-4: Provide the Option for Images To Be Disabled or a Low Fidelity Alternative

    Applicability

    This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.

    Description

    Ensure that visitors with bandwidth restrictions or slow connections are given a mechanism to access your content when their ability to access critical information may be stressed. Providing such options may include an in-page solution (either by default or by choice), or through the offering of a highly optimized alternative for such requests.

    When serving such content through a secondary stream (such as a low-fidelity option), this channel mustn't become as bloated as the primary channel. As such, guidelines should be drawn up regarding what can and cannot be included, ensuring limitations are in place to maintain a basic level of service without offering the full capability of the primary product.

    Examples

    1. NPR provides a text-only version of their main website to offer a basic level of service for those who may wish to access their services but have limitations on what they can access. As such, everything has been stripped back providing no interactivity, features, or other distractions.
    2. CNN provides a lightweight version of their website which does request cookie permissions (likely for analytics), but the content is free of images, videos, and other components that can be render-blocking.

    Tests

    Procedure

    1. Check if an in-page "Lo-Fi" or "Eco mode" option exists; if it is opt-in, recommend enabling it by default.
    2. Check for a low-fidelity / text-only / lightweight subdomain or path.
    3. Check that the low-fidelity option is an agreed upon percentage lighter than the main website.

    Expected Results

    1. All checks above are true.
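    One way to surface a low-fidelity option automatically is to honor the Save-Data client hint alongside any explicit visitor choice. This is a sketch, not a required implementation: the "lite-mode" class name is an assumption, and navigator.connection is not available in every browser, hence the guard.

```javascript
// Decide whether to serve the low-fidelity experience.
// An explicit visitor preference always wins; otherwise honor Save-Data.
function shouldServeLite(saveData, userPreference) {
  if (userPreference !== undefined) return Boolean(userPreference);
  return Boolean(saveData);
}

// Browser wiring (guarded so the helper stays testable elsewhere;
// navigator.connection is not universally supported).
if (typeof navigator !== 'undefined' && navigator.connection) {
  const lite = shouldServeLite(navigator.connection.saveData);
  document.documentElement.classList.toggle('lite-mode', lite);
}
```

    Keeping the decision in a pure helper makes the policy itself machine-testable, independent of browser support.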
  32. UX15-5: Add Details About Image Impacts to the Media Management and Use Policy

    Applicability

    This technique is Advisory to meet Success Criteria within 2.15 Take a More Sustainable Approach to Image Assets.

    Description

    Ensure that you provide details about any image reduction targets and techniques within a publicly available media management and use policy. This will help you to establish the criteria you will attempt to meet across your product or service and allow visitors to identify any failings if they spot them (which can be fixed during the product lifecycle).

    Machine testability for such processes can involve first ensuring that the media management and use policy exists, and then checking that a section on images exists, which provides a definable way of verifying that the subject is covered. Content can then be checked for accuracy through auditing processes, with failings flagged as issues requiring resolution to maintain compliance.

    Examples

    1. Sprout Social has a toolkit that guides you through the creation of a social media policy, which could easily be adapted to include ethics and PPP factors like sustainability within its criteria (under legal guidelines).

    Tests

    Procedure

    1. Check for media management and use policy existence.
    2. Check for a section on images to verify impact coverage.

    Expected Results

    1. All checks above are true.
  33. UX16-1: Remove Any Functionality That May Trigger Auto-Playing Animation

    Applicability

    This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.

    Description

    Ensure that media will only run within a website or application on the command of the visitor, rather than consuming resources and taxing hardware from the moment the page loads. Because this involves HTML (attributes and background media), JavaScript, and images (animated GIFs), a multi-functional approach is required.

    Because media can involve audio and video, implementors will need to ensure that both are treated with equal respect as they both consume resources and contribute to emissions (to varying degrees). Additional care will also need to be taken to identify background media and prevent its sudden onset unless the function of that page or application is to show a media file (with nothing else).

    Examples

    1. Advertisements are one of the most common forms of autoplaying media; they often appear as small boxes (Wired is an example of this in use). While muting the media may reduce noise pollution, it does not reduce the carbon impact of loading, rendering, and playing the media itself.

    Tests

    Procedure

    1. Check for background media and prevent auto-playing upon page load.
    2. Check that media that is required to play as a function of the product is muted by default.
    3. Check for audio and video HTML elements and remove any autoplay attributes.
    4. Check for custom audio or video players and flag auto-playing media.

    Expected Results

    1. All checks above are true.
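    Steps 1 and 3 of the procedure can be approximated with a markup scan. A minimal sketch, assuming the page HTML is available as a string; detecting custom players (step 4) would require heuristics beyond this, and GIF detection here only flags candidates, since not every GIF is animated.

```javascript
// Flag <video> and <audio> tags carrying the autoplay attribute,
// plus <img> tags referencing GIF files (potentially animated).
function findAutoplayCandidates(html) {
  const media = html.match(/<(?:video|audio)\b[^>]*>/gi) || [];
  const autoplaying = media.filter((tag) => /\bautoplay\b/i.test(tag));
  const imgs = html.match(/<img\b[^>]*>/gi) || [];
  const gifs = imgs.filter((tag) => /\.gif\b/i.test(tag));
  return { autoplaying, gifs };
}
```
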
  34. UX16-2: Optimizing All Audio and Video Assets for Speed and Compatibility

    Applicability

    This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.

    Description

    Ensure both that the formats being used for media have widespread browser compatibility and that the audio and video being served have been compressed with a suitable algorithm, providing the asset to the visitor at the lowest bandwidth cost possible, thereby improving multiple PPP variables and increasing performance.

    Compression tests can be run against every video and audio file to see if improvements can be made without losing so much quality that the asset becomes degraded beyond usefulness. Embedded audio and video formats with browser compatibility issues can also be machine-detected, and recommendations for alternatives can be offered (and implemented).

    Examples

    1. BBC iPlayer is an on-demand platform that can change the quality setting of the video being streamed. The quality being applied also changes automatically based on variables in the visitor's environment (such as available bandwidth).
    2. YouTube has a setting to alter the streaming quality of videos (where the quality can be lowered). Again this platform uses browser metrics to automatically adjust the stream quality based on the visitor's environment.

    Tests

    Procedure

    1. Check for compression potential against every available audio and video asset.
    2. Provide optimized solutions if audio or video problems have been detected.
    3. Check for third-party non-native media player usage and recommend native options (if possible).

    Expected Results

    1. All checks above are true.
  35. UX16-3: Provide Facades for Media, Animated GIFs, and Third-Party Content

    Applicability

    This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.

    Description

    Provide a mechanism that ensures visitors remain in control of when highly impactful (in terms of emissions) media or third-party materials begin to transmit to their devices. By using a static facade that begins the render only upon clicking (rather than simply hiding the video, audio, or interactive element behind the static image), you ensure the content has a lazy-load effect even where autoplay or the interaction is not active (as pre-buffering may otherwise occur during the render process).

    Because there is no browser default method for facades (at the time of publication), identification of such events for machine testability will come down to them being clearly labeled using ID or class attributes (or through heuristic testing for an image that is either anchor linked or has a button attached with a JavaScript event handler to process the content switch).

    Examples

    1. Jason Knight has provided some simple but useful examples of his own implementation of accessible HTML video facades which he developed as a solution to the performance (and sustainability) issues media can cause.

    Tests

    Procedure

    1. Check that no video or audio elements trigger content upon the page load.
    2. Check that any uses of the GIF extension are not animated by default.
    3. Check that no impactful third-party content activates upon the page load.

    Expected Results

    1. All checks above are true.
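    A facade can be as simple as a static preview image wrapped in a button, with the real embed created only on click. The sketch below assumes a YouTube-style embed and a hypothetical video-facade class; adapt the markup and thumbnail source to the provider in use.

```javascript
// Build the lightweight facade markup for a given video ID (assumed YouTube-style).
function buildFacade(videoId, title) {
  return `<button class="video-facade" data-video-id="${videoId}">` +
    `<img loading="lazy" src="https://i.ytimg.com/vi/${videoId}/hqdefault.jpg" alt="Play: ${title}">` +
    `</button>`;
}

// Build the real embed, only created once the visitor opts in.
function buildEmbed(videoId) {
  return `<iframe src="https://www.youtube-nocookie.com/embed/${videoId}?autoplay=1" ` +
    `allow="autoplay" allowfullscreen></iframe>`;
}

// In the browser, swap facade for embed on click (guarded for non-DOM environments).
if (typeof document !== 'undefined') {
  document.addEventListener('click', (event) => {
    const facade = event.target.closest('.video-facade');
    if (facade) facade.outerHTML = buildEmbed(facade.dataset.videoId);
  });
}
```

    Because the iframe does not exist until the click, no third-party requests are made on page load, which is also what the machine test above looks for.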
  36. UX16-4: Optimizing All Media Assets for a Variety of Different Resolutions

    Applicability

    This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.

    Description

    Ensure that videos are not only served at the correct resolution but also that supporting assets are provided to help visitors decide, before loading the main video, whether the media is right for them. The location of these links should be connected to the embedded media, and they should either replace the original video upon click or be downloadable by the visitor (they should not increase the number of embeds).

    JavaScript can be used to detect the resolution of the device accessing the media and serve the correct media size for the visitor requesting it (ensuring smaller devices don't get larger media files). Regarding the types of links to be offered, they should provide the visitor with more choice in terms of the quality of the video being consumed or provide added context to its purpose.

    Examples

    1. The Internet Archive provides downloadable media files in a variety of formats in addition to being able to stream. This helps reduce the repeat burden on servers and avoids page-reload rendering emissions.

    Tests

    Procedure

    1. Check that the video assets are provided in a variety of common resolutions (which can be user-chosen).
    2. Check for automatic detection and selection of media resolutions.
    3. Check if clips or trailers are provided to give a shorter overview.
    4. Check if downloads are provided for repeat viewing.

    Expected Results

    1. All checks above are true.
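    The description's suggestion of resolution-aware serving can be sketched as a pure selection function. The rendition list shape here is an assumption for illustration; in the browser, the viewport width would come from window.innerWidth or a media query.

```javascript
// Pick the smallest rendition that still covers the viewport width,
// falling back to the largest available if none is wide enough.
function pickRendition(renditions, viewportWidth) {
  const sorted = [...renditions].sort((a, b) => a.width - b.width);
  return sorted.find((r) => r.width >= viewportWidth) || sorted[sorted.length - 1];
}
```

    For example, given 640px and 1280px renditions and an 800px viewport, the 1280px file is chosen, while a 320px viewport receives the 640px file.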
  37. UX16-5: Add Details About Media Impacts to the Media Management and Use Policy

    Applicability

    This technique is Advisory to meet Success Criteria within 2.16 Take a More Sustainable Approach to Media Assets.

    Description

    Ensure that you provide details about any audio or video reduction targets and techniques within a publicly available media management and use policy. This will help you to establish the criteria you will attempt to meet across your product or service and allow visitors to identify any failings if they spot them (which can be fixed during the product lifecycle).

    Machine testability for such processes can involve first ensuring that the media management and use policy exists, and then checking that a section on audio, video, or media exists, which provides a definable way of verifying that the subject is covered. Content can then be checked for accuracy through auditing processes, with failings flagged as issues requiring resolution to maintain compliance.

    Examples

    1. HootSuite has a template that guides you through the creation of a social media policy, which could easily be adapted to include ethics and PPP factors like sustainability within its criteria (under a new section).

    Tests

    Procedure

    1. Check for media management and use policy existence.
    2. Check for a section on audio to verify impact coverage.
    3. Check for a section on video to verify impact coverage.
    4. Check for a section on media if the above doesn't exist to verify impact coverage.

    Expected Results

    1. All checks above are true.
  38. UX17-2: Measure and Respond to High-Intensity Animation Usage

    Applicability

    This technique is Advisory to meet Success Criteria within 2.17 Take a More Sustainable Approach to Animation.

    Description

    Determine whether animation usage within an application or website will diminish the user experience. Calculate the number of animations that occur, identify whether this number can first be reduced, and then run each remaining animation individually to identify whether it causes lag on lower-specification devices; offering a resolution will help reduce the CPU and GPU burden, which can consume vast visitor resources.

    It should be noted that the types of animations used can dramatically affect the rendering process. Certain CSS transitions and animations are more process-efficient than others, and JavaScript animation can also have a differing impact from CSS, both in the choice of animation and in how it has been put together. This should be considered when testing in order to offer low-impact animations and effects.

    Examples

    1. Personal portfolios are an ideal place to showcase expertise, and you have more liberty to experiment with technologies including animation, which is precisely what Kent C. Dodds, Sarah Drasner, and Una Kravets have done.

    Tests

    Procedure

    1. Check both CSS and JavaScript for animations and calculate the number that exists.
    2. If too many (based on a justified value) exist, request the removal of the most impactful.
    3. Check if animation techniques can be replaced by more efficient methods.
    4. Check individual animations to determine if the CPU or GPU burden on devices is within safe limits.

    Expected Results

    1. All checks above are true.
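    Step 1 of the procedure can be approximated by counting animation and transition declarations in the project's stylesheets. A rough sketch over raw CSS text; the "too many" threshold is deliberately left to the auditor, as the procedure notes it must be a justified value.

```javascript
// Count animation/animation-name and transition/transition-property
// declarations in a CSS string (a coarse proxy for animation usage).
function countAnimationDeclarations(css) {
  const animations = (css.match(/\banimation(?:-name)?\s*:/gi) || []).length;
  const transitions = (css.match(/\btransition(?:-property)?\s*:/gi) || []).length;
  return { animations, transitions, total: animations + transitions };
}
```
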
  39. UX17-3: Include a Mechanism To Pause or Stop Page Animation

    Applicability

    This technique is Advisory to meet Success Criteria within 2.17 Take a More Sustainable Approach to Animation.

    Description

    Provide a mechanism to bypass animated effects that may be included within the page of a website or application. The controls provided must include a stop / opt-out button as a mandatory element, and may additionally include options to pause and restart playback. These buttons must be visible at page load and before animation starts, giving the visitor time to action them before effects begin.

    It is preferable that the buttons remain visible when scrolling the page; however, if the animation is restricted to a certain part of a page then the buttons can also be restricted to that region and still be classed as passing the criteria. There should be one universal set of buttons for all animation rather than individual options for every effect (except media such as video).

    Examples

    1. On the 12 Days of Web advent calendar for web developers, there is a toggle that can be used to start and stop an animated snow effect. This is particularly great as the animation is opt-in.

    Tests

    Procedure

    1. Check if the animation has a mechanism to pause or stop animation or effects.
    2. Check that the buttons load and give a grace period to the animation taking place.

    Expected Results

    1. All checks above are true.
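    One lightweight way to implement a universal control is to drive every CSS animation's play state from a single custom property toggled by one button. A sketch assuming a hypothetical #animation-toggle button, and that the site's CSS declares animation-play-state: var(--anim-state, running) on animated elements.

```javascript
// Pure helper: compute the next play state (kept separate so it is testable).
function nextAnimationState(current) {
  return current === 'running' ? 'paused' : 'running';
}

// Browser wiring (guarded for non-DOM environments); assumes a
// #animation-toggle button exists and is visible before animation starts.
if (typeof document !== 'undefined') {
  const toggle = document.querySelector('#animation-toggle');
  toggle.addEventListener('click', () => {
    const root = document.documentElement;
    const current = root.style.getPropertyValue('--anim-state').trim() || 'running';
    root.style.setProperty('--anim-state', nextAnimationState(current));
  });
}
```

    Because one property controls all animations, this satisfies the preference for a single universal set of controls rather than per-effect options.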
  40. UX18-1: Provide Accessible Mechanisms Around Symbols in System Fonts

    Applicability

    This technique is Advisory to meet Success Criteria within 2.18 Take a More Sustainable Approach to Typefaces.

    Description

    Avoid rendering issues surrounding emojis and symbol typefaces that vary with the operating system an individual may be using or the localization of their device. Such variance can cause accessibility issues that reduce readability, as well as visual glitches that can potentially have an impact on system hardware.

    For machine testability, flagging symbol fonts (and potentially emojis where operating system compatibility differences may be an issue) will increase readability once resolutions are put in place. One consideration may be to avoid using symbol fonts without an explicit justification, and to only use emojis that enjoy widespread operating system compatibility.

    Examples

    1. Because there are some issues surrounding Emoji use and typography, a guide has been put together showcasing examples of the issues that can prevent visitors from being able to make sense of the content.

    Tests

    Procedure

    1. Check that symbol fonts are replaced with accessible emojis.
    2. Check that Emojis are implemented with compatibility in mind.

    Expected Results

    1. All checks above are true.
  41. UX18-2: Use Modern, Efficient Custom Typefaces Within a Set Limit

    Applicability

    This technique is Advisory to meet Success Criteria within 2.18 Take a More Sustainable Approach to Typefaces.

    Description

    Optimize a typeface to ensure that when custom fonts are provided, they are implemented as sustainably as possible, requiring the fewest resources possible to download and render. By placing restrictions on the number of custom typefaces and using a highly optimized format (ideally subsetted), you will have a highly compressed file.

    Multiple variables will contribute to the size of a typeface; as such, it is difficult to simply specify a hard number to aim for. But if (for example) you aren't using international characters and you can subset your font, eliminating such waste from the font's character table could reduce the file size considerably (reducing the system resource load upon rendering).

    Examples

    1. Color fonts experiment with not just custom typefaces, but with fonts that have multicolor effects built into them. This expands their file size (and is one of many factors which can impact the rendering load).

    Tests

    Procedure

    1. Check that the number of custom typefaces does not exceed five.
    2. Check that fonts are preloaded within the head of the HTML document.
    3. Check that no obsolete font formats are listed (such as EOT, TTF, SVG).
    4. Check that fonts are only provided using WOFF2 or WOFF as a fallback.
    5. Check that if a custom font is variable-enabled (variable fonts are preferred), other references (bold, italic, etc.) are removed.

    Expected Results

    1. All checks above are true.
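    Steps 3 and 4 of the procedure can be sketched as a scan of the CSS for @font-face sources. A heuristic over raw CSS text; real tooling would also verify the files actually served.

```javascript
// Flag font URLs in a CSS string that use obsolete formats (EOT, TTF, SVG).
function findObsoleteFontSources(css) {
  const urls = css.match(/url\(\s*["']?[^"')]+["']?\s*\)/gi) || [];
  return urls.filter((u) => /\.(eot|ttf|svg)\b/i.test(u));
}
```
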
  42. UX19-1: Ensure That Every PDF or Other Non-Web Document Has an HTML Alternative

    Applicability

    This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.

    Description

    Expand the compatibility of non-native Web documents and reduce the reliance on proprietary formats, which can become an issue for visitors who may not have access to the software required to view the documents in question (whether due to cost, the time expense of installing additional software, or the format no longer being maintained).

    The alternative format provided should be in HTML as this can be marked up to be Web accessible (and as an open format it can be maintained to be sustainable to meet PPP targets). The choice of which format to use should be clearly identified both using text and (if possible) using iconography, and if a default format needs to be set, having the open format is best.

    Examples

    1. The book Eloquent JavaScript has been written primarily in HTML but also has offline compiled PDF, ePub, and MOBI versions available for individuals who would prefer to make use of these eReader-safe formats.

    Tests

    Procedure

    1. Check that all non-native Web documents have an HTML alternative available.
    2. Check that available formats are clearly identified with text and / or icons.
    3. Check that any generated HTML alternative meets the WSGs.

    Expected Results

    1. All checks above are true.
  43. UX19-2: Provide a Suitable Font Stack When Custom Typefaces Are Used

    Applicability

    This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.

    Description

    Ensure cross-platform compatibility when custom fonts are utilized within a product or service. This should be done by providing at least three fallbacks across a variety of different desktop operating systems (Windows, Mac, and Linux), mobile platforms (Android and iOS), and finally offering a generic font family as a last resort to fall back upon.

    Machine testing for suitable typefaces will involve a list of Websafe fonts for each platform to ensure a high probability of compatibility. Statistics about the usage of each platform can be gathered from analytics packages and used to determine which operating systems require support; from there, using a list of pre-installed fonts regarded as Websafe will help you define a sustainable stack.

    Examples

    1. This guide provides a solid overview of Websafe typefaces across macOS, this guide provides coverage of Microsoft Windows and includes typefaces within Microsoft Office, and this guide showcases typefaces across all of Apple's platforms.
    2. CSS Font Stack gives a list (with statistics) of Windows and Mac typefaces including their availability. It's particularly useful as it groups them based on the relative font family the typeface relates to.

    Tests

    Procedure

    1. Check all custom fonts have a generic font-family.
    2. Check all custom fonts have at least three listed Websafe fallback typefaces.
    3. Check that all Websafe fallback fonts cover Windows, Mac, Linux, and Mobile.

    Expected Results

    1. All checks above are true.
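    The three checks can be sketched as a parse of a font-family declaration value. A simplified heuristic: it counts fallbacks and verifies the stack ends in a generic family, but cross-platform coverage (step 3) still requires a per-platform Websafe font list.

```javascript
const GENERIC_FAMILIES = new Set([
  'serif', 'sans-serif', 'monospace', 'cursive', 'fantasy', 'system-ui',
]);

// Inspect a font-family value such as: "MyFont", Helvetica, Arial, "Segoe UI", sans-serif
function checkFontStack(value) {
  const fonts = value.split(',').map((f) => f.trim().replace(/^["']|["']$/g, ''));
  const endsInGeneric = GENERIC_FAMILIES.has(fonts[fonts.length - 1].toLowerCase());
  // Fallbacks = everything between the custom font and the generic family.
  const fallbackCount = Math.max(0, fonts.length - 2);
  return { endsInGeneric, fallbackCount, passes: endsInGeneric && fallbackCount >= 3 };
}
```
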
  44. UX19-3: Provide Alternative Text or Figure Captions for Images

    Applicability

    This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.

    Description

    Ensure that alternative text is available for descriptive images that are non-decorative and are important to the content of the website or application. Within HTML this can be provided either through the use of the alternative text attribute or through the use of figure captions (which are associated with an image providing added context).

    Machine testability should flag any IMG element that has no alternative text unless it exists within a figure element that contains a figcaption element. There may be cases where decorative images do not require alternative text; however, these (arguably) should be identified as such, with assistive technologies notified that they can be skipped over rather than simply ignored.

    Examples

    1. MDN provides documentation and an example of how figcaption can be used to give contextual meaning to an image. While the example only uses an IMG element, you could embed a picture element for the same effect.
    2. Web Development magazine A List Apart provides alternative text on its images to ensure that those using assistive technologies can read the content without missing out on the context that they can provide.

    Tests

    Procedure

    1. Check that all IMG elements contain ALT attribute text unless within a FIGURE element with FIGCAPTION.
    2. Check that SVG elements contain a TITLE and optionally a DESC element.
    3. Check that CANVAS elements contain alternative textual content.
    4. Check that all AREA elements (for MAP elements) contain ALT attribute text.

    Expected Results

    1. All checks above are true.
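    The first procedure step can be approximated by stripping captioned figure blocks and then flagging the remaining images without alternative text. A rough regex sketch over markup strings; production tooling should use a real HTML parser.

```javascript
// Flag <img> tags lacking an alt attribute, ignoring images inside
// <figure> blocks that carry a <figcaption>.
function findImagesMissingAlt(html) {
  const withoutCaptionedFigures = html.replace(
    /<figure\b[\s\S]*?<\/figure>/gi,
    (block) => (/<figcaption\b/i.test(block) ? '' : block),
  );
  const imgs = withoutCaptionedFigures.match(/<img\b[^>]*>/gi) || [];
  return imgs.filter((tag) => !/\balt\s*=/i.test(tag));
}
```
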
  45. UX19-4: Provide a Text Transcription of Media Files in an Open HTML Format

    Applicability

    This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.

    Description

    Offer a transcript of the content of an audio file or video. This is especially useful within podcasts or shows that follow the linear progress of events that can be organized into chapters. How content is chosen to be written up is at the discretion of the author but to pass the criteria it must be accessible, understandable, and content complete.

    As with all generated external HTML documents, for machine testability (this will include other examples such as UX51), the generated content must itself pass the WSG guidelines in being sustainable (meeting the Success Criteria as an HTML document that generates emissions). The document can be tested against the techniques laid down in this reference.

    Examples

    1. The 99% Invisible podcast provides text transcripts of its episodes using a paper icon to indicate availability. The speaker is clearly identified within each episode and credits are also provided alongside.

    Tests

    Procedure

    1. Check for an HTML transcript either below the media file (labeled transcript) or linked to the media.
    2. Check any generated HTML transcript meets the WSGs.

    Expected Results

    1. All checks above are true.
  46. UX19-5: Include WebVTT Closed Captions, Subtitles, and Sign Language Support

    Applicability

    This technique is Advisory to meet Success Criteria within 2.19 Provide Suitable Alternatives to Web Assets.

    Description

    Enhance existing media that has been provided to ensure maximum legibility and comprehension. In doing so, the viewer is less likely to need to rewind content to catch dialog they had difficulty understanding (which would otherwise cause bursts of CPU and GPU activity during this media manipulation).

    Layering additional context upon a video may initially have an additional outlay in terms of emissions (due to the loading of additional files or rendering extra content upon the screen) but because of its social and societal benefits, it meets PPP targets and as such should be prioritized. Testers should therefore seek to identify such features and flag non-availability as an issue.

    Examples

    1. Meryl Evans has a great example with some basic code (and links to further materials) on using the WebVTT format to provide essential captioning for your videos. It's covered in more depth on MDN.
    2. Paying for a sign language interpreter for your videos can offer added value to your content. It can be easier to understand than written content, reduces accessibility issues, and increases satisfaction levels.

    Tests

    Procedure

    1. Check for iconography to identify sign language or accessible media aids.
    2. Check for the availability of a WebVTT file with video files.

    Expected Results

    1. All checks above are true.
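    Step 2 can be approximated by verifying that each video element contains a captions or subtitles track. A minimal markup-scan sketch; sign-language availability (step 1) remains a manual or iconography-based check.

```javascript
// Flag <video> blocks that have no <track> of kind captions or subtitles.
function findVideosWithoutCaptions(html) {
  const videos = html.match(/<video\b[\s\S]*?<\/video>/gi) || [];
  return videos.filter(
    (v) => !/<track\b[^>]*kind\s*=\s*["']?(?:captions|subtitles)/i.test(v),
  );
}
```
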
  47. UX20-1: Detect Form Requirements and Identify Data Collection Policies

    Applicability

    This technique is Advisory to meet Success Criteria within 2.20 Provide Accessible, Usable, Minimal Web Forms.

    Description

    Reduce the amount of interference that occurs when information must be entered into a form control within a user interface. Identify how many elements must be filled out (eliminating those which are not mandatory), clearly label how such components need to be filled out (and how many steps there are), and link to data collection policies.

    Such actions can lead to less unnecessary data transmission, which is not only beneficial in terms of privacy and security (the people and societal component of PPP); having fewer form elements or steps will also reduce emissions, due to fewer individual elements being rendered to the screen (and the visitor spending less time on-screen filling the forms in).

    Examples

    1. Domain Registrar iWantMyName takes form filling to the most minimal extent possible on their contact page, only providing three essential fields to deal with requests, each of which is clearly labeled.
    2. The registration page for BBC iPlayer is a multi-step process that is well designed and labeled to avoid overwhelming new users and it only requests the minimum amount of data necessary to set up an account.
    3. This medical diagnosis application decides what information needs to be filled in next based upon the previous submission, so that only relevant questions are asked (leading to less redundancy in form filling).

    Tests

    Procedure

    1. Check how many steps there are in an input process and if this can be reduced.
    2. Check that all form components are clearly labeled and disclose their purpose.
    3. Check that only necessary form elements are displayed onscreen.
    4. Check that a publicly displayed data collection policy link is visible (this may be a section of a privacy policy).

    Expected Results

    1. All checks above are true.
  48. UX20-2: Allow For Data Entry With Automatic Assistance Settings

    Applicability

    This technique is Advisory to meet Success Criteria within 2.20 Provide Accessible, Usable, Minimal Web Forms.

    Description

    Provide a mechanism within forms during necessary data entry to allow automated tooling to provide assistance. Such assisted data entry will cause visitors to spend less time onscreen and thus generate fewer emissions, in addition to less frustration from failed attempts that would otherwise escalate those emissions (while also conserving bandwidth).

    For machine testability, the autocomplete attribute should be disabled on all form inputs outside of those likely to be utilized / assisted by a password manager (this is especially true where the suggestions do not come from the visitor's device but from a third party, where additional bandwidth or rendering processes are likely to be incurred). A list of such inputs can be derived from a common password manager and marked against each form for correctness.

    Examples

    1. Login forms such as Grammarly's can be detected by password manager software and therefore benefit from autocompletion (as visitors cannot be expected to remember personal details across all websites).
    2. Filling in an application form should allow autocomplete for elements of a profile that may be contained within a password manager (such as name, email, and phone), but should be disabled in all other formats to prevent the text element from attempting to anticipate the visitor's responses.
    3. The HTML Hell advent calendar has a great article on providing autocomplete for the sake of third-party tooling access such as password managers. It includes a range of examples and links to additionally useful articles that might guide decision-making.

    Tests

    Procedure

    1. Check that autocomplete is enabled in all username and password fields.
    2. Check that autocomplete is enabled in all identity (e.g. address, email) fields.
    3. Check that autocomplete is disabled in all other input fields.

    Expected Results

    1. All checks above are true.
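    The three checks can be sketched as an allowlist comparison over each field's autocomplete token. The allowlist below is an illustrative assumption drawn from tokens commonly assisted by password managers, not a normative list.

```javascript
// Tokens a password manager typically assists with (illustrative, not exhaustive).
const ASSISTED_TOKENS = new Set([
  'username', 'current-password', 'new-password',
  'name', 'email', 'tel', 'street-address', 'postal-code',
]);

// A field passes when assisted tokens are enabled and everything else
// is explicitly set to 'off'.
function autocompleteCompliant(token) {
  if (token === undefined || token === '') return false; // no explicit setting
  return ASSISTED_TOKENS.has(token) || token === 'off';
}
```
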
  49. UX21-1: Provide Compatibility for Non-Visual Browsing Methods

    Applicability

    This technique is Advisory to meet Success Criteria within 2.21 Support Non-Graphic Ways To Interact With Content.

    Description

    Provide a mechanism for those not using a screen as a primary method of browsing to have an equal browsing experience. This is primarily because screens are a significant consumer of energy (and thus a source of emissions), but also because, with the increase in speech-powered devices, the need for support mechanisms has grown in recent years.

    Such support can be identified with good semantics and high-quality content as a foundation; however, for machine testability, thresholds such as having a speech stylesheet in place (which can help with issues around pronunciation and tone) and testing projects within text browsers for fundamental mechanical issues are critical to establishing a pass.

    Examples

    1. The Lynx text browser is one of the oldest maintained browsers on the Internet and it can be particularly useful in identifying issues around keyboard navigation and non-visual browsing when features are unavailable.
    2. This article from Smashing Magazine describes in detail how websites and applications can make their work easier for non-visual devices such as speech environments (like cars and smart speakers) to access.

    Tests

    Procedure

    1. Check for a speech stylesheet for indicators to clarify pronunciation issues.
    2. Check that your website or application functions correctly in a text-only environment.

    Expected Results

    1. All checks above are true.
  50. UX22-3: Provide Error Detection When User Input Is Required

    Applicability

    This technique is Advisory to meet Success Criteria within 2.22 Provide Useful Notifications To Improve The Visitor's Journey.

    Description

    Provide a mechanism to notify the visitor if an issue has occurred during the form-filling process, so that it may be remedied either before or after submission and resolved before the data reaches its intended location. Providing error detection and assistance can reduce screen time spent searching for answers.

    Machine testability for error detection would require submitting dummy data (incorrectly) into a form's required elements to test the durability of the process and see whether instructional recommendations are given. Prompts that either correct or guide the visitor can be deemed to pass the Success Criteria; if no guidance is offered, or if forms are incorrectly labeled and lead to errors, this would qualify as a failure.
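
    As an illustration, client-side error detection can be reduced to a pure validation function that returns corrective messages. The field names ("email", "age") and rules below are hypothetical, and a real form would repeat these checks on the server for post-submission safety:

    ```javascript
    // Minimal sketch of client-side error detection; the field names and
    // rules are hypothetical. A real form would read values from the DOM
    // and mirror these checks server-side.
    function validateForm(fields) {
      const errors = [];
      if (!fields.email || !/^[^@\s]+@[^@\s]+$/.test(fields.email)) {
        errors.push({ field: "email", message: "Enter a valid email address, e.g. name@example.com." });
      }
      if (!/^\d+$/.test(String(fields.age))) {
        errors.push({ field: "age", message: "Enter your age as a whole number." });
      }
      return errors; // an empty array means the form may be submitted
    }
    ```

    Each message both identifies the failing field and guides the visitor toward a fix, which is what distinguishes a pass from a failure under this technique.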

    Examples

    1. This example of client-side form validation based on WCAG Technique SCR32 showcases a classic example of error handling and how providing a response to the user input when failure occurs is important.
    2. UX Writing Hub provides some great examples of how errors in content, forms, and other parts of a page or application can be successfully written to ensure visitors do not feel undermined and can continue to navigate successfully.

    Tests

    Procedure

    1. Check that HTML form components are correctly labeled (for example, required elements, and content types).
    2. Check that JavaScript error handling is provided for the client-side (pre-submission).
    3. Check that server-side error handling is provided for post-submission (no JavaScript) issues.
    4. Check that all prompts occur in a suitable location, relative to the error occurring.

    Expected Results

    1. All checks above are true.
  51. UX23-1: Add a Print-Friendly Media Query or Stylesheet

    Applicability

    This technique is Advisory to meet Success Criteria within 2.23 Reduce the Impact of Downloadable or Physical Documents.

    Description

    Reduce the excessive waste of paper, ink, and the mineral resources behind them produced when visitors of your products and services physically print documents. By creating a customized print stylesheet you can optimize the use of these resources and improve the readability of the layout if the document is exported into a static format such as PDF.

    This technique is most useful when it addresses the many issues physical formats suffer compared with their digital counterparts (such as the loss of interactivity), the necessity of color usage or content, and breaks in flow and pages. Machine testability can examine the stylesheet for such feature handling (link expansion, paper size support, CSS resets, etc.) as compliance with the Success Criteria.
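
    A minimal sketch of such a stylesheet is shown below; the selectors (nav, .sidebar) are assumptions and should be adapted to your own markup:

    ```css
    /* Illustrative print stylesheet; the selectors are assumptions. */
    @media print {
      /* Hide structural material that wastes ink and paper */
      nav, .sidebar, footer, video, audio { display: none; }

      /* Expand link URLs so they remain usable on paper */
      a[href^="http"]::after { content: " (" attr(href) ")"; }

      /* Favor readable, monochrome-friendly output */
      body { color: #000; background: #fff; font-size: 12pt; }

      /* Control paper size and orientation, and avoid awkward breaks */
      @page { size: A4 portrait; margin: 2cm; }
      h1, h2, h3 { break-after: avoid; }
    }
    ```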

    Examples

    1. The Printed CSS Framework contains a multitude of features intended to reduce the impact of printed media, with additional support for handling a range of special developer set cases that can be defined in HTML.

    Tests

    Procedure

    1. Check that the website or application has a print media query (@media print) or a print stylesheet.
    2. Check that the print CSS makes attribute content and URLs visible.
    3. Check that the print CSS considers content color accuracy and monochrome options.
    4. Check that the print CSS considers paper orientation, color, and size.
    5. Check that the print CSS hides unnecessary structural material.

    Expected Results

    1. All checks above are true.
  52. UX23-2: Optimizing All Document Assets Across a Variety of Different Formats

    Applicability

    This technique is Advisory to meet Success Criteria within 2.23 Reduce the Impact of Downloadable or Physical Documents.

    Description

    Ensure that non-web documents are not only served in a range of compatible (suitable) formats for the requesting device but also compressed using a suitable algorithm, so that the asset reaches the visitor with the lowest bandwidth requirements possible, thereby improving multiple PPP variables and increasing performance.

    Compression tests can be run against every document format to identify whether improvements can be made without losing so much quality that visibility degrades and recognition becomes an issue (in embedded images and media within). The primary format given to visitors should be the one with the widest compatibility for plugin-free viewing within the browser (usually PDF).
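
    Such a compression test can be approximated in a few lines. This sketch assumes assets are available as Buffers, and uses gzip as a stand-in for whichever algorithm (e.g. Brotli) the server would actually apply:

    ```javascript
    // Sketch of a lossless compression check for document assets; gzip is a
    // stand-in for whichever algorithm the server would actually use.
    const zlib = require("zlib");

    function compressionSavings(buffer) {
      const compressed = zlib.gzipSync(buffer);
      return {
        original: buffer.length,
        compressed: compressed.length,
        // flag assets where compression would save more than 10%
        worthServing: compressed.length / buffer.length < 0.9,
      };
    }
    ```

    Lossy formats (embedded images and media) need the visual-quality judgment described above instead, since byte savings alone can degrade recognition.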

    Examples

    1. A legal document that an individual will need to read through before potentially taking part in an offline process (that is partially handled online) may require such a process. Avoiding the necessity to download and install additional software wherever possible will reduce emissions.

    Tests

    Procedure

    1. Check that the primary document format is the most browser-compatible.
    2. Check that more than one format is given if a non-Web format is offered for compatibility.
    3. Check for compression potential against every available document asset.
    4. Provide optimized solutions if document problems have been detected.

    Expected Results

    1. All checks above are true.
  53. UX23-3: Provide Pre-Generated Copies of Repeat-Use Physical Assets

    Applicability

    This technique is Advisory to meet Success Criteria within 2.23 Reduce the Impact of Downloadable or Physical Documents.

    Description

    Ensure that any document that is likely to be requested by visitors on multiple occasions is already compiled and generated at the source and available for download rather than generating a brand new asset upon each user request. This will reduce the emissions of having to repeat the same task over and over for unchanging content or materials that have little interaction potential.

    For machine testability, identifying that large web assets are available from a static address such as a CDN is one method of meeting the Success Criteria; another would be to scan through scripts for generative code that creates web assets, to determine whether such content could be better served through precompilation. If this is the case, warnings and remedial action should be issued.
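
    The contrast between per-request generation and a pre-generated copy can be sketched as follows; buildReport is a hypothetical stand-in for an expensive document-compilation step such as PDF rendering:

    ```javascript
    // buildReport is a hypothetical, expensive document builder (a stand-in
    // for PDF generation or similar compilation work).
    function buildReport(data) {
      return `Report: ${data.title} (${data.rows.length} rows)`;
    }

    // Pre-generate once, at build or deploy time...
    const prebuilt = new Map();
    function publish(id, data) {
      prebuilt.set(id, buildReport(data));
    }

    // ...so each visitor request is a cheap lookup rather than a rebuild.
    function handleRequest(id) {
      return prebuilt.get(id);
    }
    ```

    In practice the pre-built output would live at a static address (for example on a CDN) rather than in memory.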

    Examples

    1. GoalKicker provides a range of free Notes on various programming languages, each of which is precompiled to PDF format and therefore not generated on the server-side. This optimizes their delivery to the visitor.

    Tests

    Procedure

    1. Check that existing web assets are pre-generated for repeat-use.
    2. Check code for asset compilation to identify if pre-generation would be better.

    Expected Results

    1. All checks above are true.
  54. UX23-4: Provide Document Details Including a URL Within a Page

    Applicability

    This technique is Advisory to meet Success Criteria within 2.23 Reduce the Impact of Downloadable or Physical Documents.

    Description

    Provide a mechanism for individuals to access downloadable or embeddable documents without the material having been embedded within the page. Embedded content has an attributable carbon cost due to the rendering of the host application's processes as well as the content itself, and there is a chance visitors clicked a link in error.

    Providing structural information about resources, including direct links to files in preference to auto-loading content, keeps the visitor in control and avoids the loading and rendering of unnecessary resources. Machine testability of such components can test for embedded elements to avoid, preferring clearly marked-up document descriptions and direct URL links that are user-enacted.
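
    In markup terms, a described, user-enacted link replaces the auto-loading embed; the file name, size, and summary below are illustrative:

    ```html
    <!-- A described document link in place of an auto-loading embed;
         the file name, size, and summary text are illustrative. -->
    <p>
      <a href="/docs/annual-report.pdf" hreflang="en" type="application/pdf">
        Annual report (PDF, English, 1.2 MB)
      </a>
      Summary: key figures for the year in a printable format.
    </p>
    ```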

    Examples

    1. Smashing Magazine provides a copy of their front-end performance checklist in a multitude of formats, and this is described with direct links at the top of their open HTML version of the same document.

    Tests

    Procedure

    1. Check that document pages do not embed large files (internally or with third-parties).
    2. Check that document links are clearly described with information about name, format, language, size, and potentially a summary.
    3. Check for choice options with format and language variables in documents.

    Expected Results

    1. All checks above are true.
  55. UX24-1: Identify a Stakeholder-Focused Testing and Prototyping Policy

    Applicability

    This technique is Advisory to meet Success Criteria within 2.24 Create a Stakeholder-Focused Testing & Prototyping Policy.

    Description

    Identify the way new features, product ideas, and user-interface components will be tested among various stakeholder groups, such as those with accessibility issues. The main location for such an outline will be a stakeholder-focused testing & prototyping policy, and as such, once implemented publicly, a link to the policy should be available.

    Machine detection could use heuristics to identify key policies within the text; however, at a basic level, being able to identify that the policy exists (potentially within the footer of a website or application, alongside other policies and legal documents), followed by further analysis of the headline elements of key sections of the document, should be enough to justify passing the criteria.

    Examples

    1. The UK Government has a publicly visible Usability Testing (User Research) guide within their service manual which outlines the processes they aim to follow when conducting such activities for their service.
    2. The University of St Andrews has a publicly facing Usability Testing Policy that applies to all staff who undertake usability testing at the university. It's well broken down and explanatory.

    Tests

    Procedure

    1. Check if a Testing policy is both published and publicly visible.
    2. Check that key policy factors are presented within the text (optional).

    Expected Results

    1. All checks above are true.
  56. UX25-1: Produce a List of Checkpoints To Ensure Quality Control

    Applicability

    This technique is Advisory to meet Success Criteria within 2.25 Conduct Regular Audits, Regression, and Non-Regression Tests.

    Description

    Produce a comprehensive, maintainable set of quality assurance checkpoints across a range of categories including, but not limited to, bugs, security, web performance, accessibility, and sustainability. With this list in place, everyone involved in the creation process can closely monitor the application or website against each checkpoint to identify resolutions before, during, or after a product or service is launched.

    This list should be created during ideation if possible but can also be machine-generated from a pre-determined set of lists relating to these topics based upon evidence and research. The content of this material can potentially be further tested through automation however at a bare minimum, this list must be utilized in-house to enact sustainable change. For machine testability, if this list is not publicly visible, internal access will be required to determine creation.

    Examples

    1. Lighthouse is an open source project which enables both the auditing and production of reports of websites and applications for several common issues across a range of categories from within a web browser.

    Tests

    Procedure

    1. Check to see if deliverables require internal access (if so, permission is given).
    2. Check that the testing regime is available (internally or externally).
    3. Identify all bug and security tests to perform during the testing process.
    4. Identify all web performance tests to perform during the testing process.
    5. Identify all web accessibility tests to perform during the testing process.
    6. Identify all web sustainability tests to perform during the testing process.
    7. Report and resolve the findings within your auditing process.

    Expected Results

    1. All checks above are true.
  57. UX25-2: Provide Active Monitoring for Issues on a Frequent Schedule

    Applicability

    This technique is Advisory to meet Success Criteria within 2.25 Conduct Regular Audits, Regression, and Non-Regression Tests.

    Description

    Ensure that problems relating to the sustainability of a website can be picked up with greater frequency and regularity. Within the scope of website and application testing, it is important to actively monitor for common failure points that build over time and, upon being notified of such issues, to provide resolutions within a reasonable timeframe.

    Machine testability for this implementation will be based on the mechanism used for testing. For instance, if a product or service provider chooses to simply run routine tests on a scheduled basis, this may be considered a pass as long as the time between scans is frequent enough to be considered active as opposed to occasional (weekly would be the widest margin).
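
    As a sketch, the active-versus-occasional judgment can be automated over logged scan times; the weekly margin follows the description above, and the millisecond-timestamp format is an assumption:

    ```javascript
    // Sketch: decide whether logged scans (millisecond timestamps) qualify
    // as "active" monitoring, treating weekly as the widest margin.
    const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

    function isActiveMonitoring(scanTimestamps) {
      const sorted = [...scanTimestamps].sort((a, b) => a - b);
      if (sorted.length < 2) return false; // a single scan is not a schedule
      for (let i = 1; i < sorted.length; i++) {
        if (sorted[i] - sorted[i - 1] > WEEK_MS) return false;
      }
      return true;
    }
    ```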

    Examples

    1. Speedcurve is a commercial product that actively monitors websites for performance issues (broken down by category in rendering) and can provide advisory guidance on meeting targets for performance budgets.
    2. Google Search Console is a free product that monitors websites for issues that may affect their ability to be indexed within the Google search engine, which notably can impact their overall global page visibility.

    Tests

    Procedure

    1. Check that testing occurs hourly / daily / weekly to be considered active.
    2. Check that all publicly visible pages are included within the monitoring.
    3. Check that the facility providing the monitoring meets the WSGs.
    4. Check that a report of ongoing listed issues is publicly available until fully resolved.
    5. Provide resolutions for detected issues within a timely manner.

    Expected Results

    1. All checks above are true.
  58. UX25-3: Scan for Introduced Issues Upon the Completion of a New Release

    Applicability

    This technique is Advisory to meet Success Criteria within 2.25 Conduct Regular Audits, Regression, and Non-Regression Tests.

    Description

    Provide a mechanism for eliminating any potential new flaws that may have been introduced into a website or application upon the release of a new version, as additional updates or features could break functionality if not implemented correctly. As such, enforcing a sustainability scan across a series of variables is critical.

    For machine testability, a scan could be triggered automatically on the publication of each new release, or, for more nuanced control (where active monitoring already exists and non-breaking features are being introduced), only when a major release is issued and breaking changes are more likely to occur.

    Examples

    1. Accessibility scanners can use the machine-testable elements of WCAG to help identify issues that can be resolved. While they aren't a silver bullet and cannot replace manual audits they can identify some issues.

    Tests

    Procedure

    1. Check that all publicly visible pages are included within the monitoring.
    2. Check for a wide variety of issues within the scope of the WSGs.
    3. Check that a report of ongoing listed issues is publicly available until fully resolved.
    4. Provide resolutions for detected issues within a timely manner.

    Expected Results

    1. All checks above are true.
  59. UX26-1: Establish Performance Testing Routines With Each New Release

    Applicability

    This technique is Advisory to meet Success Criteria within 2.26 Incorporate Performance Testing Into Each Major Release-Cycle.

    Description

    Encourage a routine to improve web performance within websites and applications due to the established link between web performance optimization and digital sustainability. Running regular benchmarks and working from checklists (either prefabricated or built from scratch) will encourage a schedule to identify potential bottlenecks.

    The content of these testing routines can potentially be further tested through automation however at a bare minimum, this list must be utilized in-house to enact sustainable change. For machine testability, if this list is not publicly visible, internal access will be required to determine whether creation has taken place.

    Examples

    1. Smashing Magazine has a comprehensive front-end performance checklist that could be utilized as part of a routine release-schedule check for Web performance issues that may exist or have been introduced.
    2. Back in 2018, Addy Osmani created an 18-point Web performance checklist that could be a very useful starting point if you're looking for something more simplistic and rapid to compare builds against.

    Tests

    Procedure

    1. Check to see if deliverables require internal access (if so, permission is given).
    2. Check that the testing regime is available (internally or externally).
    3. Identify all web performance tests to perform during the testing process.
    4. Report and resolve the findings within your auditing process.

    Expected Results

    1. All checks above are true.
  60. UX26-2: Provide Compliance Checks With Each Release for Relevant Legislation

    Applicability

    This technique is Advisory to meet Success Criteria within 2.26 Incorporate Performance Testing Into Each Major Release-Cycle.

    Description

    Provide a mechanism during the development process where businesses and individuals creating Web projects can verify that new work meets compliance targets for individual regulations. This technique is most useful when performed with each major milestone as there is a potential for feature (and compliance) breaking material to occur.

    Within the scope of sustainability, this is increasingly important as there are explicit laws surrounding the subject in addition to expanded PPP remits such as accessibility, privacy, etc. In terms of machine testability the mechanics of an implementation may be more difficult than utilizing an automated checker, but wizard software can work through questions to help identify key issues.

    Examples

    1. Internationally reaching environmental legislation such as CSRD and the Green Claims Directive provide stricter enforcement around product descriptions and the need to reduce emissions and accurately report such efforts.
    2. Accessibility legislation (of which there are many worldwide) provides strict enforcement around ensuring inclusivity not just offline but also in the digital sphere as well, leaving non-compliance open to lawsuit risks.
    3. Internationally reaching privacy legislation such as GDPR can also impact digital services on a sustainability level (through the people and societal aspect of PPP), and various checklists can assist.

    Tests

    Procedure

    1. Check to see if deliverables require internal access (if so, permission is given).
    2. Check that the testing regime is available (internally or externally).
    3. Identify all relevant compliance checks to perform during the testing process.
    4. Report and resolve the findings within your auditing process.

    Expected Results

    1. All checks above are true.
  61. UX29-1: Provide a Publicly Visible Compatibility Policy or Statement

    Applicability

    This technique is Advisory to meet Success Criteria within 2.29 Incorporate Compatibility Testing Into Each Release-Cycle.

    Description

    Ensure that visitors are made aware of (or able to find out) the limitations of a product or service before issuing a support request that will involve creating new emission sources. This compatibility policy can be listed amongst other policies on a website, however, it must contain detailed information about any factor that may have reduced capability.

    For machine testability, attempt to first establish that the policy exists and then try using heuristics (or identification of the headlines) to locate sections on what is both actively supported (those conditions tested upon), and those that are confirmed as unsupported (those conditions known to be broken but will not be fixed with reasons given). Ensure that a support method is also provided.

    Examples

    1. British cellphone provider GiffGaff provides a web browser support policy that identifies which browsers they do and do not actively support, and they also offer the justification behind their decisions.

    Tests

    Procedure

    1. Check for a compatibility policy or statement's existence.
    2. Check for sections on included and excluded coverage.
    3. Check that a support method is offered for issues outside of scope.

    Expected Results

    1. All checks above are true.
  62. UX29-2: Provide Clarification When Updates Are Significant Over Minor Releases

    Applicability

    This technique is Advisory to meet Success Criteria within 2.29 Incorporate Compatibility Testing Into Each Release-Cycle.

    Description

    Provide a mechanism for introducing visitors to new features and capabilities within websites and applications upon a major release being triggered. This would include compatibility changes, any alterations that affect workflow, and bugs that have been resolved. It applies equally to both applications on upgrade paths and website redesigns.

    Re-orientating users around breaking changes is crucial, as modifications could lead to errors in usage, increases in technical support, problematic friction in usability or accessibility, or the introduction of issues impacting multiple PPP factors, all of which could be mitigated through training, answered questions, and well-signposted information architecture.

    Examples

    1. Using Semantic Versioning, providers could easily identify through their release notes (which, if publicly available, could be machine-tested against) a major (breaking, significant) release versus a minor (non-breaking) one.
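
    Under Semantic Versioning the distinction is mechanical, as this sketch shows (it assumes clean MAJOR.MINOR.PATCH strings without pre-release tags):

    ```javascript
    // Sketch: classify a release as major (potentially breaking) by
    // comparing MAJOR components of clean "MAJOR.MINOR.PATCH" strings.
    function isMajorRelease(previous, next) {
      return Number(next.split(".")[0]) > Number(previous.split(".")[0]);
    }
    ```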

    Tests

    Procedure

    1. Check for a publicly available changelog or release notes.
    2. Check that the gap between updates has not exceeded 12 months.
    3. Check that major releases provide details of breaking features.

    Expected Results

    1. All checks above are true.
  63. UX29-3: Establish Compatibility Testing Routines With Each New Release

    Applicability

    This technique is Advisory to meet Success Criteria within 2.29 Incorporate Compatibility Testing Into Each Release-Cycle.

    Description

    Ensure that compatibility issues resulting from visitor constraints (such as operating system or browser age, and connection speed, availability, or cost) are factored into the testing process at each new release. Creators must have processes in place, whether through pre-created checklists or ones crafted from scratch, to ensure that a regime exists.

    In terms of machine testability, some tools can provide virtualized emulations of certain operating systems and thereby load products and services to undertake tests (or screenshots) to examine compatibility. Data also exists regarding mobile data charges and connection speeds which can be used to emulate or identify the compatibility costs associated with projects.

    Examples

    1. BrowserStack and LambdaTest provide virtual testing (which can be automated) across a variety of common web browsers, allowing testers to identify flaws in their website or application's compatibility.
    2. What Does My Site Cost calculates the amount a website will cost an individual around the world in network fees, based on open data, which can be a useful incentive to reduce excessive data use.

    Tests

    Procedure

    1. Check to see if deliverables require internal access (if so, permission is given).
    2. Check that the testing regime is available (internally or externally).
    3. Identify all compatibility tests to perform during the testing process.
    4. Report and resolve the findings within your auditing process.

    Expected Results

    1. All checks above are true.
  64. UX29-5: Provide a PWA if More Suitable Than Its Native Counterpart

    Applicability

    This technique is Advisory to meet Success Criteria within 2.29 Incorporate Compatibility Testing Into Each Release-Cycle.

    Description

    Identify a web application that exists and then determine if turning the web application into a progressive web application would be more beneficial than providing a native offering. This would need to take into account current web capabilities (such as ServiceWorkers), methods of delivery (such as WASM), and the human aspect (audience requirements).

    Machine testability can first examine the existing state of an application to determine how sustainable it is and what technology stack was used to develop it (and how sustainably it already performs for all); from there, data points can be used to identify an approximate cost of implementation for both a native and a highly optimized web application endpoint, and options can be provided.

    Examples

    1. Native applications may be a good choice if applications require specific access to hardware as Web APIs can still be quite restrictive. In addition, there is a case for going native to provide a more OS-centric look and feel.
    2. Web applications are often quicker and easier to deploy and can provide updates without delays in the pipeline (as they don't have to go through store controls), they can also benefit from lower creation costs.

    Tests

    Procedure

    1. Check if the existing application already utilizes a ServiceWorker.
    2. If so, check that the PWA JavaScript and JSON are well formed, and that the icon exists.
    3. If not, calculate the sustainability impact of becoming a PWA.
    4. Check that no native application already exists (meta tag smart banner).
    5. If so, use that; otherwise calculate the sustainability impact of building an application.
    6. If cost-efficient sustainability-wise, implement a PWA and/or a native application.
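
    The manifest well-formedness check above can be sketched as follows; the set of required members used here is an assumption based on common PWA installability baselines, not an exhaustive audit:

    ```javascript
    // Sketch of a basic web app manifest check; the required members chosen
    // (name/short_name, start_url, icons) are a common installability
    // baseline, not an exhaustive audit.
    function checkManifest(manifest) {
      const problems = [];
      if (!manifest.name && !manifest.short_name) problems.push("missing name");
      if (!manifest.start_url) problems.push("missing start_url");
      if (!Array.isArray(manifest.icons) || manifest.icons.length === 0) {
        problems.push("missing icons");
      }
      return problems; // empty means the manifest looks well formed
    }
    ```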

    Expected Results

    1. All checks above are true.

Web Development

Each of the below can be shown or hidden by clicking on the technique you wish to display.

  1. WD01-1: Profile Existing Projects To Identify Common Variables of Value

    Applicability

    This technique is Advisory to meet Success Criteria within 3.1 Identify Relevant Technical Indicators.

    Description

    Identify any technical indicators which may be of use within a sustainability budget, or those that can be immediately identified as exceeding a recommended level for an average page size, thereby indicating that the document should be split into two or more pieces to reduce the impact of the experience upon the visitor.

    Factors to take into account could include an extremely long document that would require excessive scrolling (based on either the visual spacing or the number of DOM elements involved); the number of DOM elements themselves, as too many can produce unwarranted rendering loads; and excessive HTTP requests, which can produce a lot of overhead.

    Examples

    1. Websites that need to display a large number of icons may choose to bundle the vector images into a sprite, which avoids the HTTP request overhead and allows requesting (and repeat usage) through references.

    Tests

    Procedure

    1. Check the visual spacing of elements is not too cramped.
    2. Check the scroll length of the document does not exceed a stated length.
    3. Check the number of DOM nodes in the tree does not exceed a stated number.
    4. Check the number of HTTP requests does not exceed a stated number.
    5. Check if optimizing will be beneficial, if so, implement a solution.
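
    The threshold checks above can be sketched as a simple profile function; the limits used here are illustrative assumptions, not normative WSG values:

    ```javascript
    // Sketch of a page-profile check; the limits are illustrative
    // assumptions, not normative values.
    const LIMITS = { domNodes: 1500, httpRequests: 50, scrollHeightPx: 20000 };

    function profilePage(metrics) {
      return Object.entries(LIMITS)
        .filter(([key, limit]) => metrics[key] > limit)
        .map(([key]) => `${key} exceeds the recommended level`);
    }
    ```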

    Expected Results

    1. All checks above are true.
  2. WD01-2: Calculate the Energy Intensity of Technologies for Existing Projects

    Applicability

    This technique is Advisory to meet Success Criteria within 3.1 Identify Relevant Technical Indicators.

    Description

    Distinguish different technologies and the role they play, identifying which are the most resource-intensive and providing either a mechanism to reduce the intensity of heavy payloads or to load-balance the most demanding aspects of these features. It's important to consider that in terms of rendering, data transfer is not the only consideration.

    In terms of machine testability, calculating the percentage of HTML, CSS, JavaScript, images, media, etc., isn't enough. It's critical to weigh and calculate the energy requirements of each aspect of those languages at an atomic level to identify potential rendering issues in the browser, as these will impact hardware (CPU, GPU, RAM, and other variables) and thereby have PPP implications.
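
    One way to move beyond transfer size alone is a weighted estimate per resource type; the weights below are purely illustrative assumptions, since real energy intensity varies by device, browser, and rendering path:

    ```javascript
    // Sketch of a weighted energy estimate; the weights are illustrative
    // assumptions, as real intensity varies by device and rendering path.
    const RENDER_WEIGHT = { html: 1, css: 2, javascript: 4, image: 1.5, media: 3 };

    function energyScore(resources) {
      // resources: e.g. [{ type: "javascript", kb: 120 }, ...]
      return resources.reduce(
        (total, r) => total + r.kb * (RENDER_WEIGHT[r.type] ?? 1),
        0
      );
    }
    ```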

    Examples

    1. Primary research exists to calculate the energy intensity of front-end specifications and this could guide your ability to identify the energy intensity of components beyond the amount of data transferred.
    2. Co2.js is a JavaScript library that can help developers estimate the relative carbon emissions of their applications, websites, and software using several variables (though it won't account for everything).

    Tests

    Procedure

    1. Check the energy intensity of the HTML elements and attributes used.
    2. Check the energy intensity of CSS at-rules, selectors, pseudo-classes and pseudo-elements, properties, and values.
    3. Check the energy intensity of JavaScript, DOM, CSSOM, and API usage.
    4. Check the energy intensity of image and media usage.

    Expected Results

    1. All checks above are true.
  3. WD02-1: Minify Your Front-End Code if It Is Public Facing

    Applicability

    This technique is Advisory to meet Success Criteria within 3.2 Minify Your HTML, CSS, and JavaScript.

    Description

    Provide a mechanism by which your production-ready source code can have unnecessary data, such as code comments, whitespace, and machine-detectable redundancy, removed to deliver the smallest file payload possible to the visitor. This will improve both the speed of your site and lower screen usage (wait) time.

    In terms of machine testability, the functions of a minification tool can be replicated for HTML, CSS, and JavaScript using scripts, and these can identify improvements to be made. Function names can be uglified (shortened to the smallest value) to further reduce the payload size, and reporting tools can make recommendations based upon the feedback.
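
    A deliberately naive sketch of the whitespace and comment stripping involved is shown below; real minifiers parse the source, since this regex approach would break on edge cases such as comment markers inside string literals:

    ```javascript
    // Deliberately naive minification sketch; real minifiers parse the
    // source, as regexes break on cases like "//" inside string literals.
    function naiveMinifyJs(source) {
      return source
        .replace(/\/\*[\s\S]*?\*\//g, "") // strip block comments
        .replace(/\/\/[^\n]*/g, "")       // strip line comments
        .replace(/\s+/g, " ")             // collapse runs of whitespace
        .trim();
    }
    ```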

    Examples

    1. Projects that are both public-facing but also allow for others to contribute to development should allow for un-minification. Developer tooling often contains features or extensions to restore code to a readable state.
    2. Mithril is an example of a JavaScript framework that uses minification to great effect. It's small to start with but by offering both developer and minified versions on its GitHub repository you get the best of both worlds.

    Tests

    Procedure

    1. Check if your HTML, CSS, and JavaScript can be minified.
    2. Check if JavaScript can gain size reductions through uglify processes.
    3. Check if the code is on a non-production public-facing website.
    4. Check if obfuscation exists, if not minify the source code.

    Expected Results

    1. All checks above are true.
  4. WD03-1: Implement Code-Splitting Where Appropriate To Reduce Payloads

    Applicability

    This technique is Advisory to meet Success Criteria within 3.3 Use Code-Splitting Within Projects.

    Description

    Provide a mechanism where modularization can occur to reduce the overall payload of applications, libraries, and frameworks for the Web. By using code-splitting and modules where isolating code can take place (machine-testability can detect this), large components can be successfully broken down into pieces that will be delivered as required.

    This technique is most useful when dealing with significant-sized or complex pieces of production code that may not need to be delivered in a single volume. If functionality will be used on-demand (for example), the payload to activate and run the functionality could be fetched and run when it is needed instead of the entire application's library being gathered upon page load.
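
    The on-demand pattern can be sketched as follows; in a browser the loader would typically be a dynamic import() of a separately split bundle, simulated here with a plain function:

    ```javascript
    // Sketch of on-demand loading; in a browser, loader would usually be a
    // dynamic import() of a split bundle, simulated here as a plain function.
    function lazy(loader) {
      let cached;
      return () => (cached ??= loader()); // pay the loading cost only once
    }

    // "chart" stands in for a hypothetical heavy feature module.
    const getChart = lazy(() => ({ draw: () => "chart drawn" }));
    ```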

    Examples

    1. MDN provides a detailed guide to JavaScript modules that are contained within many frameworks today to split them into separate component-based libraries to be called only when they are required by the visitor.
    2. Separating CSS into stylesheets based on arguments such as media or preference queries (or in rarer instances using imports) allows large stylistic libraries to be broken down and requested only when necessary.
    3. Alpine is a JavaScript framework that uses modules effectively to import the features it requires, and as such, remains fast and lightweight compared to competing frameworks that load everything in one go.

    Tests

    Procedure

    1. Check the size of packages to determine the benefit of code-splitting.
    2. Check if CSS would benefit from being split into separate stylesheets based on queries.
    3. Check if JavaScript can be modularized and imported when required.
    4. Check if obfuscation exists; if not, code-split the source code.

    Expected Results

    1. All checks above are true.
  5. WD04-1: Eliminate Redundant Code Through Coverage or Tree Shaking

    Applicability

    This technique is Advisory to meet Success Criteria within 3.4 Apply Tree Shaking To Code.

    Description

    Eliminate redundancy within an application or website that may have been introduced previously or through error. Using browser development tooling such as the DevTools Coverage panel, or techniques like tree shaking, any code that is no longer associated with functionality can be identified as potentially fit for removal (always verify this is accurate).

    For machine testability, redundant (orphaned / unused) code can be identified by its lack of association with existing components within the web page or application. Consideration will also need to be given to components that may be generated mid or post-render in addition to styles that only trigger when a certain state occurs (such as through hover or the target pseudo selector).
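
    A simplified heuristic for the machine-testability idea above might collect the class names actually used in markup and flag declared selectors that never appear (the markup and class names here are hypothetical, and real tools must also handle state-based styles and post-render markup, as noted):

```javascript
// Collect every class name referenced in class="" attributes.
function usedClasses(html) {
  const used = new Set();
  for (const match of html.matchAll(/class="([^"]*)"/g)) {
    for (const name of match[1].split(/\s+/)) if (name) used.add(name);
  }
  return used;
}

// Declared class selectors with no match in the markup are candidates
// for removal (pending manual verification).
function findUnusedClasses(declared, html) {
  const used = usedClasses(html);
  return declared.filter((name) => !used.has(name));
}

const html = '<nav class="menu open"><a class="menu-item" href="/">Home</a></nav>';
console.log(findUnusedClasses(["menu", "menu-item", "legacy-banner"], html));
// → ["legacy-banner"]
```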

    Examples

    1. This article by Jeremy Wagner showcases the importance of tree shaking within projects, including the history behind it and how it functions; in addition, it provides a few useful code examples.
    2. You can identify unused code within a project using Google Developer Tools in the Coverage panel. While there are a few gotchas (such as identifying inactive pseudo selectors), it's a useful way to weed out unused CSS.

    Tests

    Procedure

    1. Check CSS to determine if redundancy exists within your code.
    2. Check packages to determine if redundancy exists within JavaScript.
    3. Check whether redundant code may be reintroduced; if not, remove it.

    Expected Results

    1. All checks above are true.
  6. WD05-1: Conform to WCAG as a Baseline Level of Acceptable Accessibility

    Applicability

    This technique is Advisory to meet Success Criteria within 3.5 Ensure Your Solutions Are Accessible.

    Description

    Provide a mechanism for ensuring that a website meets a baseline level of accessibility compliance as recommended by the Web Content Accessibility Guidelines (WCAG). Having an inclusive product or service is, at this point (in most places), a legal requirement, and it also addresses People (PPP) and societal factors, so it becomes a sustainability compliance target.

    Machine testability for accessibility already exists on some level through automated testing tools, and this can potentially be integrated into a custom white-label product (or created from scratch) if required, by identifying the criteria set out in WCAG A-AAA and attempting to achieve machine testability against the guidelines and success criteria (as you are doing with the WSGs).

    Examples

    1. Testing your product or service in several screen readers such as NV Access will help establish that individuals with vision impairments can more easily navigate around your content.
    2. Tooling such as WebHint, Axe Core, or WAVE helps product creators identify certain accessibility issues that can occur within a website or application, though they can only test what a machine can identify.

    Tests

    Procedure

    1. Check if the application or website meets a specified level of WCAG.
    2. Check beyond WCAG for additional inclusive design requirements.
    3. Check for compatibility with several different accessibility tools.
    4. Check if an accessibility statement exists and testing is included.
    5. Check if additional legal compliance requirements are being met.

    Expected Results

    1. All checks above are true.
  7. WD05-2: Provide ARIA Enrichment to HTML Only if Deemed Necessary

    Applicability

    This technique is Advisory to meet Success Criteria within 3.5 Ensure Your Solutions Are Accessible.

    Description

    Provide a mechanism to identify, firstly, whether ARIA support is required for assistive devices and, secondly, if it is required, to implement that support if and only if it will enhance the accessibility of the product or service. ARIA should only be used for code enrichment when no native alternative exists.

    For machine testability, using heuristics to examine the code structure and determine whether certain components require additional semantic clarification is technically possible. Certain HTML elements, for example, have built-in semantic value and require no contextual clarification, while custom elements or complex components (such as those used in applications) may require enrichment.
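
    As an illustrative contrast (hypothetical markup), a native element needs no enrichment, while a custom control must recreate its semantics through ARIA:

```html
<!-- Native element: role, focusability, and key handling are built in. -->
<button type="button">Save</button>

<!-- Custom control: ARIA and tabindex must recreate what the browser
     provides for free above (keyboard handling is still needed too). -->
<div role="button" tabindex="0" aria-pressed="false">Save</div>
```

    The first form is preferable wherever it is available; the second is only justified when no native alternative fits.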

    Examples

    1. This document from Aditus provides some general examples of ARIA in action showcasing how others have implemented it successfully within their projects (and naturally where it could be appropriate for use).

    Tests

    Procedure

    1. Check if the application requires ARIA before implementation.
    2. Check if the ARIA use is appropriate and correctly marked up.

    Expected Results

    1. All checks above are true.
  8. WD06-3: Write Code for Performance Removing Duplication

    Applicability

    This technique is Advisory to meet Success Criteria within 3.6 Avoid Code Duplication.

    Description

    Provide a mechanism within projects to eliminate redundancy within your coding methodology. There will be occasions where the same solution is required for multiple events; rather than repeating the code to achieve the same effect, referencing a single implementation on each occasion is more optimal, as it reduces duplication and repetition.

    In terms of machine testability, a coding methodology within languages like CSS can be identified easily by the way naming schemes are formed (such as the BEM pattern). In terms of repeated code, following DRY (Don't Repeat Yourself), repeated CSS property and value pairs (or, in JavaScript, code that repeats the same action) can be identified and flagged for refactoring.
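
    As a sketch of the machine-testability idea (the rule data is hypothetical; real tools would parse stylesheets and weigh selector specificity), repeated property and value pairs can be collected and flagged:

```javascript
// Flag declarations repeated across rules: candidates for consolidation
// under a shared class or custom property.
function findRepeatedDeclarations(rules) {
  const seen = new Map(); // "property:value" -> selectors declaring it
  for (const { selector, declarations } of rules) {
    for (const decl of declarations) {
      if (!seen.has(decl)) seen.set(decl, []);
      seen.get(decl).push(selector);
    }
  }
  return [...seen].filter(([, selectors]) => selectors.length > 1);
}

const rules = [
  { selector: ".card",  declarations: ["margin:1rem", "color:#333"] },
  { selector: ".panel", declarations: ["margin:1rem", "padding:2rem"] },
];
console.log(findRepeatedDeclarations(rules));
// → [["margin:1rem", [".card", ".panel"]]]
```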

    Examples

    1. BEM (Block Element Modifier) is a methodology that is commonly used within websites to assist with creating reusable components and patterns, using easier-to-name schemes within front-end development.

    Tests

    Procedure

    1. Check for a recognized naming methodology or pattern in CSS.
    2. Check for the absence of repeating code (DRY) within CSS.
    3. Check for duplicate functionality or names within your JavaScript.

    Expected Results

    1. All checks above are true.
  9. WD07-1: Determine and Decision Make Upon Third-Party Impacts

    Applicability

    This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.

    Description

    First identify any third-party components within a website or application, then analyze that feature as if it were a first-party product or service (against the WSGs) for sustainability. If the product or service is determined to be highly impactful in a negative way, the third-party should be replaced or removed; otherwise, it can be optimized, or stay as-is.

    As third-party components are hosted externally, machine testing these elements should involve isolating them and testing them separately from the product or service. This can be factored into any decision-making regarding inclusion, as highly performant and sustainable third-party materials will ultimately be low impact (and the opposite is true for others).

    Examples

    1. OpenStreetMap allows for embedding of their map data in websites and applications which can be useful for providing directions to physical locations, but it could also drain data and rendering resources.
    2. A carousel with multiple resources (potentially being sourced from a third-party photo library) could be pretty to look at, but the combination of both animation and rendering may affect your visitor's battery.

    Tests

    Procedure

    1. Check if any third-party components exist within the page or application.
    2. Check that component against the WSGs for its sustainability compliance.
    3. Check if high-impact third-party components offer self-hosted alternatives.
    4. Check that self-hosted alternatives are used if provided.

    Expected Results

    1. All checks above are true.
  10. WD07-2: Provide an Import on Interaction Delay for Third-Party Resources

    Applicability

    This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.

    Description

    Provide a mechanism to delay third-party content from loading to the screen until the visitor has requested it. Because third-party content is sourced from outside the origin domain, the sustainability impacts of third-party code are outside of the control of a project, and thus a click-to-load delay screen using the import-on-interaction pattern is critical.

    The pattern used to either switch in the third-party content or load it on demand can be machine-identified, and if common third-party library resources are identified as loading (or leaking) upon a visit, this can and should be flagged as a failure of the success criterion to function upon visitor request (such external requests should be within the visitor's control).
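
    A minimal sketch of the import-on-interaction pattern (all names and URLs are hypothetical): a lightweight facade renders first, and the third-party embed is only created after the visitor clicks:

```html
<div class="video-facade" data-embed="https://example.com/player">
  <img src="poster.jpg" alt="Video preview">
  <button type="button" class="video-facade-play">Play video</button>
</div>
<script>
  document.querySelector(".video-facade-play").addEventListener("click", (e) => {
    const facade = e.target.closest(".video-facade");
    const iframe = document.createElement("iframe");
    iframe.src = facade.dataset.embed; // third-party code loads only now
    facade.replaceWith(iframe);
  });
</script>
```

    Until the click, the page has made no request to the third-party host at all.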

    Examples

    1. BBC News uses a cloaking mechanism to prevent third-party content from social media sites from loading unless requested by the visitor, reducing the impact on their devices (if this fails to work, a source link is offered).

    Tests

    Procedure

    1. Check if any third-party components exist within the page or application.
    2. Check that each exists behind a click-to-load delay screen (that visitors must request).

    Expected Results

    1. All checks above are true.
  11. WD07-3: Identify Lightweight Alternatives to Unsustainable Third-Party Resources

    Applicability

    This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.

    Description

    Reduce the overhead of libraries and frameworks that may be used by the product or service but could be offered through lighter or less production-heavy alternatives. This can have significant sustainability benefits: replacing a heavyweight framework of which only a small proportion of features are used with one that is potentially a fifth of the size could not only improve performance but also reduce the rendering load.

    This will require testability tooling to have both a library of existing frameworks and libraries (including smaller single-purpose ones, with the functionality they contain) and the ability to identify within code the feature set being used by a product or service. By isolating in-use capabilities, better recommendations can be made, which could reduce the project payload.
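
    As a hypothetical illustration, if a large utility library were included only for its debounce helper, a few lines of native code could replace the whole dependency:

```javascript
// Native replacement for a single utility-library feature: collapse a burst
// of events into one call after a quiet period.
function debounce(fn, delayMs) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

let calls = 0;
const onResize = debounce(() => { calls += 1; }, 50);
onResize(); onResize(); onResize(); // rapid-fire events collapse into one call
setTimeout(() => console.log(calls), 150); // → 1
```

    The payload saving is the entire library minus these few lines, which is exactly the kind of recommendation such tooling could make.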

    Examples

    1. You Might Not Need jQuery provides information about potential replacements for the longstanding JavaScript framework that could be more performant and lightweight while offering the same functionality.
    2. You (Might) Don't Need jQuery showcases a series of examples of how pre-existing jQuery functionality can now be replaced with native functionality and as such, the framework could be eliminated.
    3. MicroJS is a list of tiny JavaScript frameworks that serve a single purpose; if you utilize a single large framework for just one such purpose, it could be swapped out to reduce the ecological impact of your work.

    Tests

    Procedure

    1. Check if CSS libraries could be replaced with smaller alternatives or custom code.
    2. Check if JavaScript frameworks could be replaced with smaller alternatives or native code.
    3. If replacements can be made, then replacements should be introduced.

    Expected Results

    1. All checks above are true.
  12. WD07-4: Ensure That Content Is Delivered Through the Most Sustainable Pathway

    Applicability

    This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.

    Description

    Identify the location of website content and determine the most suitable pathway for it to be consumed sustainably. With the rise in content being hosted on third-party blogging platforms, there is a risk of significant content loss if the platform disappears and therefore a need for self-hosting exists (especially to maintain control of sustainability impacts).

    For machine testability, identifying the source of the content is a key priority, then determining the impact of any third-party (the risk of content loss along with any sustainability impacts through WSG testing). Finally, this should be weighed up against the impact of self-hosting (and any potential negative consequences such as content moderation requirements that may occur).

    Examples

    1. Medium is a popular content hosting platform, yet while it does offer a reasonably readable interface, it may not be as sustainable as providing the content within your own website or application in a dedicated blog.
    2. Social media posts are contained within your account on that network (on some social networks they are only visible to members, or even only to users of a mobile application) and will carry the sustainability issues of that provider.

    Tests

    Procedure

    1. Check whether content is being published regularly.
    2. Check if features such as comments are a requirement (if so, can moderation be offered).
    3. Check if the website already has an audience to give content visibility.
    4. If so, check that the content is self-hosted on the first-party domain.

    Expected Results

    1. All checks above are true.
  13. WD07-5: Create Custom Clickable Icons and Widgets for Third-Party Services

    Applicability

    This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.

    Description

    Provide a mechanism for interacting with third-party products and services through your own project without relying on a third-party solution that will inevitably carry the sustainability impacts bound to that service. These custom, reusable objects can serve either a single function or multiple functions.

    Third-party solutions can be identified through heuristic analysis of source code, and recommendations can be made to produce custom first-party solutions for sustainably impactful services. Additionally, custom solutions can often be identified based on either the goal they aim to achieve or the label they are given within HTML id or class names.

    Examples

    1. There are many examples of custom social media icon embedding, and there are also plenty of free libraries you can choose between to avoid relying on the default offerings of providers.

    Tests

    Procedure

    1. Check that the product or service has accounts with different service providers.
    2. Check that a custom solution is either in link form, icon form, or looks similar to the provider's native implementation.

    Expected Results

    1. All checks above are true.
  14. WD07-6: Provide the Ability To Identify and Choose Third-Party Components To Load

    Applicability

    This technique is Advisory to meet Success Criteria within 3.7 Rigorously Assess Third-Party Services.

    Description

    Provide visitors with the optimum level of control over sustainability impacts by not just allowing them to identify and load third-party controls upon click, but also to control third-party services, libraries, and tooling used by the product or service (even at load) via a preference panel where individual third parties (which may be impactful) can be refused access.

    This mechanism, if it uses a commonly accepted scheme, could be detected by a machine. If the preference panel is presented upon load rather than set by default, and the visitor has the option of selecting individual services (one at a time), accepting all, or denying all, then the functionality can be deemed to work as expected (as long as third-party services obey that scheme).

    Examples

    1. Cookie banners are a classic example of personalization from the point of the page load and they can dictate the functionality of a website or application based on the choices made as shown in this example.
    2. Being able to log in to a product allows customers to trigger decisions regarding their experience. Apple Music (for example) uses the details of the visitor to make decisions based on their prior browsing habits.

    Tests

    Procedure

    1. Check that information is provided upon load to indicate preferences.
    2. Check that the option to allow all, deny all, and select between providers is offered.
    3. Check that information is provided identifying providers and describing the options carefully.
    4. Check that preferences are remembered when navigating during the session.

    Expected Results

    1. All checks above are true.
  15. WD08-1: Validate Source Code for Semantic Accuracy

    Applicability

    This technique is Advisory to meet Success Criteria within 3.8 Use HTML Elements Correctly.

    Description

    Ensure that HTML uses accurate semantics (Semantic HTML). Use the correct semantic element for the correct purpose and avoid common syntax mistakes that are easily correctable. In terms of web sustainability, it is important to have code that is as likely as possible to remain functional in the future.

    For machine testability, examining code for correct element use is a primary step. Other features found in validation services, such as ensuring elements are closed correctly and that HTML entities are correctly marked up, help avoid rare issues that might occur during the rendering process, such as web browsers accidentally interpreting data incorrectly.
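
    An illustrative contrast (hypothetical markup): the first fragment renders, but conveys no meaning to machines; the second expresses the same structure with the correct semantic elements:

```html
<!-- Non-semantic: conveys nothing to validators or assistive technology. -->
<div class="nav"><div class="item"><a href="/">Home</a></div></div>

<!-- Semantic: the same structure with the correct elements. -->
<nav>
  <ul>
    <li><a href="/">Home</a></li>
  </ul>
</nav>
```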

    Examples

    1. The W3C provides markup validation services for HTML and CSS, JSON, as well as feeds to identify potential issues with the code. These tools can work with files as well as direct input.
    2. Linting tools can check (and be configured) to help you find and resolve common errors, either while you code or during an auditing process. There are linting tools available for languages like HTML, CSS, and JavaScript.

    Tests

    Procedure

    1. Check if the HTML, CSS, and other source code contains resolvable errors.
    2. Check that any errors have been successfully linted and corrected.

    Expected Results

    1. All checks above are true.
  16. WD08-2: Eliminate Optional Code To Reduce the Rendering Impact

    Applicability

    This technique is Advisory to meet Success Criteria within 3.8 Use HTML Elements Correctly.

    Description

    Provide a mechanism for eliminating further redundancy from source code which may not be strictly required for semantic reasons but can help reduce the amount of data being transferred and thereby improve web performance. This can be done by removing any optional code from a document that will not affect the rendering of the page.

    This technique is most useful when subscribing to performance budgets and when reaching the smallest file size possible is of critical importance to meet PPP targets. Such optional code can be identified by machine, and recommendations for where optimization can occur can be provided, but there may be occasions (as with minification) in non-production environments where retaining the data is preferable.
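
    For example, both of the following documents are valid HTML and render identically; the second omits optional tags (the head and body tags and the closing p tag) to transfer fewer bytes:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Example</title>
  </head>
  <body>
    <p>Hello</p>
  </body>
</html>

<!DOCTYPE html>
<html lang="en">
<meta charset="utf-8">
<title>Example</title>
<p>Hello
```

    The parser infers the omitted tags, so the document tree is the same in both cases.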

    Examples

    1. This article by Jens Oliver Meiert provides details about some of the optional HTML elements you may wish to eliminate to reduce the amount of data being transferred to visitors during a page view session.
    2. Browser Default Styles provides information about the CSS that a web browser will render to the visitor as a matter of course when an HTML element is created (therefore you don't need to repeat the statements).

    Tests

    Procedure

    1. Check that optional HTML elements that are unnecessary for rendering are removed.
    2. Check that browser default styles are not repeated in the stylesheets.

    Expected Results

    1. All checks above are true.
  17. WD08-3: Replace Non-Standard Code With Suitable Alternative Syntax

    Applicability

    This technique is Advisory to meet Success Criteria within 3.8 Use HTML Elements Correctly.

    Description

    Identify and eliminate any non-standard coding practices, including but not limited to proprietary additions included by web browsers to test new functionality before potentially including it fully within the browser (this isn't so common today, but remnants still exist). As older code is often not as highly optimized for performance, it can take longer to render and thrash hardware, causing higher emissions, so it is worth resolving.

    For machine testability, lists of non-standard syntax are available within common specifications and can be used to identify such code within source files. In addition, browser-specific code can be identified by its prefix or via lists of techniques and hacks used to resolve browser bugs. If replacements for these techniques can be offered, provide them; otherwise, recommend removal from your code.

    Examples

    1. In the past, it was necessary to provide specific coding techniques (known as hacks) to ensure web browser compatibility. Browser Hacks lists the various patterns used that could be solutions fit for removal.
    2. When testing new features, browsers in the past had a habit of placing prefixed code behind flags to avoid breaking existing code. Lists of such prefixes can be found online, as can archaic proprietary HTML and CSS. Websites like BCD Watch, CSS Triggers, Runtime compatibility?, and Web features explorer can help with compatibility testing.

    Tests

    Procedure

    1. Check that archaic proprietary specification additions are not included within the code.
    2. Check that browser prefixes have been removed from CSS unless required for compatibility.
    3. Check that no browser hacks exist unless they are required for compatibility.

    Expected Results

    1. All checks above are true.
  18. WD09-1: Set Scripts To Load Either Asynchronously or Deferred

    Applicability

    This technique is Advisory to meet Success Criteria within 3.9 Resolve Render Blocking Content.

    Description

    Provide a mechanism for JavaScript (and, if possible, stylesheets) to be loaded asynchronously or deferred to avoid render-blocking events that can delay the loading of content. This will lessen the initial impact on visitors' hardware and thus can have PPP and web performance benefits that should be taken into account.

    This technique is most useful when it is applied to all materials which will run upon the page load. In terms of testing for this technique, scripts can be identified by the attribute being provided within the HTML code. If the attribute is not provided, guidance can be offered to question if this was intentional or machine testing could analyze the code to verify if an issue will occur.
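
    For example (file names are hypothetical):

```html
<!-- defer: downloads in parallel, executes in document order after parsing. -->
<script src="app.js" defer></script>

<!-- async: downloads and executes independently; order is not guaranteed,
     so it suits self-contained scripts such as analytics. -->
<script src="analytics.js" async></script>
```

    The presence or absence of these attributes is exactly what a machine test can detect in the HTML.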

    Examples

    1. Chart.js is a library that allows you to easily produce complex charts within a website or application. It uses a combination of asynchronous and deferred loading to ensure the rendering doesn't block other content.

    Tests

    Procedure

    1. Check that all scripts are set to load with either the async or defer attribute unless synchronous loading is necessary.
    2. Check if CSS (with the aid of JavaScript) could benefit from asynchronous loading, and if so, do it.

    Expected Results

    1. All checks above are true.
  19. WD09-2: Provide Assets Required at Load With the Correct Delivery Route

    Applicability

    This technique is Advisory to meet Success Criteria within 3.9 Resolve Render Blocking Content.

    Description

    Provide a mechanism for assets to be loaded at the correct time in the rendering process. With the ability to use preload, prefetch, and preconnect, we can take resources such as web fonts and scripts that are necessary for the product or service to be displayed successfully and ensure they are prioritized over other web assets.

    Identifying that the asset being chosen for this mechanism is correct, linked to correctly, and has the right mechanism in place is of the highest priority for machine testers to ensure that the rendering process is not interrupted. If a low-priority object is given high priority, it could delay the website or application from reaching the visitor and increase screen time.
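
    For example (hosts and paths are hypothetical):

```html
<!-- preconnect: open the connection to a third-party host early. -->
<link rel="preconnect" href="https://fonts.example.com" crossorigin>

<!-- preload: fetch a render-critical asset at high priority. -->
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>

<!-- prefetch: fetch a likely-next resource at low priority, when idle. -->
<link rel="prefetch" href="/next-page.html">
```

    Note that font preloads require the crossorigin attribute even for same-origin fonts, a detail a machine test can verify.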

    Examples

    1. Mightybytes uses DNS prefetching to ensure that fonts and JavaScript that are hosted on a third-party CDN service will be less likely to suffer render-blocking (as the host is readied before the content is requested).
    2. Wholegrain Digital uses preloading on its web typography to avoid render blocking with several different weights being offered on the same typeface (all of which will be downloaded as the content is).

    Tests

    Procedure

    1. Check that preload, prefetch, and preconnect are used as appropriate in HTML documents.

    Expected Results

    1. All checks above are true.
  20. WD10-1: Provide Accurate Social Metadata and Microdata

    Applicability

    This technique is Advisory to meet Success Criteria within 3.10 Provide Code-Based Way-Finding Mechanisms.

    Description

    Provide a mechanism for social networks and search engines to first identify your website or application and then to be able to showcase it successfully within their products and services. Because each social network and search engine has its own requirements, it will involve a combination of different metadata and semantic markup to achieve results.

    For machine testability, toolmakers will need to maintain a list of the most popular products and services with which they wish to maintain compatibility. From there, they will need to work through those providers' requirements to ensure that the expected patterns are included within pages (and that they match the providers' expectations, so that visitor findability is ensured).
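
    A minimal sketch of the combined markup (all values are hypothetical placeholders):

```html
<head>
  <!-- Open Graph (used by Facebook and many other services) -->
  <meta property="og:title" content="Page title">
  <meta property="og:description" content="A short summary.">
  <meta property="og:image" content="https://example.com/preview.png">

  <!-- Twitter Card (X) -->
  <meta name="twitter:card" content="summary">

  <!-- Mastodon ownership verification -->
  <link rel="me" href="https://mastodon.example/@account">
</head>
```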

    Examples

    1. The Small Tech Foundation ensures that their content can be easily identified by third parties by providing metadata, open graph data, and microdata within the document header (that is easily machine-readable).
    2. Many social networks take advantage of the Open Graph protocol, including Facebook and X (formerly Twitter), in the HTML head. Mastodon uses the rel="me" attribute to identify websites with owners.

    Tests

    Procedure

    1. Check that the necessary OpenGraph data for Facebook is included within HTML.
    2. Check that the necessary Twitter Card data is included within HTML for X.
    3. Check that the necessary Mastodon link containing the rel attribute exists.
    4. Check that other social networks have their criteria met or have links added.

    Expected Results

    1. All checks above are true.
  21. WD10-2: Provide a Robots File That Contains Relevant Indexing Data

    Applicability

    This technique is Advisory to meet Success Criteria within 3.10 Provide Code-Based Way-Finding Mechanisms.

    Description

    Provide a mechanism for maintaining findability within products and services for search engines, while also attempting (successfully or otherwise) to reduce the amount of traffic from bad actors or unethical / unsustainable products that may impact your wider projects and service users. This is achieved using the robots.txt document.

    This technique requires that the robots.txt file be present within the base directory of a website and be formatted according to the commonly agreed upon Robots Exclusion Standard. For the Success Criteria, listing bad actors and unethical / unsustainable products is considered optional however if such a list can be maintained and adhered to, it's worthy of inclusion.
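
    A minimal sketch of such a file (the blocked crawler name is hypothetical):

```txt
# robots.txt — served from https://example.com/robots.txt
User-agent: *
Disallow: /admin/
Allow: /

# Refuse a specific (hypothetical) high-impact crawler
User-agent: ExampleBadBot
Disallow: /

Sitemap: https://example.com/sitemap.xml
```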

    Examples

    1. Google has a robots.txt file of their own that identifies to search engines what pages should be included within search results. Naturally, you don't want to copy this file exactly but it may serve as inspiration!
    2. The Web Robots Pages website is a resource dedicated to the robots exclusion standard and how it can be applied to websites including different examples and details about the robots file structure.
    3. Google Search Central has a document that provides working examples of the robots file and the properties you should consider including within it (plus examples and details of how they will function).

    Tests

    Procedure

    1. Check for the existence of a robots.txt within the base directory of a website or application.
    2. Check that the document is well formed and contains the correct property / value pairs.
    3. Check for a list of blacklisted bad actors and unsustainable products.

    Expected Results

    1. All checks above are true.
  22. WD10-3: Add Signposting Within the Page To Direct Visitors and Accessibility Aids

    Applicability

    This technique is Advisory to meet Success Criteria within 3.10 Provide Code-Based Way-Finding Mechanisms.

    Description

    Provide mechanisms to assist the visitor with finding and navigating through content within the page or application. This can come in the form of links that allow visitors to bypass blocks of content, which is especially useful when large areas of navigation or other content exist. It could also be in the form of keyboard shortcuts that activate certain features within a complex application rather than requiring multiple clicks.

    Machine testability for such features can identify events within JavaScript, or features that use common code patterns recognized as helpful signposting. If such features are present, the success criteria can be marked as compliant; if none are found, it could be an indicator, especially in complex websites or applications, that such features are needed.
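
    A minimal sketch of a skip link (the id and class names are hypothetical; the link is typically visually hidden until it receives keyboard focus):

```html
<body>
  <a class="skip-link" href="#main">Skip to main content</a>
  <nav>
    <!-- large navigation block that keyboard users can now bypass -->
  </nav>
  <main id="main">
    <h1>Page heading</h1>
  </main>
</body>
```

    The anchor-to-id pattern shown here is one of the common signposting patterns a machine test can recognize.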

    Examples

    1. WebAim has a great article describing the details relating to skip links as does Jim Thatcher, both include plenty of useful visual examples and situations to work with when designing a custom solution of your own.
    2. Smashing Magazine published a two-part guide to keyboard accessibility which included HTML, CSS, and JavaScript and covered not only skip links but also keyboard-based accessibility aids.

    Tests

    Procedure

    1. Check that the project can be successfully navigated using the keyboard.
    2. Check that the tabindex order of the content makes logical, navigable sense.
    3. Check that skip links exist to guide accessibility tools through content.
    4. Check that keyboard shortcuts are well documented and work as expected.
    5. Check that signposting within pages is well described and works as directed.

    Expected Results

    1. All checks above are true.
  23. WD11-1: Ensure That Form Inputs Correctly Match and Validate Content Expectations

    Applicability

    This technique is Advisory to meet Success Criteria within 3.11 Validate Form Errors and External Input.

    Description

    Ensure that input types correctly match the type of content being placed within them, that content types like passwords are handled in a way that visitors can easily reveal them, and that the pattern attribute is used correctly to help reduce errors during data entry. In doing so, problematic friction encountered during form filling can be reduced, as can erroneous submissions.

    Machine testability for this criterion will involve analyzing the components of forms to ensure they are well formed and that (for example) any regular expressions used in pattern attributes will not produce an erroneous result. It is also important that functionality within forms performs well on mobile devices as well as desktop, so consideration must be given to the choice of input for tasks.
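
    One machine-testable check from the paragraph above is verifying that every pattern attribute compiles as a regular expression, so a syntax error cannot silently reject all input. The function name and simplified attribute extraction below are assumptions for illustration.

```javascript
// Hedged sketch: confirm each pattern attribute found in markup compiles.
// Browsers compile pattern with extra flags; plain compilation is used
// here as a lenient approximation.
function validatePatterns(html) {
  const results = [];
  const attr = /pattern="([^"]*)"/g;
  let match;
  while ((match = attr.exec(html)) !== null) {
    try {
      new RegExp(match[1]);
      results.push({ pattern: match[1], valid: true });
    } catch {
      results.push({ pattern: match[1], valid: false });
    }
  }
  return results;
}
```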

    Examples

    1. This article from TheGood showcases multiple examples of great form design not only from the point of the user-experience but also in the development to ensure that the ability to make mistakes is reduced.

    Tests

    Procedure

    1. Check that the input type matches the expected content type.
    2. Check that the use of the pattern attribute does not prevent correct input.
    3. Check that regular expressions are correctly formed and do not produce errors.
    4. Check that password fields can be made visible to allow for recognition.
    5. Check that the chosen input type functions well on mobile devices as well as desktop.

    Expected Results

    1. All checks above are true.
  24. WD11-2: Ensure That Form Elements Are Correctly Labeled

    Applicability

    This technique is Advisory to meet Success Criteria within 3.11 Validate Form Errors and External Input.

    Description

    Provide a mechanism for accessibility tooling to be able to accurately describe the content of form features and to provide visual aids for visitors aiming to identify what information is required to be entered. This technique requires all interactive elements within the form to have an associated label to describe the purpose and / or role of the item.

    Labels should be presented directly beside the element in question to imply association, and if multiple associations are required, a grouping element with a label can be provided. Machines should be able to identify the relationship between the label and the object through the syntax, and if objects without labels exist, a failure can be flagged.
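
    The label-to-object relationship can be checked mechanically. A minimal sketch, assuming a simplified regular-expression scan rather than a full DOM audit (the function name is hypothetical):

```javascript
// Illustrative check that every id-bearing input in an HTML string has a
// matching <label for="..."> association.
function unlabeledInputs(html) {
  const ids = [...html.matchAll(/<input[^>]+id="([^"]+)"/g)].map((m) => m[1]);
  const fors = new Set(
    [...html.matchAll(/<label[^>]+for="([^"]+)"/g)].map((m) => m[1])
  );
  // Inputs whose id has no corresponding label are flagged as failures.
  return ids.filter((id) => !fors.has(id));
}
```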

    Examples

    1. This CodePen example clearly has all interactive elements within the form correctly labeled and additionally goes the extra step by using input types to help reduce the chances of errors occurring during data entry.

    Tests

    Procedure

    1. Check that all interactive form components are correctly labeled.

    Expected Results

    1. All checks above are true.
  25. WD11-3: Reduce Issues Within Data Entry With Manual Third-Party Content

    Applicability

    This technique is Advisory to meet Success Criteria within 3.11 Validate Form Errors and External Input.

    Description

    Provide a mechanism in which visitors can easily take content from third-party sources and use it within your product or service. Techniques such as the ability to paste into forms or the ability to drag and drop can act as shortcuts to avoid retyping or recreating (using system resources) which may be energy-intensive or time-intensive for the visitor.

    Mechanisms that may prevent the import of third-party content, such as blocking pasting or disabling drag and drop, should be detected and flagged (unless a reason for this can be justified within the code). Mechanisms that aid the import of third-party content, such as import or paste buttons, should be actively encouraged.
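
    A rough heuristic for this detection might look like the following sketch; the function name is hypothetical, and real-world detection would also need to inspect dynamically attached event listeners.

```javascript
// Flag markup or script that appears to block pasting.
function flagsPasteBlocking(source) {
  return [
    // Inline handler cancelling the paste event.
    /onpaste\s*=\s*["']?\s*return false/i,
    // Script-attached paste listener that calls preventDefault.
    /addEventListener\(\s*["']paste["'][\s\S]*?preventDefault/,
  ].some((re) => re.test(source));
}
```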

    Examples

    1. These CodePen examples of log in and registration forms do not inhibit the visitor from pasting content from third-party sources which allows for fast data entry if the information is stored elsewhere.

    Tests

    Procedure

    1. Check that the ability to paste content is uninhibited (no onpaste="return false;").
    2. Check that the ability to drag and drop third-party content can be achieved successfully.
    3. Check that import options are provided if the above two options are unavailable or fail.

    Expected Results

    1. All checks above are true.
  26. WD12-1: Ensure That the Required HTML Elements Are Included

    Applicability

    This technique is Advisory to meet Success Criteria within 3.12 Use Metadata Correctly.

    Description

    Ensure that minimum required features are present to render a website or application correctly. This includes a doctype and a series of core HTML elements. While a page can technically render without them, it is considered bad practice to omit these features.

    For machine testability, the ability to identify a doctype (and the version of HTML being rendered), plus ensuring that the necessary base HTML elements are present, is script-detectable. This can be done via the validation process, manually or automated, to ensure pages render correctly and avoid triggering quirks mode (which will affect the visual rendering of the website or application).
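
    The doctype portion of this check can be sketched in a few lines (the function name is assumed for illustration):

```javascript
// Minimal sketch: confirm a document starts with the standards-mode HTML
// doctype, which avoids triggering quirks mode.
function hasStandardsDoctype(html) {
  return /^\s*<!doctype html>/i.test(html);
}
```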

    Examples

    1. Back in 2018, Courtney Thomas created a great article showcasing the minimum HTML required for any website or application (though, as Josh Buchea notes, the viewport meta tag should also be there).

    Tests

    Procedure

    1. Check that the document includes a well-formed doctype.
    2. Check that the page or application includes the required HTML elements.
    3. Check that the page or application includes valid links to CSS and JavaScript assets (optional).
    4. Check that the HTML includes the viewport META element for handheld devices.

    Expected Results

    1. All checks above are true.
  27. WD12-2: Provide Relevant Metadata Using a Recognized Scheme

    Applicability

    This technique is Advisory to meet Success Criteria within 3.12 Use Metadata Correctly.

    Description

    Provide the necessary metadata within the head of your website or application to ensure that search engines can index your content correctly. You can use several different mechanisms to achieve this as several different formatting schemes have been provided over the years and they have varying levels of support by different search providers.

    To enhance the findability of your content (which will reduce time wasted by visitors trying to locate you), having a well-formatted series of metadata is critical. As such, testing should focus on determining whether basic meta tags are used or whether another format is being used to serve data. Once detected, identify if the tags are recognized; if they aren't in common use or have been deprecated by a provider, it's worth requesting removal for the data savings.
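
    A simplified sketch of a basic meta description check, using the 50 to 150 character range given in the test procedure below; the function name and the fixed attribute order in the regular expression are illustrative assumptions.

```javascript
// Extract the description meta tag and confirm its length falls within
// a sensible 50-150 character range. Attribute order may vary in real
// markup; a DOM parser would handle that properly.
function checkDescription(html) {
  const m = html.match(/<meta\s+name="description"\s+content="([^"]*)"/i);
  if (!m) return { present: false, ok: false };
  const len = m[1].length;
  return { present: true, ok: len >= 50 && len <= 150 };
}
```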

    Examples

    1. The WHATWG not only has within the HTML specification a list of recognized elements that can provide contextual information about the page, but it also has a wiki that lists proprietary META tags.
    2. The Dublin Core Metadata Initiative was an attempt to standardize meta tags with a set labeling scheme. It's supported by the majority of search engines and could be used (as is or in RDFa) instead of other formats.

    Tests

    Procedure

    1. Check that the HTML document contains a description META element of between 50 and 150 characters.
    2. Check that the HTML document contains additional metadata from a recognized scheme.

    Expected Results

    1. All checks above are true.
  28. WD12-3: Ensure Content Is Structured Using Microdata

    Applicability

    This technique is Advisory to meet Success Criteria within 3.12 Use Metadata Correctly.

    Description

    Provide mechanisms for search engines and social networks (and sometimes even visitors and web browsers!) to take context-rich content from your website or application and re-use it for the benefit of your product or service elsewhere. Structuring your content successfully can take place (like metadata) using one of several formats.

    Because microdata uses hooks that attach to existing HTML to make it easy to identify (for search engines and third parties), this also makes it easy to identify for machine testability. Using a pattern library of these various structural features, you can not only show visitors how content could be used but also, using heuristics, identify other content that might benefit from it.
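
    Detecting these hooks can be sketched as below; the attribute and class-name heuristics are simplified assumptions, not an exhaustive detector.

```javascript
// Simplified detector for structured-data hooks: microdata attributes,
// classic microformat class names, and RDFa attributes.
function structuredDataFormats(html) {
  const found = [];
  if (/itemscope|itemtype=/.test(html)) found.push('microdata');
  if (/class="[^"]*\b(h-card|h-entry|vcard)\b/.test(html)) found.push('microformats');
  if (/\b(typeof|property)="/.test(html)) found.push('rdfa');
  return found;
}
```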

    Examples

    1. Schema.org is a database of common patterns that have been formed using microdata. It's a huge library and could be potentially helpful in describing certain components in a way that search engines can recognize.

    Tests

    Procedure

    1. Check that content is correctly marked up using microdata, microformats, etc.
    2. Check for external RDFa assets that may contain rich metadata.

    Expected Results

    1. All checks above are true.
  29. WD13-1: Ensure CSS Preference Media Queries Are Correctly Applied

    Applicability

    This technique is Advisory to meet Success Criteria within 3.13 Adapt to User Preferences.

    Description

    Provide mechanisms through CSS that adhere to the visitor's preferences regarding how they may choose to browse a website or application. While some queries that exist in the language will hold little sustainability value, others could have PPP benefits through accessibility (societal factors) or environmental ones (reducing hardware or data usage).

    Each of the preference queries can be machine-identified through scripts and can therefore be tested against, firstly for browser support, and secondly to confirm that using the query produces a benefit for the visitor and / or the ecosystem. The value of such queries being applied could be measured by triggering or emulating them.
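
    A minimal sketch of identifying preference queries within CSS text; the query list covers those most likely to carry sustainability value, and the function name is assumed.

```javascript
// Scan CSS text for user-preference media queries.
function preferenceQueries(css) {
  const known = [
    'prefers-color-scheme',
    'prefers-reduced-motion',
    'prefers-reduced-data',
    'prefers-contrast',
  ];
  return known.filter((q) => css.includes(q));
}
```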

    Examples

    1. Hidde de Vries has a dark mode toggle that, using a mixture of CSS preference queries and JavaScript, allows dark or light mode to either be user-triggered or run at the default (it's non-invasively implemented).
    2. These examples provided by the WebKit team showcase how reduced motion (when enabled) can prevent excess movement, though a toggle (such as this one by Michelle Barker) could provide added use-cases.

    Tests

    Procedure

    1. Check for preference queries and that the effect they produce is beneficial.
    2. Check for manual override toggles or preferences within the page.
    3. Check for alternative styles if paged media or scripting is disabled.

    Expected Results

    1. All checks above are true.
  30. WD14-1: Test Against Network Speeds and Visual Resolution Breakpoints

    Applicability

    This technique is Advisory to meet Success Criteria within 3.14 Develop a Device-Adaptable Layout.

    Description

    Ensure that your product or service can be classified as device-adaptable (responsively designed) to support the widest range of device types and at least some degree of visual compatibility with your website or application. While there are many ways to approach this task, this technique is focused on connection speeds and the potential window size.

    Because connection speeds can vary based on a whole range of factors (and cannot be aligned with averages due to location, mobile versus home, connection quality, etc.), multiple speed ranges should be machine-tested against. The same can be said for window sizes: while resolutions are common to certain devices, browsers can be resized and unusual device types also exist, so CSS fluid scaling and visual breakpoints should be utilized.

    Examples

    1. WebPageTest allows you to check how a product or service will perform at different speeds (including custom profiles you set up). Running such tests can identify if a product or service takes too long to load.
    2. Browser developer tools can test a website at various dimensions (on desktop) and by manually resizing the browser window with the developer tools open (and rulers toggled in Firefox), you can identify breakpoints where the design of the site requires code solutions.

    Tests

    Procedure

    1. Check that the website or application performs well on a variety of network speeds.
    2. Check that the product or service visually doesn't break when resized at multiple window sizes.

    Expected Results

    1. All checks above are true.
  31. WD14-2: Provide Progressive Enhancement Feature Testing Within Projects

    Applicability

    This technique is Advisory to meet Success Criteria within 3.14 Develop a Device-Adaptable Layout.

    Description

    Provide a mechanism for enabling features within your product or service only if they are supported and, if not, providing alternatives. This can be done using feature detection and progressive enhancement, which should be prioritized over graceful degradation (as it is better to add useful extras than to patch critical but broken content).

    Machine testability for feature testing should identify any code within the product or service that relies upon newer functionality lacking a level of web browser support (either in competing products or older versions); such code should be tested and its errors handled. Furthermore, if technologies are not supported, the fallback mechanism should allow a basic project to run.
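
    Feature detection with a fallback can be as simple as the following sketch; the capability tested (IntersectionObserver for lazy loading) and the strategy names are illustrative assumptions.

```javascript
// Hedged example of progressive enhancement: test for a capability and
// fall back to a simpler path when it is absent.
function lazyLoadStrategy(globalObj) {
  if ('IntersectionObserver' in globalObj) {
    return 'observe'; // enhanced: load images as they near the viewport
  }
  return 'eager'; // fallback: load everything up front
}
```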

    Examples

    1. Jeremy Keith's Website (Adactio) is a great example of progressive enhancement in action. It works if JavaScript, CSS, or even images are disabled (though naturally, the visuals can suffer as a result!).

    Tests

    Procedure

    1. Check that the website or application can function if JavaScript is disabled.
    2. Check that the page is still functional if an older browser with earlier JavaScript and CSS support is used.
    3. Check that the product or service is still functional if CSS is disabled.
    4. Check that the product or service is still functional if images or media are disabled.

    Expected Results

    1. Expectations for the above are dependent on the task but should aim to be true unless a deviation is justified.
  32. WD14-3: Configure Your Project Around Carbon-Aware Situations

    Applicability

    This technique is Advisory to meet Success Criteria within 3.14 Develop a Device-Adaptable Layout.

    Description

    Provide a mechanism for identifying when the project might be at its most resource- or energy-intensive (at the consumer level, the system level, or both), and then decide whether to delay or alter when a heavy script or operation occurs, performing it when there are fewer visitors or when the user is causing less hardware-intensive activity.

    For machine testability, this could be particularly tricky to implement as it relies on data to which the project owner will need access, such as when visitor numbers are at their lowest (or when the task can be achieved quickest). This may require internal access, but if granted, the use of such data could allow for redesigning the site to perform better during busy periods.
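
    Assuming access to a grid-intensity figure (for example from a carbon-intensity data provider), a carbon-aware deferral decision could be sketched as follows; the threshold value and the function's shape are assumptions, not a standard API.

```javascript
// Hypothetical sketch of carbon-aware deferral: run a heavy task now when
// grid intensity is low, otherwise queue it for a less intensive period.
function scheduleTask(task, gramsCO2PerKWh, threshold = 200) {
  if (gramsCO2PerKWh <= threshold) {
    return { when: 'now', task };
  }
  return { when: 'deferred', task };
}
```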

    Examples

    1. Electricity Maps provides an application that could be utilized to help make decisions as to where data is being served (based upon the intensity and level of renewables being offered in particular regions).

    Tests

    Procedure

    1. Check if operations can be built to function using carbon-aware technologies or resources.
    2. Check if operations can be processed or shifted to occur during less carbon-intensive periods.

    Expected Results

    1. All checks above are true.
  33. WD14-4: Provide Low-Impact, Possibly Delayed Methods of Interaction

    Applicability

    This technique is Advisory to meet Success Criteria within 3.14 Develop a Device-Adaptable Layout.

    Description

    Provide a mechanism for interacting with a website or application using more unusual methods that often have a reduced overall impact (such as a lower energy requirement). These indirect methods, such as syndication feeds or the browser reader view, can even eliminate the heavy impact of rendering that can affect hardware.

    This technique is most useful when it can be easily recognized by visitors and can be used instead of having to visit the main website or application. In terms of machine testability, if these low-impact techniques use a common pattern and that pattern can be easily identified within the source code then naturally it can be identified as such and can be marked as meeting the criteria.

    Examples

    1. Quick Response (QR) codes are a great example of objects that can be included not only online but offline in branding to trigger both links and in-app actions with only the impact of rendering the object (or printing it).
    2. Smartwatches may have some Internet connectivity. Due to the small screen size, they have a low energy output but also a small battery, so having a low-fidelity version of a website can be really useful.

    Tests

    Procedure

    1. Check for low-impact interaction methods such as QR codes, smartwatch layouts, or voice assistant tooling.

    Expected Results

    1. All checks above are true.
  34. WD15-1: Optimize a Codebase Through Rewriting for Performance

    Applicability

    This technique is Advisory to meet Success Criteria within 3.15 Use Beneficial JavaScript and Its APIs.

    Description

    Improve the quality of JavaScript code by examining the contents and identifying any issues that could be deemed a matter of sustainability, such as anything that could trigger a large load upon hardware resources (and thus strain battery charge cycles). In addition, matters of accessibility and performance within such code can be considered.

    To meet the success criteria, rewriting for performance should only be done if the act does not cost more in effort (and creator impact) than it gains in impact. For machine testability, it's critical that such actions use the tools at their disposal (libraries, patterns, even AI) to heuristically identify any issues that may require resolution within the project's source code.

    Examples

    1. Solid.js author Ryan Carniato created this thoughtful post in 2019 about how they went about integrating web performance in their framework-building processes. It is based on a widely recognized JavaScript study.
    2. The team at Astro used open data to examine the performance ratings of various competing JavaScript frameworks to identify the differences between physical and real-world metrics (there are lots of useful charts).

    Tests

    Procedure

    1. Check that existing code has been rewritten to be less process intensive, etc.
    2. Check that any existing tooling is considered sustainable and / or performant.
    3. Check if JavaScript frameworks can be replaced with others that are more performant.
    4. If frameworks can be replaced, check that they are replaced.

    Expected Results

    1. All checks above are true.
  35. WD16-1: Ensure All Vulnerabilities Are Removed and Code Is Linted

    Applicability

    This technique is Advisory to meet Success Criteria within 3.16 Ensure Your Scripts Are Secure.

    Description

    Scan the website or application's code for issues that may otherwise leave a product or service vulnerable to exploitation. This will include not only first-party code created by the project owners but also imported third-party tooling. Any third-party library or framework found to contain harmful code should be removed before production.

    Because this can occur at both the client side and server side, internal access may be required if server-side scanning is to be included within machine testability. However, within the scope of the success criteria, if internal access is not available or permitted, testing the code using known methods and scanning client-side code for harmful techniques should help with passing.

    Examples

    1. RayGun provides a great article about vulnerabilities and best practices when working with JavaScript. Some of the things mentioned are covered within the WSGs in other sections, all others can be followed here.
    2. Snyk is one of the most well-known providers of data regarding code vulnerabilities. They can be integrated within existing tooling or you can browse their database of known security issues to be aware of.

    Tests

    Procedure

    1. Check the product or service for vulnerabilities within your code, ensuring they are resolved.

    Expected Results

    1. All checks above are true.
  36. WD17-1: Identify and Remove Unused Packages and Dependencies

    Applicability

    This technique is Advisory to meet Success Criteria within 3.17 Manage Dependencies Appropriately.

    Description

    Provide a mechanism for identifying whether a library or framework is currently in use and, if not, remove said tooling and its unused dependencies from the production code. This will help reduce the PPP burden of the website, as less bloat will reach the visitor, which can have a large impact if they are on a low-powered device or a restricted data plan.

    For machine testability, internal access will be required to determine if web developers or creators will require specific tooling within a project. If internal access is given, the package.json file is an ideal place to locate the packages being requested and these can be compared against the public-facing website to identify redundancy in the toolchain process.
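
    A rough comparison between declared and imported packages might be sketched as follows; this is a simplified stand-in for dedicated dependency-audit tools, and the import-matching regular expression is an assumption (it does not handle scoped packages fully).

```javascript
// Compare package.json dependencies against modules actually imported in
// source files; anything declared but never imported is a redundancy signal.
function unusedDependencies(pkg, sourceFiles) {
  const declared = Object.keys(pkg.dependencies || {});
  const used = new Set();
  for (const src of sourceFiles) {
    for (const m of src.matchAll(/(?:require\(|from\s+)["']([^"'./][^"']*)["']/g)) {
      used.add(m[1].split('/')[0]); // keep only the package name
    }
  }
  return declared.filter((dep) => !used.has(dep));
}
```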

    Examples

    1. An application that removes functionality due to poor uptake no longer requires certain packages to contain scripts that were dedicated to running that function alone. Such dependencies have become redundant.

    Tests

    Procedure

    1. Check if a package.json has been submitted for identifying packages.
    2. If so, use that. Otherwise, check the source code for potential orphan references.

    Expected Results

    1. All checks above are true.
  37. WD17-2: Implement Modularized Frameworks and Libraries

    Applicability

    This technique is Advisory to meet Success Criteria within 3.17 Manage Dependencies Appropriately.

    Description

    Provide a mechanism for determining how much of a library or framework you require for your website or application to function, and then request just that modularized segment to reduce the overall payload and the load on the visitor's hardware during rendering. This is especially critical for third-party scripting but is also useful for CSS.

    Machine testability should attempt to identify when third-party libraries are present within a codebase and, if the framework or library supports loading a lightweight or modular version of its features (thereby only loading what you require, when you require it), ensure that the project in question does so rather than loading a suboptimal "fully-loaded" version of the project.

    Examples

    1. Bundlephobia and pkg-size allow you to find the true size of a package on NPM by calculating not just the size of the item you wish to install but all of the dependencies the item requires to function.

    Tests

    Procedure

    1. Check the package size to determine at what size breaking into modular components should occur.
    2. Check the function sizes to identify if refactoring to smaller functions should also occur.

    Expected Results

    1. All checks above are true.
  38. WD17-3: Ensure That Dependencies Are Verified As Up-to-Date

    Applicability

    This technique is Advisory to meet Success Criteria within 3.17 Manage Dependencies Appropriately.

    Description

    Ensure that the deliverables are current and up-to-date, thereby providing any necessary security patches or bug fixes that are required to remain operational. This is especially true when a website or application is dependent on third-party libraries or frameworks and has a complex toolchain. As such, verifying the maintenance status of work is critical.

    To machine test or verify the dependency chain of a project, the source code should identify the project and version (by filename or URL). If during the tree shaking or production process all notifications of what third-party code has been used have been removed, internal access will be required to access the package.json (barring heuristic fingerprinting of packages).

    Examples

    1. As you would expect, the jQuery project website uses the latest version of the jQuery framework. Because it is a regularly maintained project and can afford rapid update cycles, it stays current and up-to-date.

    Tests

    Procedure

    1. Check if a package.json has been submitted for identifying packages.
    2. If so, use that. Otherwise, check the source code for package version data.
    3. Check that all packages are current to at least the latest non-breaking (minor) release, if not the latest major release.

    Expected Results

    1. All checks above are true.
  39. WD18-1: Provide the Expected Website Assets in Your Base Directory

    Applicability

    This technique is Advisory to meet Success Criteria within 3.18 Include Files That Are Automatically Expected.

    Description

    Ensure that the non-HTML files expected to be located within the base directory of a website can, firstly, be found and, secondly, are correctly formatted (using the right syntax) and match the expectations of the product or service that requires them, to benefit the visitor with a better user-experience, increased accessibility, and sustainability.

    For machine testability, tooling should aim to identify that the listed files are provided (if not, flag these, justifying the need for their inclusion). If they are included, they should be examined to ensure they are semantically correct, especially if they are formed using a strict language such as XML. If they fail to validate correctly, errors should constitute a failure to meet the success criteria.

    Examples

    1. Amazon has a highly recognizable brand and they provide a favicon in the base of their website so that when people visit them, their icon will show correctly in the top left of the tab or window (browser dependent).
    2. Developer tool Can I Use provides an opensearch.xml file in the base of their website so that visitors can rapidly search the website in the future using a keyword and tab combination.

    Tests

    Procedure

    1. Check the base directory for a favicon in 16×16, 32×32, and 48×48 format.
    2. Check the base directory for a well-formed robots.txt document.
    3. Check the base directory for a well-formed opensearch.xml and sitemap.xml file.
    4. Check the base directory for a well-formed site.webmanifest document.
    5. Check that the links contained within these documents don't contain errors.

    Expected Results

    1. All checks above are true.
  40. WD19-1: Provide Useful Plaintext Website Assets in the Expected Locations

    Applicability

    This technique is Advisory to meet Success Criteria within 3.19 Use Plaintext Formats When Appropriate.

    Description

    Provide a mechanism for adding useful contextual information to a website or application within recognized locations in plaintext format (so that it doesn't impact the rest of the product or service). These standardized formats each have a defined beneficial purpose and are considered to be low-impact (sustainably speaking) so are safe to include.

    This technique is most useful when the assets are formatted as per their specification for purposes of readability, as that is how individuals and machines will be expected to understand them. For machine testability, using heuristics to scan for recognized features within the text should help you identify the instructions or any features of note that could be weighed in calculations.

    Examples

    1. WordPress has an ads.txt file at the base of their website that contains details of the advertisers they work with (and tries to increase transparency by revealing where impressions are purchased and resold).
    2. Web Developer Sarah Tamsin has a fun example of a humans.txt file that provides some witty content and a few basic details about her (plus an ASCII image that makes for a nice easter egg bonus feature).
    3. The security.txt specification writers have created a basic example of what could be included within the file. As you can see from the example, the location of the document differs from other plaintext examples.

    Tests

    Procedure

    1. If you have advertising, check that the base directory contains a well-formed ads.txt document.
    2. Check that the base directory contains a well-formed security.txt and robots.txt document.
    3. Check that the base directory contains a well-formed carbon.txt and humans.txt document (optional).

    Expected Results

    1. All checks above are true.
  41. WD20-1: Replace Deprecated Code With Suitable Alternative Syntax

    Applicability

    This technique is Advisory to meet Success Criteria within 3.20 Avoid Using Deprecated or Proprietary Code.

    Description

    Identify any deprecated code that is no longer recommended for use within specifications. As older code is often not as highly optimized for performance (browser makers often cease to maintain abandoned and deprecated features), it can take longer to render and thrash hardware, causing higher emissions, so as a general rule it is worth resolving.

    For machine testability, deprecations in languages can also be found within specifications from providers like the W3C. Documentation providers like MDN also provide extremely thorough coverage of syntax that has been deprecated such as within HTML, CSS, and JavaScript. If replacements for any techniques can be offered, provide them, otherwise recommend removal from your code.
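
    A minimal sketch of scanning for deprecated HTML elements; the short list here is illustrative, and the deprecation catalogs from specifications and documentation providers like MDN would drive a fuller rule set.

```javascript
// Scan an HTML string for a handful of deprecated elements.
function deprecatedElements(html) {
  const deprecated = ['font', 'center', 'marquee', 'big', 'blink'];
  return deprecated.filter((el) => new RegExp(`<${el}[\\s>]`, 'i').test(html));
}
```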

    Examples

    1. The Space Jam website is considered a classic piece of 90s Internet design with its total disregard for function over form, but if you look at the code, it gets even uglier, with table-based layouts and deprecated code.

    Tests

    Procedure

    1. Check that the project doesn't contain any outdated layout techniques (such as table-based design).
    2. Check that the website or application doesn't contain deprecated syntax features.

    Expected Results

    1. All checks above are true.
  42. WD20-2: Replace Outdated Web Standards With Suitable Alternatives

    Applicability

    This technique is Advisory to meet Success Criteria within 3.20 Avoid Using Deprecated or Proprietary Code.

    Description

    Identify technologies and web standards that may be in use but have been superseded by newer technologies and web standards. In certain cases, the standards in use may still be actively supported by web browsers and if there is a sustainability reason to retain the feature, continue. Otherwise, updating the code should be considered.

    As with outdated and proprietary code, web browsers tend to stop providing optimizations for outdated practices and as such, using technologies that have a newer replacement may have inherent sustainability benefits. For machine testability, standards providers list current web standards and this can be matched against in-use technologies (which often can be detected in code).

    Examples

    1. The Web Design Museum houses a showcase of screenshots of websites that were built using the proprietary technology Adobe Flash. Along with Java applets and Microsoft Silverlight, it no longer works.

    Tests

    Procedure

    1. Check if the existing product or service uses outdated technologies (VML / P3P, etc) and if so recommend replacements (X3D / POWDER, etc).
    2. Check for proprietary web plugins (like Flash) and remove them until a replacement can be created.

    Expected Results

    1. All checks above are true.
  43. WD21-1: Encourage the Use of More Sustainable Creation Toolchains

    Applicability

    This technique is Advisory to meet Success Criteria within 3.21 Align Technical Requirements With Sustainability Goals.

    Description

    Provide a mechanism of choice when creators decide how to build their product or service. This technique aims to identify how they created their product or service (if possible) and make recommendations based on the sustainability of such methods. Build steps and tooling add additional complexity to a project and this should be weighed against other factors (such as individual / team ability) to enable sustainable creation toolchains.

    For machine testability, internal access may be required if no traces of the creation tool have been left in the production code. If the production code does however contain fingerprints of the tool that created it, recommendations can be made to prioritize static over dynamic and flat-static over generated-static as reducing the processing effort on servers and client machines is meaningful.
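
    As one hedged sketch of fingerprint detection, the generator meta tag that many tools leave in production markup can be matched with a regular expression; the generator names listed are illustrative assumptions, not an exhaustive signature set.

```python
import re

# Illustrative static-site-generator fingerprints; real tooling would
# maintain a far larger signature database.
STATIC_GENERATORS = {"jekyll", "hugo", "eleventy"}

def detect_generator(html: str):
    """Return the tool named in a <meta name="generator"> tag, if any."""
    match = re.search(
        r'<meta\s+name=["\']generator["\']\s+content=["\']([^"\']+)["\']',
        html, re.IGNORECASE)
    return match.group(1) if match else None

def is_static_generator(name: str) -> bool:
    """Check a detected generator against the known static tools."""
    return any(g in name.lower() for g in STATIC_GENERATORS)
```

    A static-generator match suggests the flat-static or generated-static paths preferred above; no match means internal access would be needed to identify the toolchain.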

    Examples

    1. Jekyll is one of many tools for producing a website - in this case, it uses Markdown and bundles the plaintext into a finalized, pure-HTML state (which should have a low carbon output).

    Tests

    Procedure

    1. Check for the tool used for creation and determine its sustainability.
    2. If it has a low impact, continue. Otherwise, flag for replacement.

    Expected Results

    1. All checks above are true.
  44. WD21-4: Test All Third-Party Resources for Sustainability Impacts

    Applicability

    This technique is Advisory to meet Success Criteria within 3.21 Align Technical Requirements With Sustainability Goals.

    Description

    Provide a mechanism for testing third-party plugins, extensions, and themes (if such devices are used within a product or service's creation process) for any sustainability impacts they may have. These often are included within CMS software and are external assets that can add overheads to the website or application being rendered.

    For machine testability, many CMS products provide trace details of included features within the source code of a page (as conditional comments) due to these having to be loaded as third-party resources. Internal access may be required to get a full picture of everything being loaded. Third-party resources should be tested against the WSGs separately to identify sustainability issues.
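
    A minimal sketch of detecting such external assets, assuming only that resources are referenced via src or href attributes with absolute URLs; real tooling would parse the DOM and also observe network requests.

```python
import re
from urllib.parse import urlparse

def third_party_resources(html: str, first_party: str):
    """List absolute resource URLs whose host is not the first-party
    domain; each result should be tested against the WSGs separately."""
    urls = re.findall(r'(?:src|href)=["\'](https?://[^"\']+)["\']', html)
    return [u for u in urls if urlparse(u).hostname != first_party]
```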

    Examples

    1. When site owners use WordPress as their primary content management system, they often want a distinctive layout for their blog or website. As such, they will install a custom theme, and its code - along with any sustainability impacts - will be imported into the total bundle size. So be careful what you choose!

    Tests

    Procedure

    1. Check if any third-party resources exist within the page or application.
    2. If internal access is required to verify resource usage, grant it.
    3. Check that component against the WSGs for its sustainability compliance.
    4. Check if high-impact third-party resources offer self-hosted alternatives.
    5. Check that self-hosted alternatives are used if provided.

    Expected Results

    1. All checks above are true.
  45. WD22-1: Ensure the Latest Version of a Syntax Language Is Being Used

    Applicability

    This technique is Advisory to meet Success Criteria within 3.22 Use the Latest Stable Language Version.

    Description

    Ensure that the product or service is making use of the latest version of the chosen syntax language. As with keeping dependencies up-to-date, having the latest version of a syntax language can have sustainability benefits in terms of performance and security enhancements, as well as useful, optimized new features, so it's worthwhile.

    For machine testability, it will be difficult to verify that the latest version of a syntax language is being used because, for security reasons (to avoid exploitation), most servers will not give out such information. As such, internal access will be required, or a mixture of feature detection, source-code examination, and potentially asking the user of the tooling questions to establish versions.
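
    Once a version has been established by one of the routes above, comparing it against the vendor's supported list is straightforward; the table here is an assumed snapshot for illustration and would need to be sourced live in practice.

```python
# Assumed snapshot of supported versions for illustration only; fetch
# the real list from the language vendor (e.g. PHP's supported-versions
# page) at audit time.
SUPPORTED = {"php": ["8.2", "8.3", "8.4"]}

def version_is_current(language: str, version: str) -> bool:
    """True when the detected major.minor release is still supported."""
    major_minor = ".".join(version.split(".")[:2])
    return major_minor in SUPPORTED.get(language, [])
```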

    Examples

    1. PHP provides a list of the supported and unsupported versions of their programming language. You should be able to manage updates and upgrades within a hosting provider's control panel (if required).

    Tests

    Procedure

    1. Check that the version of a programming language being used is the latest.

    Expected Results

    1. All checks above are true.
  46. WD23-1: Replace Code That Can Instead Use More Efficient Native APIs

    Applicability

    This technique is Advisory to meet Success Criteria within 3.23 Take Advantage of Native Features.

    Description

    First identify and then determine if custom JavaScript functions could be replaced with a more optimal and established native API call. This also applies to creating custom elements instead of using established features native to the browser. With the evolution of JavaScript, opportunities to optimize code with newer, cleaner techniques occur regularly.

    It is preferable to measure the newer implementation against the older one to identify whether either provides a more optimized implementation (via performance and sustainability metrics) before replacement. This technique is also most useful when considering whether the custom component provides additional functionality that the native component does not, and whether those features outweigh the benefits of the cleaner native implementation.
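
    One hedged way to surface candidates for replacement is to pattern-match legacy idioms in the shipped JavaScript; the mapping below is a tiny illustrative sample, and an AST-based audit would be more reliable than regular expressions.

```python
import re

# Illustrative legacy-pattern-to-native-API mapping (assumed examples).
NATIVE_REPLACEMENTS = {
    r"new XMLHttpRequest\(": "fetch()",
    r"\$\.ajax\(": "fetch()",
    r"\$\(document\)\.ready": "DOMContentLoaded event",
}

def native_api_opportunities(js_source: str):
    """Flag legacy JavaScript patterns that a native API could replace;
    each hit should still be measured before replacement, as above."""
    return [replacement
            for pattern, replacement in NATIVE_REPLACEMENTS.items()
            if re.search(pattern, js_source)]
```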

    Examples

    1. MDN provides a great list of native Web APIs for JavaScript including the interfaces that can be used to call them. Within each of those items, you will find detailed documentation along with examples of usage.

    Tests

    Procedure

    1. Check if the benefits of a native browser control exceed creating a custom element.
    2. Check if a native API exists that could replace custom calls, methods, or functions.
    3. In either of these instances, if the answer is yes, check that the native option is used.

    Expected Results

    1. All checks above are true.

Hosting, Infrastructure and Systems


  1. HIS01-1: Publicly Disclose Relevant Sustainability Metrics to Visitors

    Applicability

    This technique is Advisory to meet 4.1 Choose a Sustainable Hosting Provider.

    Description

    Ensure that anyone who wishes to compile an impact statement (as a consumer of your services) based on conditions such as PUE, WUE, and CUE can calculate a provider's energy utilization from the available data, using whichever variables are required. Making important metrics publicly visible to both customers and visitors increases awareness, reduces the potential for greenwashing, and allows service providers to report with proof of compliance.

    For machine testability, this would require each customer account (of a hosting provider) to be equipped with a publicly visible stats page indicating resource utilization, including useful information on variables like CPU, GPU, RAM, and data usage, plus a calculation on equivalent water or other consumable resource utilization. This can be linked to from within the service status page. In addition, the methodology behind the calculations should also be provided in a centralized location.

    Examples

    1. OVHCloud and Scaleway both showcase different ways of displaying data about their environmental impact and notably, the variables that make up that number (with calculations and individual metrics provided). Both dashboards and sustainability panels are equally useful tools for showcasing such information.
    2. A service provider could make the internal resource usage stats page that comes with many hosting packages publicly visible (upon request of the website or application provider), and include sustainability data within.

    Tests

    Procedure

    1. Check that the provider has made available relevant hosting metrics publicly.
    2. Check that the PUE, WUE, and CUE are each listed for each data center.
    3. Check that individual servers provide live CPU, GPU, RAM, and data use tracking.
    4. Check that logging of past values over time is provided for calculating emissions.

    Expected Results

    1. All checks above are true.
  2. HIS01-4: Ensure That the Project Is Hosted on a Sustainable Platform

    Applicability

    This technique is Advisory to meet 4.1 Choose a Sustainable Hosting Provider.

    Description

    Verify that the product or service is being served through a sustainable provider. If the hosting provider (for example) generates its electricity from renewables and can document other intensity reduction techniques (such as using natural cooling and offering auto-scaling packages), this can contribute towards passing the success criteria.

    Because many factors can go towards how sustainable a provider is, machine testing will need to be vigilant in accounting for several variables. As such, using an established directory of sustainable hosts could be one way to pass or fail, or running validation checks against factors that affect the sustainability of services could be another (this may require internal access or knowledge).

    Examples

    1. The Green Web Directory from the Green Web Foundation provides a list of sustainable hosting providers (based on their own criteria). If a service appears in a reputable list such as this, it may meet the success criteria.
    2. Krystal is an example of a provider offering detailed information about their sustainability process including their B-Corporation status, and the schemes they have signed up with to validate and reinforce their claims.

    Tests

    Procedure

    1. Check that the hosting provider has a sustainability statement.
    2. Check that the hosting is powered by 100% renewable energy.
    3. Check that the hosting platform does not use carbon credits to offset its energy requirements.
    4. Check for recognized green initiatives that the platform may be subscribed to.

    Expected Results

    1. All checks above are true.
  3. HIS02-1: Provide Caching Responses To Optimize Page Reloads

    Applicability

    This technique is Advisory to meet 4.2 Optimize Browser Caching.

    Description

    Provide a mechanism for websites and applications to ensure that content is cached for the correct length of time, with the intention of reusability. Because certain types of content will change and need to be reloaded more frequently than others, it makes sense to cache those that change the least for the longest period (to reduce data transfer rates).

    For machine testability, identifying cache response times for various file formats can be done by checking the cache-control headers for various files as they are requested from the server. If the length of time the content is being held does not appear to be long enough (or is too long for dynamic content), then recommendations can be made to improve the used formula.
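
    The header check described above can be sketched as follows; the thresholds are assumptions for illustration (a week for static assets, a day for HTML) and should be tuned to how often the project's content actually changes.

```python
def max_age(cache_control: str):
    """Extract the max-age directive (seconds) from a Cache-Control
    header value, or None when it is absent."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive.split("=", 1)[1])
    return None

def cache_ok(content_type: str, cache_control: str) -> bool:
    """Assumed policy: HTML cached no longer than a day, other assets
    at least a week; real rules would be more nuanced (e.g. pages that
    legitimately use no-store)."""
    age = max_age(cache_control)
    if age is None:
        return False
    if content_type.startswith("text/html"):
        return age <= 86400
    return age >= 604800
```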

    Examples

    1. For individuals who use content management systems, some plugins offer automatic caching management. They each have a range of features but essentially allow you to automate the caching process.

    Tests

    Procedure

    1. Check the cache-control header of each file has the correct length set.

    Expected Results

    1. All checks above are true.
  4. HIS02-2: Identify the Potential for PWAs and Other Useful Features

    Applicability

    This technique is Advisory to meet 4.2 Optimize Browser Caching.

    Description

    Identify if techniques could be used to increase the performance within the user-experience. Examples of this could include identifying if cookies could (and should) be used within an interface or if the website or application could be transformed into a progressive web application to provide advantages (project-wide) such as offline availability.

    Machine testability to identify existing features is relatively straightforward as it simply requires identifying through the source code where cookies, local database requests, service, or web workers are implemented and ensuring that they are both proportionate and correctly marked up. If these don't exist it will require analyzing the page (or components) for potential opportunities for use.
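
    The source-level detection can be sketched with two simple signals; this is a heuristic only, since real tooling would execute the page to confirm that the service worker actually registers.

```python
import re

def pwa_signals(html: str, scripts: str):
    """Detect surface traces of PWA support: a manifest link in the
    markup and a service-worker registration call in the JavaScript."""
    return {
        "manifest": bool(re.search(r'<link[^>]+rel=["\']manifest["\']', html)),
        "service_worker": "serviceWorker.register" in scripts,
    }
```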

    Examples

    1. Gmail makes sense as a progressive web application because if the Internet connection is lost (at any point), being able to continue using certain features without losing your progress is critical to functionality.
    2. NPM saves information about your membership login locally (as do many other providers) to avoid needing to re-enter your details repeatedly. This is a useful feature as it increases a product's perceived performance.

    Tests

    Procedure

    1. Check if cookies are in use and whether they can be added, removed, or replaced.
    2. Check if local data storage is being used and whether it can be added, removed, or replaced.
    3. Check if a PWA or other worker functions are in use and whether they can be added or removed.
    4. If a PWA exists, check for a well-formed manifest file, an icon, and semantically correct JavaScript.

    Expected Results

    1. All checks above are true.
  5. HIS03-1: Ensure That Files Are Compressed Correctly Before Serving

    Applicability

    This technique is Advisory to meet 4.3 Compress Your Files.

    Description

    Provide a mechanism to serve content correctly using a recognized encoding method to reduce the payload size of websites and applications. While this means there will be an additional rendering effort on the client-side (decoding the file), the increased speed in loading such files has a PPP benefit that justifies the added effort.

    For machine testability, identifying content encoding methods for file formats can be done by checking the content-encoding headers for various files as they are requested from the server. This can be achieved on an individual basis or through server configuration files. If the content is not being encoded (unless it has been encoded at the source and no further optimization can be made), recommendations can be made to use a compatible compression technique.
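
    A sketch of the header check, under the assumption that text-based formats should always arrive encoded while already-compressed binaries are exempt; the format and encoding lists are illustrative, not exhaustive.

```python
# Illustrative lists; extend with the formats the project actually serves.
COMPRESSIBLE = ("text/html", "text/css", "application/javascript",
                "image/svg+xml")
ACCEPTED_ENCODINGS = ("gzip", "br", "zstd")

def encoding_ok(content_type: str, content_encoding: str) -> bool:
    """Text assets should carry a recognized Content-Encoding;
    pre-compressed binaries (JPEG, WOFF2, ...) pass without one."""
    if not content_type.startswith(COMPRESSIBLE):
        return True
    return content_encoding in ACCEPTED_ENCODINGS
```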

    Examples

    1. The Solid Chocolate Company serves all of their content using GZIP compression headers ensuring that data gets to the visitor as quickly as possible (in addition to compressing image and media assets at source).

    Tests

    Procedure

    1. Check all text assets for content-encoding headers to verify compression takes place.
    2. Check that content-encoding headers work for formats not currently in use, as a forward-compatibility effort.
    3. Check if the compression method used is the most optimal type available.

    Expected Results

    1. All checks above are true.
  6. HIS03-2: Compress All Assets That Can Be Optimized Before CMS Submission

    Applicability

    This technique is Advisory to meet 4.3 Compress Your Files.

    Description

    Provide additional mechanisms for non-technical users so that the potential for compression is not only checked once files are in place within a website or application's structure (uploaded), but also during the upload process, thereby taking the opportunity to meet sustainability criteria on-the-fly as part of the content management system.

    To pass this success criterion, a CMS would need to take assets that are uploaded and identify compression that could be applied either by changing file formats or by applying algorithms to it (or both). If improvements can be made, these should replace the original files by default. Machines can verify this is occurring by identifying if the best format is used or if further compression could be applied (this could form part of a CMS sustainability rating).

    Examples

    1. Tooling such as Squoosh and SVGOMG can help website creators find optimizations before uploading or submitting content to a product or service (other services offer APIs that can be integrated into tooling).
    2. There is a whole range of plugins available for WordPress (and other CMS products) that can assist with automatic image optimization. This can help reduce the ecological impact they have before public visibility.

    Tests

    Procedure

    1. Check that all assets are optimized for the web in size, format, and quality before publication.
    2. If assets have been altered prior to publication, check users are pre-warned of any changes.
    3. Check that the files published are the optimized versions (more than one may be necessary).
    4. Check if a CMS will be necessary or useful and if so, ensure static assets are uploaded there.

    Expected Results

    1. All checks above are true.
  7. HIS04-1: Provide Custom Error Pages To Handle Unexpected Events

    Applicability

    This technique is Advisory to meet 4.4 Use Error Pages and Redirects Carefully.

    Description

    Provide a mechanism for when visitors land on a region of your website or application that either does not exist or appears to be broken. Such encounters can be disorienting, and without useful signposting and interactions to get visitors out of such events, additional unnecessary page loads or support requests can occur, leading to emissions.

    Machine testing for errors can simply involve triggering the events and identifying if the page loaded in response is one that is server-generated or one that has been customized by the website or application. If it has been customized, does it provide useful resources to help resolve the problem? This could be measured using analytics (which may require internal access).
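
    One hedged heuristic for the "customized and helpful" judgment: a genuine custom 404 tends to contain navigation aids rather than a bare server banner. The marker strings below are assumptions to tune per project.

```python
def looks_custom(status_code: int, body: str) -> bool:
    """Heuristic check that a 404 response is a helpful custom page
    rather than a default server error; markers are assumed examples."""
    if status_code != 404:
        return False
    helpful = ("search", "home", "contact")   # signposting we hope to see
    bare = ("apache", "nginx", "iis")         # default-banner giveaways
    lowered = body.lower()
    return any(m in lowered for m in helpful) and not any(b in lowered for b in bare)
```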

    Examples

    1. HubSpot has some great examples of 404 (page not found) error pages showcasing good user-experience design in addition to providing a few helpful pieces of advice regarding how they should function.
    2. This GitHub package provides support for multiple servers and offers a decent boilerplate for anyone wishing to ensure coverage for a wide range of common errors that can occur during site browsing.

    Tests

    Procedure

    1. Check that the website or application has a custom set of pages to respond to errors.
    2. Check that each error status type has its own page and unique situation handling.
    3. Check that such pages provide signposting to resolve the error they encountered.
    4. Check that existing links to navigation related pages remain fully functional.
    5. Check that links to customer support are provided when solutions do not work.

    Expected Results

    1. All checks above are true.
  8. HIS04-2: Resolve All Anchor References Which Trigger Errors or Redirects

    Applicability

    This technique is Advisory to meet 4.4 Use Error Pages and Redirects Carefully.

    Description

    Reduce the amount of wasted rendering effort on the part of visitors and users of a website or application by ensuring that products and services regularly and routinely check that all of the links within pages are correct and do not need to be updated. This can be a common issue amongst large sites or those with older content (as the material can be moved or disappear online), so links must be updated to reflect this.

    Regarding machine testability, verifying that all links within domains and subdomains of a product or service are correctly linked to and don't result in redirect notices or server / missing (not found) errors is essential for meeting this success criterion, as it showcases the currency of the content. It should also be noted that links apply equally to images, media, HTML head references, and other materials referenced within the page, not just content anchor links.
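
    The classification step of such a checker can be sketched as below; the status codes are assumed to have been gathered by a crawler beforehand, keeping the sketch itself offline.

```python
def audit_links(statuses):
    """Classify crawl results (URL -> HTTP status code). Redirects,
    dead links, and server errors all fail the procedure."""
    problems = {}
    for url, code in statuses.items():
        if 300 <= code < 400:
            problems[url] = "redirect: update the reference"
        elif code in (404, 410):
            problems[url] = "dead link: remove or replace"
        elif code >= 400:
            problems[url] = f"error {code}: investigate"
    return problems
```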

    Examples

    1. Open source products like Link Checker can verify the status of references across multiple pages. This can be a quick and easy way of locating issues that have built up across an entire product or service.
    2. The W3C has a simple web-based interface for validating links for errors on a page-by-page basis which can be helpful if you don't want to have to install applications or self-host packages to run tests.

    Tests

    Procedure

    1. Check that no anchor reference or in-page reference contains links that do not resolve (server errors).
    2. Check that no anchor reference or in-page reference contains dead (erroneous) links.
    3. Check that no anchor reference or in-page reference contains redirection links.
    4. Check that links to external websites do not result in forbidden errors.

    Expected Results

    1. All checks above are true.
  9. HIS06-4: Check That Pages Have Not Been Subjected to Hijacking

    Applicability

    This technique is Advisory to meet 4.6 Automate To Fit the Needs.

    Description

    Provide a mechanism for identifying compromised content within a page of a website or application. As having a secure product or service is critical to meeting the societal (people) aspect of PPP and compromised websites can place unnecessary burdens on hardware in certain circumstances, being able to identify and remove such materials is essential.

    Machine testability for these events will require regular monitoring of pages for common features of hijacking events such as critical links suddenly redirecting outside of the primary domain, pages being redesigned with hacking notices, or simply being taken down and replaced with other data. Maintaining a library of established dangerous links, patterns, and features can help flag such issues.
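
    A minimal monitoring sketch over a page snapshot; the phrases and blocklist are illustrative stand-ins for the maintained threat-intelligence feeds a real monitor would consume.

```python
# Illustrative markers only; production monitoring would use maintained
# threat-intelligence feeds and a much larger phrase library.
SUSPICIOUS_PHRASES = ("hacked by", "site seized", "pay in bitcoin")
BLOCKLISTED_HOSTS = {"malware.example.net"}

def hijack_signals(page_text: str, link_hosts):
    """Scan page text and outbound link hosts for compromise markers."""
    lowered = page_text.lower()
    signals = [f"notice found: {p}" for p in SUSPICIOUS_PHRASES
               if p in lowered]
    signals += [f"blocklisted link host: {h}" for h in link_hosts
                if h in BLOCKLISTED_HOSTS]
    return signals
```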

    Examples

    1. Compromised websites may present in many forms, but if they are detected by Google, the search giant will intercept the visitor before they get exposed to the problem code and present them with an error.

    Tests

    Procedure

    1. Check links do not contain abnormalities such as significant or feature-critical external redirections.
    2. Check pages do not contain tracking, mining, or advertising that has occurred suddenly.
    3. Check pages do not contain closure, ransom, or hacking notices.
    4. Check pages do not trigger sudden redirections to other domains.
    5. Check that the domain has not been included in a security firm's or Google's compromised-site lists.

    Expected Results

    1. All checks above are true.
  10. HIS07-1: Ensure That Stored Data Has a Defined and Reasonable Expiry Date

    Applicability

    This technique is Advisory to meet 4.7 Maintain a Relevant Refresh Frequency.

    Description

    Ensure that mechanisms are in place to hold data for only as long as it is required. This will reduce redundancy from things like stale cookies, and it will also assist older devices that may have lower capacity disk space allowances by reducing the overall size of caches which will free up important space (that otherwise reduces performance).

    Identifying technologies for stashing data for a set time such as cookies, local databases, or other methods can be machine-tested. It's also worth identifying techniques that may be slower performing and making recommendations based on such implementations. Otherwise, verify that an expiry date does exist and if time has elapsed (or no date exists), flag the issue.
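
    The expiry verification can be sketched as a single check applied to each stored item, assuming the expiry timestamp has already been extracted (from a cookie's Max-Age/Expires attributes or a database record):

```python
from datetime import datetime, timezone

def expiry_status(expires_at, now=None):
    """Flag stored data with no expiry date or an elapsed one."""
    now = now or datetime.now(timezone.utc)
    if expires_at is None:
        return "no expiry set: add one or remove the data"
    if expires_at <= now:
        return "expired: remove the data"
    return "ok"
```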

    Examples

    1. Cookies are a classic example of data stored for a particular reason, but if they are left unchecked, accumulate in vast numbers. While they don't take up a lot of space, they still should be purged if no longer required.

    Tests

    Procedure

    1. Check the expiration date of locally stored data and, if that date has passed, remove the data.
    2. Check the expiration date of server-side stored data and if that date has elapsed, remove the information.
    3. If no expiration date exists for client-side or server-side data, either add one or remove the data.

    Expected Results

    1. All checks above are true.
  11. HIS09-2: Deliver Content Using the Correct, Most Secure Protocol

    Applicability

    This technique is Advisory to meet 4.9 Enable Asynchronous Processing and Communication.

    Description

    Guarantee that visitors who experience a website or application will have that experience delivered using a secure route. Transmitting data using non-secure means (now that SSL certificates can be obtained free of charge) should never occur, as there are inherent risks that can otherwise be avoided during the browsing session.

    Machine testing for protocols will involve examining whether the page is being served over HTTPS as opposed to HTTP (there should not be an occasion where both are available, especially if dynamic or interactive content exists). Furthermore, insecure protocols for non-browsing usage should be tested to verify they are disabled and replaced (if required) with more secure options.
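
    The first of these checks reduces to flagging any reference that still uses an insecure scheme; the scheme list below is an illustrative subset.

```python
from urllib.parse import urlparse

# Illustrative subset of insecure schemes to flag.
INSECURE_SCHEMES = {"http", "ftp"}

def insecure_references(urls):
    """Return URLs served over insecure schemes; everything should
    resolve via HTTPS (or another secure protocol)."""
    return [u for u in urls if urlparse(u).scheme in INSECURE_SCHEMES]
```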

    Examples

    1. FTP is a classic example of a protocol that while disabled for browsing in several web browsers, could pose security issues if not blocked entirely. It should instead be replaced with SSH (preferred) or SFTP (at minimum).

    Tests

    Procedure

    1. Check that the product or service is being delivered over HTTPS rather than HTTP.
    2. Check that alternative protocol usage is only done using secure means.

    Expected Results

    1. All checks above are true.
  12. HIS10-1: Provide Distributed Access to Static Resources via a CDN

    Applicability

    This technique is Advisory to meet 4.10 Consider CDNs and Edge Caching.

    Description

    Offer a mechanism for visitors to gain access to static website or application assets in a location that is closer to them than the origin host. By utilizing a content delivery network, large media files, images, fonts, and other static assets can be distributed to regional locations so that visitors can access them more quickly (as the route to the file is shorter).

    Machine testing for content delivery networks should take into account the balance as to whether adding assets to multiple locations (for performance) incurs more of a sustainability cost than loading them quicker. If the value in faster loading is greater, and a CDN is in use, this qualifies as a pass. If no CDN is being used, or one is in use but the added value is neutral or negative, question its impact and whether the material should be instead hosted onsite.

    Examples

    1. While it doesn't provide insights into sustainability, CDN Compare offers a great deal of context between several popular content delivery networks and the offerings they provide to help you make up your mind.

    Tests

    Procedure

    1. Check the existing location of static resources on the existing website.
    2. If they exist on the first-party domain, calculate the sustainability value of using a CDN.
    3. Check if the value of using a CDN will be beneficial; if so, use one.

    Expected Results

    1. All checks above are true.
  13. HIS10-2: Ensure That the CDN and Its Chain of Locations Are Sustainable

    Applicability

    This technique is Advisory to meet 4.10 Consider CDNs and Edge Caching.

    Description

    Test the sustainability of the CDN, as a third-party service provider, against the WSGs, in addition to any sustainability criteria and best practices that exist externally for infrastructure and hosting. This applies not only to the provider itself but to all of the nodes it uses to supply distribution, and any third parties it uses as part of its extended network. As such, calculating and vetting providers can be complex.

    For machine testability, providers should have information about sustainability offered within a sustainability statement along with details of compliance being met plus any sustainability features they provide that enhance their service. If this information is publicly available (and uses recognized sources), it could be parsed by machine, otherwise, a sustainable provider list should be sourced.

    Examples

    1. Akamai (PDF) provides a yearly PPP report that showcases and documents all of the changes they have made towards becoming more sustainable. While they have not fully become sustainable, they are getting there.

    Tests

    Procedure

    1. Check if the CDN provider has a sustainability statement in place.
    2. Check if the CDN provider meets any sustainability compliance obligations.
    3. Check that CDN against the WSGs for its sustainability compliance.
    4. Check that the CDN supply network (nodes) are also as sustainable as the provider.

    Expected Results

    1. All checks above are true.
  14. HIS10-3: Provide a Copy of the Product or Service Close to the Visitor

    Applicability

    This technique is Advisory to meet 4.10 Consider CDNs and Edge Caching.

    Description

    Provide a mechanism for mirroring the content (website or application) as close to the visitor as possible to reduce loading time. This can be done using CDNs or by analyzing metrics data and deciding, based on the locale of visitors, where the origin host would be best placed (both can be useful methods).

    Machine testing for this requires internal access to analytics logs to determine where visitor locations are and from this identify the best location to place information (if a CDN is used, this may be automatically done on your behalf meeting compliance requirements). If no internal access or data is available, general statistics about Internet usage could help identify trends.

    Examples

    1. Our World In Data, the World Bank, and Wikipedia provide some awesome information on average Internet usage worldwide. While this can't replace real-world demographics, it can be helpful for new projects.

    Tests

    Procedure

    1. Check if you only serve a local (single-country) audience; if so, avoid CDN usage.
    2. Check, using analytics, the average distribution of your product or service's visitors.
    3. Check a hosting or CDN provider's locations to match as closely as possible to your customers.

    Expected Results

    1. All checks above are true.
  15. HIS10-4: Avoid Hosting Dynamic Resources or Third-Party Scripts on the CDN

    Applicability

    This technique is Advisory to meet 4.10 Consider CDNs and Edge Caching.

    Description

    Prevent non-first-party dynamic resources from being loaded through a content delivery network. The justification is that, due to browser mechanisms such as CORS and cache partitioning, unfavorable things can occur if you try to load dynamic resources from third-party websites (these are security measures to prevent malicious code injection).

    As this issue doesn't affect static resources such as HTML, CSS, or JSON, these types of resources can be excluded when testing against the criteria. For more dynamic resources like JavaScript or server-side code, if the host URL differs from your own (this includes things like frameworks), flag an error and recommend self-hosting or integration with existing tooling.
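
    Given a crawled list of resource URLs, the flagging step can be sketched as below; the extension list standing in for "dynamic" is an assumption to broaden per project.

```python
from urllib.parse import urlparse

# Assumed markers of dynamic resources; extend per project.
DYNAMIC_EXTENSIONS = (".js", ".mjs", ".php")

def flag_third_party_dynamic(resource_urls, first_party):
    """Flag dynamic resources hosted off the first-party domain; static
    formats are excluded, per the technique above."""
    return [url for url in resource_urls
            if urlparse(url).hostname != first_party
            and urlparse(url).path.endswith(DYNAMIC_EXTENSIONS)]
```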

    Examples

    1. While D3 is a well-respected JavaScript framework for producing graphs and complex charts, its documentation recommends that vanilla HTML users load it from a CDN package (going against our guidelines).
    2. Underscore is another JavaScript framework that offers CDN packages while also offering direct download options. In this case, the download option is provided first (and code examples reflect this) which is good.

    Tests

    Procedure

    1. Check that your CDN only contains static website assets.
    2. Check that JavaScript on CDNs is first-party hosted only.

    Expected Results

    1. All checks above are true.
  16. HIS12-2: Provide a Review or Expiration Date on Website Content

    Applicability

    This technique is Advisory to meet 4.12 Store Data According to Visitor Needs.

    Description

    Provide a mechanism that ensures all content within a website or application gets regularly reviewed and, if necessary, refreshed, updated, or, if it has reached end-of-life, deleted or archived. Publicly providing a date when these events will occur offers an incentive for providers to act on these labels (and lets visitors note outdated content).

    Machine testability should offer a grace period for content that has passed its review date (as circumstances can lead to dates being missed); however, failings should still be flagged as potential issues to resolve. Labels within the content should be detectable by scripts that parse such dates, and if they don't exist, recommendations can be issued.
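    The grace-period logic could be sketched as below. The function name and the 30-day grace period are illustrative assumptions; detecting the date label in markup is left out for brevity.

```python
from datetime import date, timedelta

def review_status(next_review, today, grace_days=30):
    """Classify content against its declared review date.
    The 30-day grace period is an illustrative assumption."""
    if today <= next_review:
        return "current"
    if today <= next_review + timedelta(days=grace_days):
        return "review-due"   # within grace period: warn, don't fail
    return "overdue"          # flag as a potential issue to resolve
```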

    Examples

    1. On the National Health Service website, at the bottom of an article about a medical condition, a clear piece of text mentioning when the content was last reviewed and is next due for review is given.

    Tests

    Procedure

    1. Check that all articles (or content where material may go out-of-date) have a review or expiration date.

    Expected Results

    1. All checks above are true.
  17. HIS12-3: Provide a Classification Policy to Content if It's Deemed Sustainable

    Applicability

    This technique is Advisory to meet 4.12 Store Data According to Visitor Needs.

    Description

    Increase findability within content by offering mechanisms within the page for visitors to filter the content they are seeking by category, tag, or other variables. How this feature is presented to visitors can differ between implementations, as there are several well-defined patterns, but they can reduce problematic friction in the user experience.

    Machine testability for such features will involve examining the page's source code to identify repeating patterns across several pages (if correctly labeled, patterns such as tags or categories may be semantically easy to find). The sustainability of such implementations must weigh the rendering impact on each page against the benefit to the visitor experience.
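    A crude pattern detector might look like this. It is a sketch, not a real HTML parser: it only inspects double-quoted class attributes, and the function name and two-page repetition threshold are assumptions.

```python
import re

def shared_classification_patterns(pages_html, min_pages=2):
    """Find class-attribute tokens that repeat across several pages,
    hinting at tag/category markup. Heuristic sketch only."""
    token_pages = {}
    for i, html in enumerate(pages_html):
        tokens = set()
        for attr in re.findall(r'class="([^"]+)"', html):
            tokens.update(attr.split())
        for t in tokens:
            token_pages.setdefault(t, set()).add(i)
    # Keep only tokens that recur on at least min_pages pages.
    return sorted(t for t, pages in token_pages.items() if len(pages) >= min_pages)
```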

    Examples

    1. A List Apart provides a topics page which groups various pieces of content (articles) into categories. By doing this it reduces the number of pages an individual will need to click through to find what they are after.
    2. Maggie Appleton has a visually appealing way of laying out all of her different topics, content types, and even how well-developed her thoughts are. With all of this, you can filter her content accordingly.

    Tests

    Procedure

    1. Check that content within the website (or potentially within long articles) can be filtered by classification.
    2. Check that all classification patterns used on websites are repeated across all pages and clearly labeled.

    Expected Results

    1. All checks above are true.
  18. HIS12-5: Ensure That Backup Providers Are Classified As Sustainable

    Applicability

    This technique is Advisory to meet 4.12 Store Data According to Visitor Needs.

    Description

    Ensure that any third-party backup service provider is examined for sustainability impacts. This involves examining not only the impact they have simply in hosting your data, but also how sustainable their platform and wider business are. If their model meets any obligations laid out and aligns well with the WSGs, it may meet the success criteria.

    To identify how sustainable a service is, a list of recommended sustainable backup providers would be an ideal source to utilize (assuming that one exists and it is itself reputable). If no list exists, checking providers' claims, sustainability statements, and measuring their service against criteria such as the WSGs can be helpful to some extent, though utilizing established standards like the GRI and recommendations around infrastructure will also help.

    Examples

    1. An application producing large numbers of logs that need to be rotated on a regular cycle could back up those logs to an external sustainable source and remove them from the origin host to reduce ongoing emissions there.

    Tests

    Procedure

    1. Check if the backup provider has a sustainability statement in place.
    2. Check if the backup provider meets any sustainability compliance obligations.
    3. Check the backup provider against the WSGs for its sustainability compliance.

    Expected Results

    1. All checks above are true.

Business Strategy and Product Management

  1. BSPM01-1: Provide a Publicly Visible Sustainability and / or Ethical Policy Statement

    Applicability

    This technique is Advisory to meet 5.1 Have an Ethical and Sustainability Product Strategy.

    Description

    Ensure that the product or service in question has documentation to back up any claims relating to ethical policy decisions and sustainability, covering both its offline and digital practices. This can include anything within the scope of PPP and cover compliance with relevant legislation, standards, and best practices.

    Using heuristics, machine testing can identify key passages within the text (potentially by the headlines) to check for the various sections that should be present within a sustainability statement or code of ethics policy document. In addition, it's critical that the document exists and is easily found, so verify its location and check that references to it within the document are appropriate.
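    The headline heuristic could be sketched as follows. The expected section list and keyword patterns are illustrative assumptions; a real tool would maintain a richer vocabulary.

```python
import re

# Section keywords a statement might be expected to contain; the
# exact list is an assumption for illustration, not a WSG requirement.
EXPECTED_SECTIONS = {
    "emissions": r"\b(carbon|emissions?|ghg)\b",
    "goals": r"\b(goals?|targets?|commitments?)\b",
    "compliance": r"\b(complian|legislation|regulation)\w*",
}

def missing_statement_sections(html):
    """Scan heading text (h1-h6) of a sustainability statement and
    report which expected sections appear to be missing."""
    headings = " ".join(re.findall(r"<h[1-6][^>]*>(.*?)</h[1-6]>", html,
                                   re.I | re.S)).lower()
    return sorted(name for name, pat in EXPECTED_SECTIONS.items()
                  if not re.search(pat, headings))
```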

    Examples

    1. The US Securities and Exchange Commission has a useful example regarding a code of ethics policy that leans towards ethical values. While it's focused on one industry, it could easily be adapted to other sectors.
    2. Electronics firm Nokia has a page dedicated to sustainability listing their commitments towards net-zero, their goals towards people and the planet, plus other important assets relating to PPP emissions.

    Tests

    Procedure

    1. Check that a publicly visible sustainability statement exists.
    2. Check that a Code of Ethics statement can be located.
    3. Check that the sustainability statement contains details on PPP reports.
    4. Check that the sustainability statement has concrete dates planned.
    5. Check that the sustainability statement details legal compliance.
    6. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  2. BSPM01-2: Provide Details of Any Beyond-the-Scope Sustainability Features

    Applicability

    This technique is Advisory to meet 5.1 Have an Ethical and Sustainability Product Strategy.

    Description

    Provide a mechanism within a sustainability statement to identify actions that a product or service may have undertaken that go beyond the WSGs to become sustainable. These may include work from other specifications, supplementary materials to the WSGs, or third-party tooling that has been created by the organization to improve sustainability.

    Machine testability for such features would include identifying references to the documentation within a sustainability statement. Additionally, if tooling itself is aware of anything beyond the scope of the WSGs that warrants inclusion for improving sustainability for people and the planet, this could also be identified within the tooling and calculated into the overall scoring metrics.

    Examples

    1. Third-party testing tools such as EcoGrader or Website Carbon will have their own implementations of the WSGs and therefore may choose to not only score differently but potentially extend the criteria being tested.
    2. Other specifications such as those from the W3C or RFCs from the IETF may choose to reference sustainability best practices within the scope of their own work, based on evidence and beyond the WSGs offerings.

    Tests

    Procedure

    1. Check for any third-party sustainability features beyond the WSG scope.
    2. Check that any such features are met in both guidelines and success criteria.
    3. Check that no conflicts are caused with existing WSG compliance.
    4. If conflicts do exist, prioritize the WSGs unless a higher classification of document exists (law, etc).

    Expected Results

    1. All checks above are true.
  3. BSPM01-5: Provide Interactive Instructions for a Product or Service

    Applicability

    This technique is Advisory to meet 5.1 Have an Ethical and Sustainability Product Strategy.

    Description

    Go beyond extensive documentation for a website or application and offer optional interactive tuition for your product or service. This can take the form of guided tours, in-app assistance, or even video tutorials. The important thing is that visitors can acquaint themselves with the environment you provide, reducing problematic friction. Learning how to use a product quickly can also reduce wasted hardware utilization.

    Unlike documentation, instructional material that is integrated within a product or service is harder to machine test (as it will be tightly merged into a codebase); however, identifying the settings that enable such features should be achievable (as input fields can be read), as should locating the resource if such instruction is provided on a dedicated subdomain.

    Examples

    1. Stonly provides a guided tour for their guided tour software! It might seem ironic, but the tutorial showcases how useful it can be using a mixture of videos, images, and text (along with features like text-to-speech).
    2. Animation software Pencil2D provides a collection of video guides that help people get to grips with the software. They demystify the subject for beginners and can help regular users refresh their memory as well.

    Tests

    Procedure

    1. Check if complex features could use additional contextual help.
    2. Check if interactive assistance or instruction should be offered.
    3. If additional aids would provide sustainability benefits, offer them.

    Expected Results

    1. All checks above are true.
  4. BSPM01-6: Include Reliable Evidence for Any Sustainability Claims

    Applicability

    This technique is Advisory to meet 5.1 Have an Ethical and Sustainability Product Strategy.

    Description

    Provide a mechanism for visitors to a website or application to validate the use of terms such as green, sustainable, or eco-friendly within the product or service. This includes cases where a website claims to be powered by renewable energy, or claims to be more sustainable than others based on testing tools that exist on the market.

    For machine testability, metrics data could be used to validate such claims if the research has been made publicly available. If a testing tool has been used to make the claim, this can be validated against the accuracy of that tool and its methodology. If it is based only on first-party information, scan for artifacts that may help verify sources, such as research, carbon.txt files, etc.
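    A first-pass claim scanner might look like this. The claim terms, evidence hints, and function name are illustrative assumptions; only the carbon.txt artifact is named in the technique itself.

```python
# Terms that suggest a sustainability claim, and hints that supporting
# evidence exists nearby; both lists are assumptions for illustration.
CLAIM_TERMS = ("green", "sustainable", "eco-friendly", "carbon neutral")
EVIDENCE_HINTS = ("carbon.txt", "report", "audit", "methodology")

def unverified_claims(page_text, artifact_paths):
    """Return claim terms found on the page when no evidence hint
    and no supporting artifact (e.g. /carbon.txt) can be located."""
    text = page_text.lower()
    has_artifact = any(p.endswith("carbon.txt") for p in artifact_paths)
    has_evidence = has_artifact or any(h in text for h in EVIDENCE_HINTS)
    found = [t for t in CLAIM_TERMS if t in text]
    return [] if has_evidence else found
```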

    Examples

    1. A website hires an independent third-party sustainability consultant to measure the carbon impact of its products and services. Upon completion, a report is produced citing evidence that verifies their status.

    Tests

    Procedure

    1. Check if sustainability-related keywords or claims are made.
    2. Check that any claims made are verified with supporting evidence.
    3. Check for artifacts that may trace sustainability such as carbon.txt files.

    Expected Results

    1. All checks above are true.
  5. BSPM02-1: Check That an Employee Is Assigned As the Sustainability Representative

    Applicability

    This technique is Advisory to meet 5.2 Assign a Sustainability Representative.

    Description

    Verify that the website or application service provider has an individual who is responsible for ensuring the sustainability of the product or service. While they may have other responsibilities within the business, or answer to those with more power or influence, it's still an important role that needs at least one officer.

    For machine testability, check the sustainability statement for contact details to verify that the individual responsible for managing such statements, coverage, and reporting is the lead on sustainability. If no details are available, there may be hints in staff pages for agencies or companies that can be scanned; otherwise, flag this as a potential failure point. In the case of individuals, this can be ignored as they will be responsible for all aspects of a project.

    Examples

    1. Having a Digital Sustainability Lead or Sustainability Expert who is in charge of meeting compliance targets (in the same way that you would have accessibility experts in the team) can showcase your skill set.

    Tests

    Procedure

    1. Check for a named employee and their contact details on the sustainability statement.

    Expected Results

    1. All checks above are true.
  6. BSPM03-3: Produce Awareness Raising Resources on Sustainability

    Applicability

    This technique is Advisory to meet 5.3 Raise Awareness and Inform.

    Description

    Provide a mechanism for visitors of a product or service to gain greater awareness not only of digital sustainability but of the website or application's journey to become more sustainable. This can involve content creation, use of social media, or other documentation that will engage their particular audience and showcase improvements over time.

    Awareness-raising and informing can be machine tested by using search terms (for example "sustainability", "PPP", or "carbon") and identifying the postings and the density of those occurrences. The more content that relates to relevant subject matter (and the more current it is), the more likely it is that campaigns to increase awareness are taking place internally and externally in that community.
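    The density check could be sketched as below. It simply measures the share of posts mentioning a sustainability term; the term list and function name are illustrative assumptions, and recency weighting is omitted for brevity.

```python
import re

def sustainability_density(posts, terms=("sustainability", "carbon", "ppp")):
    """Share of posts (blog entries, social postings) mentioning at
    least one sustainability term. Term list is an assumption."""
    if not posts:
        return 0.0
    pattern = re.compile("|".join(re.escape(t) for t in terms), re.I)
    hits = sum(1 for p in posts if pattern.search(p))
    return hits / len(posts)
```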

    Examples

    1. The BBC provides a huge wealth of information about sustainability from articles, videos, news stories, and more from its BBC Earth websites. This catalog of well-produced material is environmentally focused.

    Tests

    Procedure

    1. Check the product or service's social media accounts for sustainability hashtag postings.
    2. Check the product or service's blog for sustainability-related postings.
    3. Check the product or service's footer for a sustainability hub or resources area for visitors.

    Expected Results

    1. All checks above are true.
  7. BSPM05-2: Calculate the Environmental Impact of Your Competitors

    Applicability

    This technique is Advisory to meet 5.5 Estimate a Product or Service's Environmental Impact.

    Description

    Provide a sustainability benchmark of competitors to ensure that you maintain a regular schedule of improvements to your own work. This is something that is observed in many other aspects of business but is a great principle for having the edge over others who may want to lead the change on being environmentally friendly.

    It's important to consider any third-party service in isolation when analyzing services independent of your own. This means that you should run that product or service against the WSGs without consideration of your own results and only compare them once testing is complete. For machine testing, competitors could be provided by the user after the website or application has been analyzed and scanned, and those sites could then be tested and compared.

    Examples

    1. A manufacturer of bicycles analyzes how sustainable their website is compared to their competitors (knowing that they cannot optimize their manufacturing site any further), so they use the WSGs as a benchmark.
    2. Two businesses are about to merge but they do not know which website they should use to preserve their ethical reputation, so they run an analysis of both to identify things to take from each creation.

    Tests

    Procedure

    1. Check your own product or service against the WSGs for a baseline score.
    2. Check competitor websites against the WSGs to identify where they sit sustainably.
    3. Check again regularly over time to monitor potential changes that may occur.

    Expected Results

    1. All checks above are true.
  8. BSPM06-1: Have a Clearly Defined Set of Sustainability Goals

    Applicability

    This technique is Advisory to meet 5.6 Define Clear Organizational Sustainability Goals and Metrics.

    Description

    Provide a driving force for sustainable change, laid out in the sustainability statement. This can contain as little or as much information as you wish, but it should aim to contain realistic timescales and targets aligned with roadmapped features, so users can realistically identify whether goals are being met over the lifespan of the project.

    Machine testing such criteria should at least start with the existence of the sustainability statement, and the existence of a section on sustainability goals. If these exist then heuristic testing can identify key passages within the text relating to timeframes and potential mapped features (if these are listed), or if not, then at least bullet-pointing the number of goals being provided.
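    The heuristic pass over a goals section could be sketched as below: it pulls explicit target years and counts bullet-pointed goals. The function name, the year pattern, and the bullet markers are illustrative assumptions.

```python
import re

def extract_goal_timeframes(goal_text):
    """Pull explicit target years (e.g. 'by 2030') from a goals
    section and count its bullet-pointed goals. Heuristic sketch."""
    years = sorted({int(y) for y in re.findall(r"\b(20[2-9]\d)\b", goal_text)})
    # Count lines beginning with a list marker (-, *, or a bullet).
    bullets = len(re.findall(r"(?m)^\s*[-*\u2022]\s+", goal_text))
    return {"target_years": years, "goal_count": bullets}
```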

    Examples

    1. Sportswear firm Adidas has a page that showcases their sustainability goals for the coming year in short, well-defined paragraphs, along with links to previous reports and other sustainability-related information.
    2. Computing giant Dell has a dedicated page to sustainability showcasing its commitments towards PPP targets and providing specific year-based deadlines to achieve them along with report links and their related blog.
    3. United States mega-brand Walmart has an extremely well-crafted sustainability hub and sub-section of their corporate website that provides interactive information on their challenge to become sustainable.

    Tests

    Procedure

    1. Check for the existence of a publicly visible sustainability statement.
    2. Check that a journey or path is provided to introduce the subject.
    3. Check that current carbon emissions data is provided for context.
    4. Check that goals over a 1, 2, 5, 10, and beyond year plan are defined.
    5. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  9. BSPM07-1: Verify and Showcase Any Third-Party Sustainability Certifications

    Applicability

    This technique is Advisory to meet 5.7 Verify Your Efforts Using Established Third-Party Business Certifications.

    Description

    Provide a mechanism for product or service owners who have achieved certifications either as an individual or as a business in a related sustainability field (with a recognized achievement or certification) to have that accounted for in the calculation of their sustainability journey. Any such certifications should be included within a statement.

    For machine testability, this would require a compiled list of recognized sustainability certifications and achievements, which could then be identified by embedded links or images for each scheme. Testing would also need to verify members' achievements. This can be used in weighted scoring to give additional marks for going beyond the WSG criteria.

    Examples

    1. Organizations that have chosen to commit to becoming a B Corporation will have taken steps to become more sustainable and will be required to continue making sustainable changes to retain their status.

    Tests

    Procedure

    1. Check that any certifications are marked up so they can be machine-read.
    2. Check that any certifications allow for validation to prove the authenticity of ownership.
    3. Check that all verified certifications are weighted into the sustainability scoring process.

    Expected Results

    1. All checks above are true.
  10. BSPM09-1: Check for Any Included Disclosure Reporting Policies

    Applicability

    This technique is Advisory to meet 5.9 Support Mandatory Disclosures and Reporting.

    Description

    Provide a simple mechanism to identify what reporting scheme a business may operate under (such as GRI), or if they have any available guidance regarding disclosing and reporting sustainability outside of their sustainability statement. This may come under some publicly available policy documents, or it may require internal access to locate the data.

    The important step in this process is to scan through the pages of the site to identify every policy page and flag any that relate to disclosures or reporting (and mention sustainability). These are key references that, without good information architecture, could be hard for visitors to find. If such policies exist, ensure they are referenced within the sustainability statement.
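    The scan could be sketched as below, assuming page text has already been crawled into a URL-to-text mapping; the function name and keyword stems are illustrative assumptions.

```python
def disclosure_policy_pages(pages):
    """pages: mapping of URL -> page text. Flag policy pages that
    mention disclosures or reporting alongside sustainability."""
    flagged = []
    for url, text in pages.items():
        t = text.lower()
        # Keyword stems catch 'disclosure(s)', 'disclosing',
        # 'reporting', 'sustainability', 'sustainable', etc.
        if ("disclos" in t or "reporting" in t) and "sustainab" in t:
            flagged.append(url)
    return sorted(flagged)
```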

    Examples

    1. The Global Reporting Initiative is a recognized standard that individuals may choose to use because there is no cost attached to access. It's also widely respected and can be used for conformance with legislation.
    2. The International Organization for Standardization has provided several standards that can help with disclosure reporting and alignment with the United Nations Sustainable Development Goals.

    Tests

    Procedure

    1. Check that the business operates under a disclosure reporting policy.

    Expected Results

    1. All checks above are true.
  11. BSPM09-2: Include Public Facing Carbon Reports on an Annual Basis

    Applicability

    This technique is Advisory to meet 5.9 Support Mandatory Disclosures and Reporting.

    Description

    Ensure that policy decisions match up to regulatory and compliance targets for businesses and individuals aiming to become sustainable. By ensuring that you not only report your existing carbon emissions but also how you plan to reduce any remaining emissions (and be transparent with the public on this matter), you encourage trust in your brand.

    For machine testability, these reports should be linked to from within the sustainability statement on your public-facing website, ideally in an open format such as HTML (otherwise, an accessible PDF should be used). Remember that such documents should be regularly maintained as new information important to your sustainability journey comes to light.
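    Locating and classifying such links could be sketched as below. The link-text keywords and the function name are illustrative assumptions, and a regex over anchors is only a stand-in for a proper HTML parser.

```python
import re

def carbon_report_links(statement_html):
    """Find anchors in a sustainability statement whose text mentions
    a carbon report, classifying the format (HTML preferred over PDF)."""
    results = []
    for href, text in re.findall(r'<a\s[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
                                 statement_html, re.I | re.S):
        if re.search(r"carbon|emissions?\s+report", text, re.I):
            fmt = "pdf" if href.lower().endswith(".pdf") else "open"
            results.append({"href": href, "format": fmt})
    return results
```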

    Examples

    1. Hilton offers an interesting infographic approach to carbon reporting, taking the essential information about how they have made reductions and laying it out in a PPP format (their evidence is available in table format).

    Tests

    Procedure

    1. Check that a public-facing carbon report is published annually.
    2. Check that an open format is provided or, as a fallback, an accessible PDF.
    3. Check that carbon reports are linked to from within a sustainability statement.
    4. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  12. BSPM09-3: Check for Any Compliance Related Disclosure Documentation

    Applicability

    This technique is Advisory to meet 5.9 Support Mandatory Disclosures and Reporting.

    Description

    Encourage transparency and visibility around the shift to sustainability for products and services. By actively seeking reports that have to be submitted for compliance reasons (and verifying the data if using an open standard), we can more easily monitor how well a business or individual is meeting their obligations and reaching any targets.

    Because certain content within disclosure documentation may be confidential, internal access for tooling may be required if the organization in question is unwilling to provide a publicly visible anonymized version that will still offer the same useful public data (for those interested). If the data is available, this can be used within monitoring tools to track compliance checkpoints over time.

    Examples

    1. A product that creates CSRD reports using a wizard-style interface submits that information as part of a package of data to a service that tests for sustainability and this gets added to the evidence gathering.

    Tests

    Procedure

    1. Check if a CSRD report has been provided; if so, evaluate the data for sustainability.

    Expected Results

    1. All checks above are true.
  13. BSPM11-1: Provide a Product or Service Maintenance Agreement

    Applicability

    This technique is Advisory to meet 5.11 Follow a Product Management and Maintenance Strategy.

    Description

    Provide a location where visitors and customers of your website or application can easily identify to what extent you will maintain and continue to provide updates for the product or service you offer. This agreement should include details not just for the product you offer (paid or otherwise), but for the website hosting the materials, and its assets.

    For machine testability, first verify that the agreement exists and that it can be easily understood (many policy documents are written in legalese, so having a plain English version should be a priority). Testing tools can also identify key passages that could be highlighted as requirements for a visitor's particular needs (such as timeframes for coverage or exclusionary features).

    Examples

    1. PandaDoc has a simple software maintenance agreement template that could be used in cases where money is potentially being exchanged. It can be customized which could also be useful for importing into projects.
    2. HighGear has a working SaaS Maintenance and Support Agreement that provides details regarding the ongoing support individuals will receive for the service and what should and shouldn't be expected from the product.

    Tests

    Procedure

    1. Check that a product or service maintenance agreement exists.
    2. Check that the content provides coverage for the product or service.
    3. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  14. BSPM12-3: Identify the Iteration Record of the Website or Application

    Applicability

    This technique is Advisory to meet 5.12 Implement Continuous Improvement Procedures.

    Description

    Verify that the product or service has a proven track record of iterating over its lifetime. Naturally, the quality of the iteration (whether those changes have been improvements) will be more difficult to quantify; however, the fact that a product or service is active and being maintained is usually a good indicator of sustainable development.

    Machine testing the iteration record is easier with open source projects, as a timestamped record is usually available through the repository provider, whose history you can analyze for impactful issues. If such access is not possible, archival or caching tools for websites can help provide a useful record of iteration over time and may (using a diff tool) showcase iterative alterations.
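    The freshness check from the procedure (an update within roughly 12 months) could be sketched as below, assuming release or commit dates have already been extracted from the repository or an archive; the function name and 365-day cutoff are illustrative assumptions.

```python
from datetime import date

def iteration_record(update_dates, today):
    """update_dates: release/commit dates for the project. Report
    whether the most recent update falls within ~12 months (365 days)."""
    if not update_dates:
        return {"maintained": False, "days_since_update": None}
    latest = max(update_dates)
    gap = (today - latest).days
    return {"maintained": gap <= 365, "days_since_update": gap}
```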

    Examples

    1. The Wayback Machine from the Internet Archive allows you to locate the historical archives of many pages and websites that have long since disappeared from the web (though they may not all function accurately).

    Tests

    Procedure

    1. Check if a website or application has a history of being updated.
    2. Check that the frequency is less than 12 months since the last update.
    3. Check (if access to a repository is provided) that more minor than major updates occur.
    4. Check (if access to a repository is provided) that structure-over-style updates occur.

    Expected Results

    1. All checks above are true.
  15. BSPM12-5: Clearly Label Changes Using a Release Notes Feature

    Applicability

    This technique is Advisory to meet 5.12 Implement Continuous Improvement Procedures.

    Description

    Provide a mechanism for users or visitors of a website or application to firstly identify when changes have taken place on a product or service, and secondly learn about the new features in more detail (if provided), so that they can quickly adjust to changes made for sustainability or other reasons (reducing confusion).

    Each product or service should have an associated set of release notes that is regularly maintained; version numbers within those notes should match the version numbers of the available releases of the product or service. In addition, release notes should be detailed and sectioned by criteria such as additions, removals, changes, and fixes - not just a single bullet that offers little context. Links and images with more detail can also be helpful.
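    Checking these properties could be sketched as below, assuming a "Keep a Changelog"-style document ("## [x.y.z]" headings with "### Added/Changed/Fixed/Removed" sections); the function name and heading patterns are assumptions about that layout.

```python
import re

def check_release_notes(changelog, product_version):
    """Verify the newest changelog entry matches the shipped version
    and that changes are sectioned by type, not one flat bullet."""
    versions = re.findall(r"^## \[?(\d+\.\d+\.\d+)\]?", changelog, re.M)
    sections = set(re.findall(r"^### (Added|Changed|Fixed|Removed)",
                              changelog, re.M))
    return {
        "version_matches": bool(versions) and versions[0] == product_version,
        "sections": sorted(sections),
    }
```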

    Examples

    1. Providing a changelog document using established changelog semantics, and ensuring that it remains human-readable, can help tooling parse versioning data successfully, using it to highlight product changes.
    2. GitHub provides release notes with versioning alongside open source projects. Our own Web Sustainability Guidelines have a version history (releases) page that offers details on additions, updates, and fixes.

    Tests

    Procedure

    1. Check that a changelog has been provided for each product or service.
    2. Check that modifications are categorized by the type of change.
    3. Check that significant changes such as new features are well documented externally.
    4. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  16. BSPM14-1: Align Product or Service Compliance With the SDGs

    Applicability

    This technique is Advisory to meet 5.14 Establish if a Digital Product or Service Is Necessary.

    Description

    Provide additional compliance criteria for testing tools to ensure that sustainability goals are being met. As such, testing tools can analyze websites to vet each product or service against the United Nations Sustainable Development Goals and decide if enough evidence exists within WSG compliance (and other criteria) to pass or fail applicable categories.

    Because the SDGs may have scope beyond the WSGs, both individuals and testing tools must develop criteria that meet the strict requirements of each category to be considered a compliance pass or fail. As such, it may require additional input from the tester, internal access, or additional measurements and evidence gathering undertaken (with scores adjusted).

    Examples

    1. The Royal Academy for Engineering showcases each of the SDGs as a map and upon clicking each one you get taken to a dedicated page with information provided about how engineers are working to solve the issue.

    Tests

    Procedure

    1. Check if the product aligns with the SDGs and if not, identify ways to address shortcomings.

    Expected Results

    1. All checks above are true.
  17. BSPM16-1: Provide a Publicly Visible and Transparent Supplier Policy

    Applicability

    This technique is Advisory to meet 5.16 Create a Supplier Standards of Practice.

    Description

    Increase transparency within an organization and ensure that visitors and customers can see how suppliers and potential suppliers in the supply chain are vetted against the sustainability guidance set by the organization. This can relate to the things purchased, or even the business and the way it operates.

    Machine testing a supplier policy first requires detecting that the policy exists in a public area of a website or application. Once detected, the list of suppliers can be gathered and, unless a provider has already been validated by a verified third party, conditions for testing can be identified and listed by the tool. Where no such validation exists, consider testing that supplier as a first party against the WSGs (if a digital service is offered); otherwise, test against other compliance targets.
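
    The detection and queueing steps above might be sketched like this. The URL pattern, supplier names, and the pre-verified set are illustrative assumptions:

```python
import re

# Hypothetical sketch: find a public supplier-policy page in a site's link
# inventory, then queue suppliers that still need first-party testing.
POLICY_PATTERN = re.compile(r"supplier[-_ ]?(policy|standards|code)", re.I)

def find_supplier_policy(links):
    """Return the first link that looks like a public supplier policy."""
    return next((url for url in links if POLICY_PATTERN.search(url)), None)

def suppliers_needing_tests(suppliers, verified_third_party):
    """Suppliers not already validated by a verified third party should be
    queued for first-party testing against the WSGs."""
    return [s for s in suppliers if s not in verified_third_party]

links = ["https://example.org/about", "https://example.org/supplier-policy"]
policy = find_supplier_policy(links)
queue = suppliers_needing_tests(["HostCo", "PaperCo"],
                                verified_third_party={"PaperCo"})
```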

    Examples

    1. Mobile Phone company Telefonica has a Supply Chain Sustainability Policy document (PDF) that outlines its ethical standpoint regarding people and the planet as defined by the SDGs and how it does trade.

    Tests

    Procedure

    1. Check that a publicly visible supplier policy exists on the website.
    2. Check that all three aspects of PPP are covered.
    3. Check each third-party supplier against the WSGs for sustainability.
    4. If a supplier has a poor impact record post-testing, reconsider usage.
    5. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  18. BSPM16-3: Publicly List the Sustainability Impact of Any Partnerships

    Applicability

    This technique is Advisory to meet 5.16 Create a Supplier Standards of Practice.

    Description

    Provide a mechanism where members of the public who view a supplier policy document within your website can easily view the sustainability impact that working with each supplier will have on your product or service. This can be measured in many ways, but a detailed report of any benefits, issues, and solutions over time is helpful.

    Identifying that a list of partners exists is key. If such a list exists, attempt to verify the impact of those services from information on the website itself. If the information is not found, flag the issue and attempt to test the partners individually against the WSGs (in a similar fashion to competitor analysis) to identify problematic relationships.

    Examples

    1. A business with an impeccable sustainability record is offered the opportunity to partner with an oil and gas company for financial gain. Because of the conflict this would create, it turns the offer down.

    Tests

    Procedure

    1. Check that all partnerships of the product or service are listed online.
    2. Check those partners against public lists of known greenwashers.
    3. Check those partners against the WSGs for sustainability conformance.
    4. If a partner has a poor impact record post-testing, reconsider usage.

    Expected Results

    1. All checks above are true.
  19. BSPM17-1: Ensure That the Business Is a Living Wage Employer

    Applicability

    This technique is Advisory to meet 5.17 Share Economic Benefits.

    Description

    Provide a mechanism to ensure that all employees are paid a living wage. The mechanism for including this within a product or service for a business could be as simple as a living wage badge in their sustainability statement (or website footer), or for additional points, publishing transparent pay grades across the business on the website.

    Heuristic testing, by examining the source code of a document on employee benefits (pay grades) on a website, can identify whether a business is a living wage employer, as can comparing any existing job openings that list pay grades against living wage rates. If these are not available, a living wage badge may be an indicator, but evidence will need to back the claim.
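
    A minimal sketch of that heuristic, assuming a simple "£N.NN per hour" wage format in listings; the regex and the living-wage rate are illustrative assumptions, and a real tool would use the current published rates:

```python
import re

def extract_hourly_rate(listing_text):
    """Pull the first pounds-per-hour figure from a job listing."""
    match = re.search(r"£\s*(\d+(?:\.\d+)?)\s*(?:per|/)\s*hour",
                      listing_text, re.I)
    return float(match.group(1)) if match else None

def meets_living_wage(listing_text, living_wage_rate):
    """True / False, or None when no wage was found (flag for manual review)."""
    rate = extract_hourly_rate(listing_text)
    return None if rate is None else rate >= living_wage_rate

ok = meets_living_wage("Warehouse role, £12.60 per hour", living_wage_rate=12.00)
unknown = meets_living_wage("Competitive salary!", living_wage_rate=12.00)
```

    Returning `None` rather than a hard fail mirrors the guidance above: a missing wage is a potential issue to flag, not automatic non-compliance.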

    Examples

    1. The Living Wage website provides a search service to allow individuals to find businesses and employers who provide a Living Wage. This can be broken down by sector, region, industry, and even service type.

    Tests

    Procedure

    1. Check for a listing in the living wage directory or a verifiable declaration.
    2. Check the jobs page of a product or service to identify listed wages.
    3. Check those wages against the Living Wage current rates.
    4. If the wage is not shown or doesn't match, flag a potential issue.

    Expected Results

    1. All checks above are true.
  20. BSPM17-3: Ensure That the Business Provides Benefits for Employees

    Applicability

    This technique is Advisory to meet 5.17 Share Economic Benefits.

    Description

    Verify that a business offers a variety of benefits to its employees to make their working life better. This has coverage in the people aspect of PPP and therefore is important for sustainability. These benefits do not have to be limited to in-work additions but could be anything that employees utilize and find beneficial in or out of the workplace.

    Machine testing for such benefits can be difficult as there isn't a universal list of what can be offered; however, there is commonality between the types of things that workplaces often provide. As such, analyzing words in job openings or culture / workplace pages on websites can identify these benefits and (potentially) weigh their value based on their impact (sustainable or otherwise).
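
    The keyword analysis described above could be sketched as follows. The benefit vocabulary and weights are invented for illustration; real tooling would maintain a curated, regularly reviewed list:

```python
# Hypothetical benefit terms with illustrative weights.
BENEFIT_WEIGHTS = {
    "pension": 3,
    "remote working": 2,
    "cycle to work": 2,
    "private healthcare": 2,
    "free snacks": 1,
}

def score_benefits(page_text):
    """Find known benefit phrases in careers-page text and sum their weights."""
    text = page_text.lower()
    found = [b for b in BENEFIT_WEIGHTS if b in text]
    return found, sum(BENEFIT_WEIGHTS[b] for b in found)

found, score = score_benefits(
    "We offer a generous pension, remote working, and free snacks."
)
```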

    Examples

    1. Fat Beehive is a great example of a careers / life at work page done well that showcases the benefits that will be gained from working within the organization and also the benefits that will be gained outside of payment.
    2. Rocketmakers provide information about their values (the benefits they hope to get from workers) as well as the benefits they offer (outside of the wage). It's a visually striking layout as well which helps sell the appeal.

    Tests

    Procedure

    1. Check the work, culture, career, and jobs pages for potential benefits.
    2. Calculate values for such benefits based on their potential usefulness.
    3. Check if the added value of such benefits is likely to benefit workers.

    Expected Results

    1. All checks above are true.
  21. BSPM19-2: Provide a Publicly Visible Accessibility Statement

    Applicability

    This technique is Advisory to meet 5.19 Use Justice, Equity, Diversity, Inclusion (JEDI) Practices.

    Description

    Ensure that visitors are made aware of (or able to find out) the lengths that a product or service has gone to in ensuring that individuals with accessibility needs have been included in its design and development. It should also aim to describe any limitations of the product or service and how to submit a support request.

    For machine testability, attempt to first establish that the accessibility statement exists and then identify the headings within the document to establish its content. From there, heuristic testing of the source code can work out what accessible features have been provided, and what potential limitations may exist for inclusive design and accessibility needs.
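
    The heading check described above can be sketched with the standard library's HTML parser. The required section names are assumptions drawn from the procedure below (compatibility, features, known issues), not a fixed standard:

```python
from html.parser import HTMLParser

class HeadingCollector(HTMLParser):
    """Collect the text of h1-h3 headings from an accessibility statement."""
    def __init__(self):
        super().__init__()
        self.headings, self._in_heading = [], False
    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._in_heading = True
    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self._in_heading = False
    def handle_data(self, data):
        if self._in_heading:
            self.headings.append(data.strip().lower())

def missing_sections(html, required=("compatibility", "features", "known issues")):
    """List required section names not matched by any heading."""
    parser = HeadingCollector()
    parser.feed(html)
    return [r for r in required if not any(r in h for h in parser.headings)]

html = "<h1>Accessibility statement</h1><h2>Compatibility</h2><h2>Features</h2>"
gaps = missing_sections(html)
```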

    Examples

    1. United Kingdom charity Scope provides a comprehensive website accessibility statement that showcases their commitment to accessibility along with adaptations made and issues they are aware of to resolve.
    2. The European Environment Agency provides a simple but effective accessibility statement that underlines compatibility issues and levels of support with existing web standards that visitors need to be aware of.

    Tests

    Procedure

    1. Check for a publicly visible accessibility statement on the website.
    2. Check for a section detailing accessibility compatibility.
    3. Check for a section on accessibility features built-in.
    4. Check for a section listing support and known issues.
    5. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  22. BSPM20-1: Provide Links to the Privacy Policy and Relevant Legislation

    Applicability

    This technique is Advisory to meet 5.20 Promote Responsible Data Practices.

    Description

    Ensure that visitors can access a publicly visible privacy policy and that the content is human-readable, containing references to legislation that the website or application meets for compliance purposes (and how this is achieved). As privacy falls under the people aspect of PPP conformance, this has sustainability implications and should be followed.

    If the product or service is accessible from, or operates within, multiple jurisdictions, testers should take this into account when identifying relevant legislation that should be included within the policy document. The document should be visible within the footer of a website, or in a section alongside other relevant policy files and documents such as the accessibility and sustainability statements.

    Examples

    1. DK Oldies has a simple privacy policy that breaks everything down into categories and bullet points for each of the terms and conditions, making the information easier to comprehend for those who wish to read everything.
    2. Sony Pictures has a more comprehensive privacy policy that uses bullet points to break down information into key facts but offers more technical language and detail that could be problematic for some people.

    Tests

    Procedure

    1. Check that a publicly visible privacy policy exists on the website.
    2. Check that the privacy policy includes sections on required legislation.
    3. If in Germany, check that an Impressum document is available.
    4. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  23. BSPM21-1: Provide a Website Archive Containing Outdated or Old Versions

    Applicability

    This technique is Advisory to meet 5.21 Implement Appropriate Data Management Procedures.

    Description

    Provide a mechanism for retrieving old information that may have expired in terms of currency, but may still have a practical use as a reference for individuals wishing to identify information that had prior use or relevance for a product or service. This technique is most useful when the information is content-focused or showcasing iterative change.

    For machine testability, several criteria should be taken into account. Firstly, the outdated material should be isolated on either a subdomain or a subfolder to avoid clashing with existing content. This material should also be reduced to a minimalist form (content only) to reduce sustainability impacts. Furthermore, all links should be checked to ensure that they resolve to a working reference.
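
    Two of those criteria, isolation and link health, might be sketched as below. The host name, URL shapes, and the pre-fetched status map are illustrative assumptions (a real tool would issue the HTTP requests itself):

```python
from urllib.parse import urlparse

def is_isolated(archive_url, main_host="example.org"):
    """True if the archive lives on a subdomain or an /archive/ subfolder."""
    parts = urlparse(archive_url)
    return parts.hostname != main_host or parts.path.startswith("/archive/")

def broken_links(link_statuses):
    """Given {url: http_status}, list links that do not resolve."""
    return [url for url, status in link_statuses.items() if status >= 400]

isolated = is_isolated("https://archive.example.org/1998/index.html")
broken = broken_links({"https://archive.example.org/a": 200,
                       "https://archive.example.org/b": 404})
```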

    Examples

    1. A magazine decides to keep issues spanning 12 months (one year) on its website to maintain currency; content older than this is archived on a slimmer website with click-to-reveal media available.

    Tests

    Procedure

    1. Check if content exists beyond 5-10 years on the website or application.
    2. Check if the mentioned content is being accessed rarely rather than regularly.
    3. Check (if both of the above are true) that a website archive holds this material.
    4. Check that the mentioned archive is slimmed down to further reduce sustainability impacts.

    Expected Results

    1. All checks above are true.
  24. BSPM21-2: Enable Users To Manage Their Accounts and Data

    Applicability

    This technique is Advisory to meet 5.21 Implement Appropriate Data Management Procedures.

    Description

    Offer a mechanism that lets users of a product or service manage any information (data) or accounts (subscription or otherwise) they may hold. Best practices with such systems include allowing the deletion of accounts and / or data without human intervention, and providing single steps (with confirmation) to achieve this.

    Machine testing such features will undoubtedly require internal access to a product or service; however, if the ability to create an account can be automated, then the source code within the account management system can be verified by tooling. Where internal access is required, gain permission, then check for the ability to delete accounts (and verify that this action works); if it does, pass the check.

    Examples

    1. A Software as a Service provider has, within the account manager, a profile section allowing customers to click a big red button to "delete account and data". Upon hitting confirm, this erases their details from the servers.

    Tests

    Procedure

    1. Check that users can delete their accounts and data without human intervention.

    Expected Results

    1. All checks above are true.
  25. BSPM22-1: Provide a Publicly Facing Policy Around Emerging Technologies

    Applicability

    This technique is Advisory to meet 5.22 Promote and Implement Responsible Emerging Technology Practices.

    Description

    Offer a document that describes in detail any mechanisms that a product or service has in place regarding emerging technologies such as artificial intelligence, cryptocurrencies, or other impactful (sustainably speaking) aspects of the web. If none of these features are likely to be used by you or your suppliers, state so; otherwise, provide details.

    For machine testability, it's important to ensure that such a document exists, as emerging technologies have an increasingly significant effect on the sustainability of the Internet. As such, if no statement exists, flag this concern. If one does exist, check what technologies are being utilized, and to what extent, and weigh these in subsequent calculations (raising issues as appropriate).
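
    The detection step above could be sketched as a simple term scan. The terms and weights are illustrative assumptions; actual impact weighting would need real measurement data:

```python
# Hypothetical emerging-technology terms with illustrative impact weights.
EMERGING_TERMS = {"artificial intelligence": 3, "cryptocurrency": 3,
                  "machine learning": 2, "internet of things": 1}

def assess_policy(policy_text):
    """Return the emerging technologies mentioned and a combined weight."""
    text = policy_text.lower()
    used = {t: w for t, w in EMERGING_TERMS.items() if t in text}
    return used, sum(used.values())

used, weight = assess_policy(
    "We use artificial intelligence for screening and Internet of Things sensors."
)
```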

    Examples

    1. A service that provides customers with the ability to remotely monitor and control appliances in their homes utilizes the Internet of Things. In their policy document, they outline the benefits and risks this poses to sustainability.
    2. A SaaS product uses Artificial Intelligence to screen customers for health issues before seeing a medical professional. The policy covers impact issues such as risks of discrimination, emissions, and other PPP factors.

    Tests

    Procedure

    1. Check that a publicly facing policy exists around emerging technology usage.
    2. Check (if such technology is used) to what extent and how much of an impact this is causing.
    3. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  26. BSPM23-1: Verify All Partnerships and Sponsors Against the WSGs

    Applicability

    This technique is Advisory to meet 5.23 Include Responsible Financial Policies.

    Description

    Ensure that if an individual or organization is being endorsed or financed by an external group or organization, the influence that the group may provide is sustainably minded (and will not divert the product or service away from such activities). Ensuring all endorsements are sustainable can also help service providers avoid claims of greenwashing.

    Machine testing such variables will first require all partnerships and sponsors to be correctly labeled as such. If they are labeled within the product or service, then the tester can test each of these services against the WSGs as a first party (an isolated service) and verify how sustainable they are in comparison. If they score poorly, a warning of conflict-of-interest issues can be provided.

    Examples

    1. In 2024, The Science Museum (London) decided to run a gallery promoting sustainability; however, its sponsorship by oil and gas giants caused reputational damage and greenwashing accusations.

    Tests

    Procedure

    1. Check that all partnerships and sponsors are publicly listed on the website.
    2. Check that all listings are correctly marked up so they can be machine-read.
    3. Check that all partnerships and sponsors are vetted against public lists of known greenwashers.
    4. Check that all partnerships and sponsors are tested against the WSGs for sustainability.
    5. If a partner or sponsor has a poor impact record post-testing, reconsider usage.

    Expected Results

    1. All checks above are true.
  27. BSPM24-1: Verify and Showcase Any Third-Party Sustainability Projects

    Applicability

    This technique is Advisory to meet 5.24 Include Organizational Philanthropy Policies.

    Description

    Identify any third-party projects connected with the product or service that can be verified as sustainability causes or schemes of value, and that may either benefit the business on its sustainability journey or directly lead to improvements for people or the planet. Schemes can include reinvesting in the ecosystem or community, or reducing emissions.

    For machine testability, this would require a compiled list of schemes recognized as sustainable, which could then be identified by embedded links such as images for that scheme. Testing could also check schemes that support verifying their members, as required. This could be used in weighted scoring to give additional marks for going beyond our guidelines.

    Examples

    1. 1% for the Planet is a certified scheme that asks businesses involved to commit one percent of their earnings towards planetary projects. As a well-established scheme, it has generated a lot of income for environmental causes.
    2. Krystal hosting showcases a great number of projects that they are actively involved with. This ranges from financial investment to carbon offsetting and climate / sustainability commitment memberships.

    Tests

    Procedure

    1. Check that all third-party projects are publicly listed on the website.
    2. Check that all listings are correctly marked up so they can be machine-read.
    3. Check that third-party projects are weighted into the sustainability scoring process.

    Expected Results

    1. All checks above are true.
  28. BSPM27-1: Provide a Sustainability and Environmental Budget

    Applicability

    This technique is Advisory to meet 5.27 Define Performance and Environmental Budgets.

    Description

    Provide a clearly defined set of targets to achieve when working either to make an existing project more sustainable or to create a new, sustainable website or application. Just as with performance budgets, you can define the variables that are most achievable and iterate, with more variables (and improvements) included over a project's lifecycle.

    For machine testability, the budget (often provided in a format like JSON) will either be required to be submitted to the testing agent or be declared in some other manner to ensure that any results can be compared against the budget for alignment. If the budget is met, a pass can be given, if not, a warning can be issued with recommendations where potential improvements can be made.
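
    The comparison step described above might be sketched as follows. The budget labels ("transfer_kb", "co2_g") and their thresholds are hypothetical; a real testing agent would define its own schema for submitted budgets:

```python
import json

def check_budget(budget_json, measured):
    """Compare measured values against a submitted JSON budget.
    Returns (passed, warnings) so a tool can issue recommendations on failure."""
    budget = json.loads(budget_json)
    warnings = [f"{key}: measured {measured[key]} exceeds budget {limit}"
                for key, limit in budget.items()
                if measured.get(key, 0) > limit]
    return (not warnings, warnings)

budget = '{"transfer_kb": 500, "co2_g": 0.8}'
passed, warnings = check_budget(budget, {"transfer_kb": 620, "co2_g": 0.5})
```

    The same shape applies to a performance budget (as in BSPM27-2); only the labels and measured values differ.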

    Examples

    1. A video streaming service wishes to build a sustainability budget based on the WSGs, so it defines KPIs around each success criterion with a defined numerical value that must be achieved during machine testing stages.

    Tests

    Procedure

    1. Check that any budget information provided by the visitor is in a valid format.
    2. Check to see if the visitor has submitted a JSON budget with predefined labels and values.
    3. Check that the WSG conformance (against success criteria) matches or is lower than the budget.

    Expected Results

    1. All checks above are true.
  29. BSPM27-2: Provide a Performance Budget for Your Product or Service

    Applicability

    This technique is Advisory to meet 5.27 Define Performance and Environmental Budgets.

    Description

    Provide a clearly defined set of targets to achieve when working either to make an existing project more performant or to create a new website or application that is fast and optimized. As with a sustainability budget, you should define the variables that are most relevant to your project and consider targets that will make a sustainable difference to your visitors.

    For machine testability, the budget (often provided in a format like JSON) will either be required to be submitted to the testing agent or be declared in some other manner to ensure that any results can be compared against the budget for alignment. If the budget is met, a pass can be given, if not, a warning can be issued with recommendations where potential improvements can be made.

    Examples

    1. Performance Budget allows you to set targets for the web performance of your product or service based on either a series of assets or Core Web Vitals. You can import these defined targets into Google Lighthouse.

    Tests

    Procedure

    1. Check that any budget information provided by the visitor is in a valid format.
    2. Check to see if the visitor has submitted a JSON budget with predefined labels and values.
    3. Check that the performance criteria match or are lower than the budget.

    Expected Results

    1. All checks above are true.
  30. BSPM27-4: Benchmark the Improvement in Revised Budget Achievements

    Applicability

    This technique is Advisory to meet 5.27 Define Performance and Environmental Budgets.

    Description

    Provide a mechanism for service providers to monitor the sustainability or performance improvements that occur over time when revising their budgets to meet stricter targets after they meet existing ones. This will require constant monitoring of existing products or services to ensure that goals are being met and that budgets are improving for the better.

    As with all budget-related guidance, the budget (often provided in a format like JSON) will either be required to be submitted to the testing agent or be declared in some other manner so that previous and current targets can be reviewed over time. In addition, monitoring tools should note when targets are reached and indicate the potential to achieve new goals to better their work.
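
    The revision check above can be sketched simply: a new budget should set targets equal to or tighter than the previous one for every shared label. The labels are hypothetical:

```python
def is_tightened(previous, current):
    """True if every target carried over into the new budget is equal or
    smaller; a label missing from the new budget counts as loosened."""
    return all(current.get(k, float("inf")) <= v for k, v in previous.items())

prev = {"transfer_kb": 500, "requests": 40}
good = is_tightened(prev, {"transfer_kb": 450, "requests": 40})
bad = is_tightened(prev, {"transfer_kb": 550, "requests": 40})
```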

    Examples

    1. An application provider decides to redesign its product after user testing indicates that doing so would significantly improve the sustainability of the service. Post-redesign, the budget is met, so new targets are chosen.

    Tests

    Procedure

    1. Check that any budget information provided by the visitor is in a valid format.
    2. Check to see if the visitor has submitted a JSON budget with predefined labels and values.
    3. Check the submitted information against previously submitted budgets.
    4. Check that the new budgets set targets equal to or tighter (smaller) than before.
    5. Check the website or application's budget analysis data against previous tests.
    6. Check that the website or application meets budget requirements regularly.

    Expected Results

    1. All checks above are true.
  31. BSPM28-1: Produce an Open Source Policy and Verify Licensing

    Applicability

    This technique is Advisory to meet 5.28 Use Open Source Tools.

    Description

    Consider the place that open source has in a product or service. If open source (for example) is used within code libraries such as frameworks, these should be identified and the license type should be verified to ensure compliance occurs (for instance, attribution). If the business in question is supportive of open source, this should also be noted publicly.

    Identifying an open source license should be straightforward, especially if the product or service utilizes a public repository system like GitHub. In addition, source code can be scanned for required attribution references and if an open source policy exists within either the license agreement on the website or in its own document, this can be highlighted as working towards compliance.
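
    Parts of that scan might look like the sketch below: checking for a conventional license file and collecting SPDX-style identifiers from source text. The file names and SPDX short-form convention are common practice, not a WSG requirement:

```python
def has_license_file(repo_files):
    """Look for a conventional license file in a repository file listing."""
    return any(name.upper().startswith("LICENSE") or name == "COPYING"
               for name in repo_files)

def spdx_identifiers(source_text):
    """Collect SPDX-License-Identifier declarations from source code."""
    return [line.split(":", 1)[1].strip()
            for line in source_text.splitlines()
            if "SPDX-License-Identifier:" in line]

found = has_license_file(["README.md", "LICENSE", "src/app.js"])
ids = spdx_identifiers("// SPDX-License-Identifier: MIT\nconsole.log('hi');")
```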

    Examples

    1. The Todo Group has a collection on GitHub of Open Source Policies which can provide useful inspiration in describing how various companies got into open source and what their policy for such community work is.

    Tests

    Procedure

    1. Check that a publicly facing open source policy exists on the website.
    2. Check that the policy contains content about creation and contribution.
    3. Check that the content is written in plain English for readability.

    Expected Results

    1. All checks above are true.
  32. BSPM28-3: Identify Contributions Made to Open Source Projects

    Applicability

    This technique is Advisory to meet 5.28 Use Open Source Tools.

    Description

    Provide a mechanism to identify any open source work an individual or business may have contributed which gives back to the community. Open source and collaborative projects are a sustainable way of helping to evolve knowledge around digital issues, and as such should be referenced when meeting sustainability objectives (especially high-profile work).

    Machine testing open source will first require identifying a public repository associated with the website or project (usually indicated by a social link button). Once this has been established, the projects to which the group or its members are the originator or a contributor (including activity levels) can be weighed to determine compliance with the success criteria.
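
    The activity-level check in the procedure below could be sketched as a gap analysis over monthly contribution counts. The gap threshold is an illustrative assumption:

```python
def activity_is_consistent(monthly_commits, max_gap_months=3):
    """True if there is no run of inactive months at or above the threshold."""
    gap = longest = 0
    for count in monthly_commits:
        gap = gap + 1 if count == 0 else 0
        longest = max(longest, gap)
    return longest < max_gap_months

steady = activity_is_consistent([5, 3, 4, 2, 6, 1])   # no inactive months
lapsed = activity_is_consistent([5, 0, 0, 0, 0, 2])   # four-month gap
```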

    Examples

    1. Many web designers and developers have a link to their GitHub repository on their personal portfolio. This allows prospective clients and casual viewers to see all of the cool open source work they have contributed to.

    Tests

    Procedure

    1. Check a product or service's website for a link to a public repository.
    2. Check for projects where group members are the originator or a contributor.
    3. Check that activity levels for open source work are consistent rather than irregular.

    Expected Results

    1. All checks above are true.
  33. BSPM29-2: Maintain Transparent Communication When Problems Occur

    Applicability

    This technique is Advisory to meet 5.29 Create a Business Continuity and Disaster Recovery Plan.

    Description

    During downtime or when issues occur, it's critical that visitors or users can understand what is happening to the product or service, why it is happening (if known), and when the service is likely to be restored. This will help ensure continuity of service and, if a backup or alternative can be offered, provide a way of reducing the resources wasted by repeatedly reloading the non-functioning feature.

    Machine testing how visitors are kept informed and provided with alternative methods of interacting with the product or service could involve checking for a third-party provider that takes up the slack when the first party becomes unavailable. The use of archived material (identified by the content), or content provided on some kind of automatically updating channel that lists ongoing issues, could also be an indicator.
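
    The automatically updating channel mentioned above could be checked roughly as follows. The feed shape (a JSON list of component states) and the status values are assumptions; real status providers each use their own formats:

```python
import json

def all_operational(status_feed_json):
    """Parse a hypothetical status feed and report any non-operational
    components, so a tool can verify transparent communication is in place."""
    components = json.loads(status_feed_json)
    down = [c["name"] for c in components if c["status"] != "operational"]
    return (not down, down)

feed = ('[{"name": "api", "status": "operational"},'
        ' {"name": "cdn", "status": "degraded"}]')
ok, down = all_operational(feed)
```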

    Examples

    1. System status pages can be a useful customer metric for keeping up-to-date with ongoing issues and reducing the burden on customer support if something widespread happens to a product or service. They do create reliance on a third party, but their sustainability benefits relate to business governance, and the data from the product can feed into improving ideation, which can include improving all-round sustainability.

    Tests

    Procedure

    1. Check for backups that are stored in a read-only, recoverable state (internal access).
    2. Check for a public-facing policy relating to disaster recovery and continuity.
    3. Check for a system status page and that everything is running correctly.

    Expected Results

    1. All checks above are true.

Test Suite

Interoperability is important to web professionals. Better interoperability among implementations means that web professionals can create websites, applications, and tooling designed to be sustainable, and ensure that they are successfully repeatable (and testable) in several environments. It means reducing the potential for Web sustainability issues to occur in complex projects, and reducing implementation time where automation and tooling can assist during the creation, development, and maintenance process. Writing tests in a way that allows them to be run in all browsers gives implementors confidence that they are shipping software that is compatible and consistent with other implementations.

Good test suites drive interoperability. They are a key part of making sure web standards are implemented correctly and consistently. More tests encourage more interoperability; wrong tests drive interoperability on wrong behavior. As such, Web Sustainability needs good test suites. It's an evolving field and most of the test suites are still works in progress, so they may contain errors.

The primary focus of this test suite is to provide interoperability for tool makers (in terms of automation and compliance with the WSGs). In addition, we also aim to provide meaningful testable metrics that can be measured and help identify the true impact of the Success Criteria within the WSGs, and therefore better understand the impact that digital has on the ecosystem.

Implementors could use the dataset provided to expand upon the results and offer more nuanced research. Additionally, there is the potential for toolmakers to create more accurate products that measure the carbon impact of products. While the scope of such matters may stretch beyond this group's remit, the results could feed back into, and impact, further iterations of our work.

Table of Results

The table below contains links to our test suite results, generated using a cross-section of machine-readable techniques from the previous section.

The tests themselves (along with any corresponding reports generated) are stored on GitHub in our public repository under the test-suite folder.

UX WebDev Hosting Business
Guideline SC 1 2 3 4 5 1 2 3 4 5 6 1 2 3 4 5 6 1 2 3 4 5 6
1 PASS PASS PASS PASS FAIL FAIL PASS FAIL PASS PASS FAIL FAIL PASS PASS
2 PASS PASS PASS FAIL FAIL PASS PASS PASS PASS
3 PASS PASS PASS PASS FAIL FAIL PASS
4 FAIL FAIL PASS PASS PASS FAIL
5 FAIL FAIL PASS PASS FAIL FAIL FAIL PASS FAIL
6 PASS FAIL PASS PASS PASS FAIL FAIL PASS FAIL FAIL FAIL PASS PASS
7 PASS PASS PASS PASS PASS PASS PASS PASS PASS FAIL
8 PASS PASS PASS PASS PASS PASS FAIL FAIL FAIL FAIL FAIL
9 PASS PASS PASS PASS PASS FAIL PASS FAIL PASS PASS PASS FAIL
10 PASS PASS PASS PASS PASS PASS PASS PASS FAIL FAIL
11 PASS FAIL PASS PASS PASS PASS PASS FAIL PASS FAIL FAIL FAIL FAIL
12 PASS PASS PASS PASS PASS PASS FAIL PASS PASS FAIL PASS FAIL FAIL FAIL PASS FAIL PASS FAIL
13 PASS PASS FAIL
14 PASS PASS PASS PASS PASS PASS PASS PASS FAIL FAIL FAIL
15 PASS PASS PASS PASS PASS PASS FAIL FAIL
16 PASS PASS PASS PASS PASS PASS PASS FAIL PASS
17 FAIL PASS PASS PASS PASS PASS PASS FAIL PASS FAIL
18 PASS PASS PASS FAIL
19 PASS PASS PASS PASS PASS PASS FAIL PASS FAIL FAIL FAIL
20 PASS PASS PASS PASS PASS FAIL FAIL
21 PASS PASS FAIL FAIL PASS FAIL PASS PASS
22 FAIL FAIL PASS PASS FAIL PASS FAIL FAIL FAIL FAIL
23 PASS PASS PASS PASS PASS PASS FAIL
24 PASS FAIL FAIL FAIL FAIL PASS FAIL
25 PASS PASS PASS FAIL
26 PASS PASS FAIL FAIL FAIL FAIL
27 FAIL PASS PASS FAIL PASS FAIL
28 FAIL PASS FAIL PASS
29 PASS PASS PASS FAIL PASS FAIL PASS

If attempting to create a test to include within or expand the test suite, remember to follow these review guidelines and note that our tests follow the formatting structure of tests created for the CSS Working Group (for interoperability as well as convenience). For an indication of how tests should be structured, use our documented template test as a starting point.

Also, remember that qualitative and quantitative tests are equally valuable as long as they can be tested by machine (and thus automated in some way). As tests will provide a means of identifying whether Success Criteria can offer Automated testing over Manual interventions, notifications of this potential attribute (and methods) will be referenced within the main specification.

Glossary

Commissioner

The person, team of people, organization, in-house department, or other entity that commissioned the evaluation. In many cases the evaluation commissioner may be the website owner or website developer; in other cases it may be another entity.

Conformance

Satisfying all the requirements of a given standard, guideline, or specification.

Evaluator

The person, team of people, organization, in-house department, or other entity responsible for carrying out the evaluation.

Interoperability

The ability of two or more systems or components to exchange information and to use the information that has been exchanged.

Web page

A non-embedded resource obtained from a single URI using HTTP plus any other resources that are used in the rendering or intended to be rendered together with it by a user agent.

Web page state

Dynamically generated web pages sometimes provide significantly different content, functionality, and appearance depending on the user, interaction, device, and other parameters. In the context of this methodology, such web page states can be treated either as ancillary to web pages (recorded as an additional state of a web page in a web page sample) or as individual web pages.

Website

A coherent collection of one or more related web pages that together provide common use or functionality. It includes static web pages, dynamically generated web pages, and mobile websites and applications.

Website developer

The person, team of people, organization, in-house department, or other entity that is involved in the website development process, including but not limited to content authors, designers, front-end developers, back-end programmers, quality assurance testers, and project managers.

Website owner

The person, team of people, organization, in-house department, or other entity that is responsible for the website.

Acknowledgments

Additional information about participation in the Sustainable Web Design Community Group (SWD-CG) can be found within the wiki of the community group.

Participants of the SWD-CG Active in the Development of This Document

Alexander Dawson, Andy Blum, Francesco Fullone, Ian Jacobs, Laurent Devernay, Len Dierickx, Ɓukasz Mastalerz, Mike Gifford, Morgan Murrah, Thibaud Colas, Tim Frick, Tzviya Siegman, Zoe Lopez-Latorre