This document lists the requirements of the W3C Testing Resource Center.

Introduction
============

The Resource Center is meant to be the canonical place to find all W3C testing-related information. It should be the home of a short but complete documentation center which explains the tools and processes used across W3C to write, review, and run tests. It should be home to an instance of the test framework, so that it is easy to run entire test suites without having to install anything. Finally, it should host all test-related data and the means to access that data (dashboards, an API, plugins, etc.).

Documentation
=============

Documentation should be authored in Markdown, versioned using Git, and published using a static-site generator such as Jekyll. Initially, GitHub Pages can be used for hosting under a custom domain name. This may be revisited should this solution not meet our quality requirements.

A list of [pre-existing test-related documentation](http://www.w3.org/wiki/Testing/Resource_Center_TF/Existing_Documentation) is available on the wiki, as is a list of the [documentation we want to have on launch](http://www.w3.org/wiki/Testing/Resource_Center_TF/Documentation).

The Resource Center Task Force is directly responsible for collecting, authoring, and organizing this documentation.

Test Framework
==============

The requirements for the test framework are described in [Test Framework Requirements](http://w3c.github.com/testing-task-forces/test-framework.html). These include deployment of an instance of the framework within the Test Center and collection of the results in a database.

Test Coverage Data
==================

Spec test coverage is considerably harder to define than code coverage. We will be using several heuristics to define coverage, such as word count, lines of WebIDL, number of [[RFC2119]] keywords, and number of algorithm steps, and will refine them as we go along.
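Such heuristics can be computed directly from a spec's markup. Below is a minimal sketch in Python, assuming the spec is available as a local HTML file; the tag stripping is deliberately crude, the `class="idl"` convention for WebIDL blocks is common in W3C specs but not universal, and all function names here are illustrative rather than part of any agreed tooling.

```python
import re
import sys

# RFC 2119 keywords are matched in uppercase only, since that is how
# specs signal normative (and hence testable) requirements.
RFC2119 = re.compile(r"\b(MUST(?: NOT)?|SHALL(?: NOT)?|SHOULD(?: NOT)?|"
                     r"REQUIRED|RECOMMENDED|MAY|OPTIONAL)\b")
# Many W3C specs mark WebIDL blocks with class="idl"; this is an assumption.
IDL_BLOCK = re.compile(r'<pre[^>]*class="[^"]*idl[^"]*"[^>]*>(.*?)</pre>', re.DOTALL)

def spec_metrics(html):
    """Compute rough per-spec coverage heuristics from raw spec HTML."""
    text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping
    return {
        "words": len(text.split()),
        "rfc2119_keywords": len(RFC2119.findall(text)),
        "webidl_lines": sum(block.count("\n") + 1
                            for block in IDL_BLOCK.findall(html)),
    }

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8") as f:
        print(spec_metrics(f.read()))
```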
We will be using two scripts to collect data. One script will crawl the test repository and count the number of tests, assertions, or test pages (still TBD) per spec and per spec section, using the available metadata. The other script will parse the specifications using the heuristics described above and estimate the required number of tests. There needs to be a way to override and/or tune this estimation, perhaps within the test repository itself, so that known coverage can be entered manually and different types of specs can be handled.
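As a rough illustration of the first script, the sketch below walks a checkout of the test repository and tallies test files per spec and per spec section, assuming tests point at the sections they cover through `<link rel="help" href="...">` metadata (the convention used in W3C test suites). The file extensions, repository layout, and the attribute order matched by the regular expression are simplifying assumptions.

```python
import os
import re
import json
from collections import Counter
from urllib.parse import urldefrag

# Assumes rel appears before href within the tag; a real crawler
# should use a proper HTML parser instead of a regular expression.
HELP_LINK = re.compile(r"""<link[^>]+rel=["']help["'][^>]+href=["']([^"']+)["']""",
                       re.IGNORECASE)

def count_tests(repo_root):
    """Tally test files per spec and per spec section (URL fragment)."""
    per_spec, per_section = Counter(), Counter()
    for dirpath, _, filenames in os.walk(repo_root):
        for name in filenames:
            if not name.endswith((".html", ".htm", ".xht")):
                continue
            with open(os.path.join(dirpath, name), encoding="utf-8",
                      errors="replace") as f:
                for href in HELP_LINK.findall(f.read()):
                    spec, fragment = urldefrag(href)
                    per_spec[spec] += 1
                    # A fragment usually identifies a specific spec section.
                    per_section[href if fragment else spec] += 1
    return {"per_spec": per_spec, "per_section": per_section}

if __name__ == "__main__":
    print(json.dumps(count_tests("."), indent=2))
```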
Test coverage data will be available through an API, in JSON format. It will also be available as a plugin that can be included in WebPlatform.org and directly within specs, to provide contextual information on the test coverage of a given feature.

Test Results
============

Test results data will be available through an API, in JSON format. It will also be available as a plugin that can be included in WebPlatform.org and directly within specs, to provide contextual information on the test results of a given feature.

API-accessible test results data will include (a hypothetical example record is sketched at the end of this document):

* time of test
* user agent header
* other identifying information about the device/UA under test, if available
* test identity
* result (pass, fail, etc.)

Test results will be stored indefinitely.

Test Sign-up Area
=================

Contributor Dashboard
=====================
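To make the Test Results list above concrete, here is a hypothetical result record as it might be returned by the API; every field name and value is illustrative, since the actual JSON schema is still to be defined.

```python
import json

# Every field name and value below is illustrative; the actual schema is TBD.
result = {
    "timestamp": "2013-04-23T18:25:43Z",                # time of test
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:20.0) "
                  "Gecko/20100101 Firefox/20.0",        # user agent header
    "device": {"screen": "1280x800", "touch": False},   # other identifying info, if available
    "test": "/dom/nodes/Node-appendChild.html",         # test identity
    "status": "PASS",                                   # result (pass, fail, etc.)
}
print(json.dumps(result, indent=2))
```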