The fact that something is possible to measure, and may even be highly desirable and useful to expose to developers, does not mean that it can be exposed as a runtime JavaScript API in the browser, due to various privacy and security constraints. The goal of this document is to explain why that is the case and to provide guidance for what needs to be considered when making or evaluating a proposal for such APIs.

This is a work in progress and may change without notice.

Introduction

Performance is a feature. Modern browsers provide extensive tooling to help developers understand and optimize their applications: network-layer insights about requests (timing, cache state, request and response payloads, etc.); rendering insights (cost of layout and style recalculations, image decoding, per-layer rendering costs, idle time, etc.); CPU and memory profiling; JavaScript debugging; and much more. As a developer, you can open the "developer tools" console in any browser and get immediate access to a rich set of such data.

However, experience shows that data gathered in this mode—by the developer, on their own "development" machine—is not sufficient. Real users run applications on a vast array of different hardware and under conditions that are hard or impossible to replicate, or even anticipate, in the lab. Hence the need for Real User Monitoring (RUM), which exposes a restricted subset of performance APIs and metrics to the application at runtime.

Why restricted? Due to many privacy and security considerations:

  1. The fact that a certain capability or metric can be recorded by the platform or browser, and may even be exposed in the browser's developer tools, does not automatically mean that it can or should be exposed as a runtime API, because such data may reveal private or sensitive information about the visitor.
  2. APIs and metrics that are exposed at runtime to the application (i.e. RUM APIs) must take additional precautions against unintentionally enabling new or more accurate privacy and security attacks against the visitor:
    1. Some APIs and metrics may require additional opt-in mechanisms.
    2. Some APIs and metrics may need to provide data at lower resolution, e.g. through reduced accuracy, aggregation, or delayed or out-of-band delivery.
    3. Some APIs and metrics may not be possible to expose at all, even with the above precautions in place.

In short, just because something is possible to measure, and perhaps is even highly desirable and useful to many developers, does not mean that it can be exposed as a runtime JavaScript API. The goal of this document is to explain why that is the case and to provide guidance for what needs to be considered when making or evaluating a proposal for such APIs.

Types of profiling

Local ("DevTools") profiling

Whenever you open the developer tools console in the browser, or use other tools running on the same device to collect some form of profile, you are able to gather detailed traces about each and every sub-component of the system: everything from which JavaScript functions are executing, all the way down to what each thread is doing in your favorite browser, what operations the OS and the underlying hardware are processing, and so on.

Such profiling tools capture rich traces with a lot of data and are often invaluable in helping identify and resolve performance issues. The fact that the developer can observe this data about their system does not raise new privacy or security concerns: the developer is collecting information about their own system, and if personally identifiable information (PII) is present in the trace (e.g. authentication or user data) then it is their own.

  1. The developer controls the device; the device contains the developer's PII.
  2. The act of collecting a trace from their own device, via whatever tool, is an implicit opt-in to gathering a detailed report that may contain their private data.

Developers should take precautions and scrub PII and other sensitive data if and when such tracing reports are shared with others, especially in public!

Remote ("RUM") profiling

Understanding how the application performs on the end user's device is critical to delivering a well-functioning product. However, collecting such "real user measurement" data is a form of remote (RUM) profiling, which is subject to additional considerations:

  1. Performance APIs and metrics must follow the same-origin security model, which is a critical security mechanism for isolating web applications, and users' data, from each other—e.g. cookies, navigation history, databases and so on.

    The above requirement is further complicated by the fact that the browser offers a number of shared caches that many performance-minded developers are understandably interested in (e.g. the HTTP response cache, memory caches, etc.), but that may be (ab)used in a side-channel attack to "leak" data about the user—e.g. the timing of a response may reveal whether the user has previously or recently visited another origin; the amount of time to execute some code, or to paint some content, can reveal the user's state; and so on. A sketch of such a cache-timing probe follows this list.

  2. The data contained on the device belongs to the owner of the device—not the developer or the application owner—and likely contains private and sensitive information both about the owner and about other applications on their device. As a result, the browser cannot allow the application the same level of "deep profiling" access by default, as that may reveal sensitive data about the user and other applications on the device.
  3. The APIs and metrics that are exposed to the application must take precautions against enabling new or higher-resolution privacy and security attacks against the user.
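
For illustration, a minimal JavaScript sketch of the kind of cache-timing probe described in (1); the URL and the 50 ms threshold are illustrative assumptions, not measured values:

    // Illustrative attack sketch: guess whether a cross-origin resource is
    // cached by timing a fetch. A fast response may indicate an HTTP cache
    // hit, i.e. that the user has visited the origin before.
    async function probeCache(url) {
      const start = performance.now();
      await fetch(url, { mode: 'no-cors', cache: 'force-cache' });
      return performance.now() - start < 50; // hypothetical threshold
    }

    probeCache('https://social-network.example/avatar.png')
      .then((visited) => console.log('Origin likely visited before:', visited));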

In practice, and as a result of the above considerations, many RUM APIs are limited in what they can expose to developers: some "deep profiling" use cases are simply not possible; some require reduced resolution; some may surface that a performance problem exists but may not be able to expose why the problem is there.

We cannot and should not expect parity between local and remote profiling capabilities.

Privacy and Security considerations

Fingerprinting and security questionnaires

Review and consider the implications of the metric or API against the considerations in:

  1. Fingerprinting Guidance for Web Specification Authors
  2. Self-Review Questionnaire: Security and Privacy

Same-origin security model

Ensure that the proposed API or metric conforms with the same-origin security model: can the new API be used to reveal information about another origin? If so, it may need to be restricted, or disallowed.

For example, the Resource Timing API ([[RESOURCE-TIMING]]) provides high-resolution timestamps for each resource fetch. However, some resources are fetched from other origins, and exposing high-resolution timestamps about them can leak data about the user's navigation history. To mitigate this, Resource Timing defines the Timing-Allow-Origin response header, an opt-in mechanism that must be provided by the origin serving the resource before such data is exposed to another origin.
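
A minimal sketch of what this looks like from the application's side; the loop below assumes the page has already fetched some cross-origin resources:

    // Without a matching Timing-Allow-Origin header on the response, the
    // detailed timestamps of a cross-origin Resource Timing entry are
    // reported as zero; only coarse data such as startTime and duration
    // remain available.
    for (const entry of performance.getEntriesByType('resource')) {
      console.log(entry.name, {
        startTime: entry.startTime,
        duration: entry.duration,
        // Zeroed for cross-origin responses without Timing-Allow-Origin:
        domainLookupStart: entry.domainLookupStart,
        connectStart: entry.connectStart,
        responseStart: entry.responseStart,
      });
    }

The third-party server opts in by sending, for example, "Timing-Allow-Origin: https://app.example" (or "*") on the response; the origin shown here is illustrative.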

Side-channel and timing attacks

Can the new API or metric be (ab)used to enable new or more accurate forms of side-channel or timing attacks?

For example, application developers may want to know precise information about renderer-related activities: if and when layout or style recalculation occurs, how long each frame took to render, if and when some content is painted, and so on. However, access to such high-resolution data could be (ab)used by an attacker to mount a number of high-resolution attacks against the user, for example to infer what content is being rendered or the user's state within the application.

Note that some forms of the above attacks are already possible today. However, the key question is not whether an attack is possible at all, but whether the new API would enable a higher-resolution (more accurate or much faster) form of it; the sketch below shows one such existing baseline.
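
For instance, any script can already approximate frame durations with requestAnimationFrame; a minimal sketch, where the 60 fps frame budget used as the threshold is an assumption:

    // Approximate per-frame durations from the deltas between successive
    // requestAnimationFrame timestamps; long deltas indicate slow frames.
    let last = performance.now();
    function onFrame(now) {
      const delta = now - last;
      last = now;
      if (delta > 1000 / 60) { // frames slower than a 60 fps budget
        console.log(`Slow frame: ${delta.toFixed(1)} ms`);
      }
      requestAnimationFrame(onFrame);
    }
    requestAnimationFrame(onFrame);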

As a practical example, to address the above concerns the Frame Timing API ([[FRAME-TIMING]]) was specifically designed to surface only slow frames—i.e. it provides no explicit signals about paint, layout, or the exact duration of a frame. As such, Frame Timing does not expose any attack capabilities beyond what is already possible with existing methods.

As a corollary to the above, RUM APIs are limited in what they can expose with respect to the rendering pipeline. Proceed with caution in this space.

As another example, application developers may want to know the accurate memory usage of their applications, so that they can detect memory leaks and regressions in their code, adjust application logic at runtime, and so on.

This is valuable data, but the same API could also be (ab)used to measure the memory footprint of third-party resources and dependencies, which can leak information about what content is being rendered—e.g. an attacker can figure out whether the user is authenticated by comparing the memory footprint of a loaded iframe, which varies depending on whether the user is presented with a login page or with authenticated content.

One plausible mitigation could be to reduce the accuracy of the reported memory use. However, even then, it is not immediately clear what the minimum thresholds should be, or whether such data remains useful to developers once the restriction is applied. The sketch below illustrates the idea.
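
For illustration, a minimal sketch of such a reduced-accuracy scheme; getMemoryUsage() and the 10 MB bucket size are hypothetical, not a real API or a recommended threshold:

    // Quantize a raw memory measurement into coarse buckets before
    // exposing it, so that small cross-origin differences (e.g. login
    // page versus authenticated content) are hidden.
    const BUCKET = 10 * 1024 * 1024; // hypothetical 10 MB bucket size

    function reportedMemory(rawBytes) {
      return Math.round(rawBytes / BUCKET) * BUCKET;
    }

    // e.g. 123456789 bytes is reported as 125829120 (the 120 MB bucket)
    console.log(reportedMemory(123456789));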

Best practices for Performance APIs

  1. Provide the minimum resolution to address the use case.

    Real-time, high-resolution metrics enable higher-accuracy attacks. Consider strategies to reduce resolution:

    • Round or bucket the provided metrics. For example, the resolution of DOMHighResTimeStamp ([[HR-TIME]]) is capped at 5 microseconds (see the sketch after this list).
    • Delay delivery of the metric and batch multiple metrics together. Processing metrics in batches is more efficient, and delayed delivery reduces the accuracy and speed of many runtime attacks.
  2. Avoid synchronous performance APIs and metrics; use PerformanceObserver, as illustrated in the sketch after this list.

    • Synchronous delivery and access can create performance bottlenecks.
    • Asynchronous delivery can help slow down and mitigate some forms of attacks; see (1).
  3. Require opt-in mechanisms where necessary.

    It may be desirable to expose some forms of performance data about other origins that an application depends on. However, such data should only be made available with the consent of those origins.

    For example, timing data about cross-origin resource fetches is subject to the Timing-Allow-Origin opt-in, illustrated in the Same-origin security model section above.

  4. Protect origin-sensitive configuration and data.

    Some use cases, such as error or policy violation reporting, require that the configuration of where and whether reports are delivered be protected from other origins—e.g. if the page sets a security policy and wants to receive violation reports at a designated report-uri, then a third-party script should not be able to modify this policy. In such cases, it may not be possible to expose a JavaScript API at all, and a different mechanism (e.g. HTTP headers) must be used.

    For example, Network Error Logging ([[NETWORK-ERROR-LOGGING]]) intentionally does not expose a JavaScript API to configure where reports are delivered, as that may allow a malicious script to hijack and "hide" such failures.
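
    For illustration, such configuration is delivered via HTTP response headers rather than through a runtime API; the report group name and endpoint URL below are illustrative:

        Report-To: { "group": "network-errors", "max_age": 2592000,
                     "endpoints": [{ "url": "https://reports.example/upload" }] }
        NEL: { "report_to": "network-errors", "max_age": 2592000 }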

    Similarly, some types of data should only be delivered out-of-band to the origin, and should not be accessible to script at runtime.

    For example, Network Error Logging reports are delivered via the out-of-band Reporting API ([[REPORTING]]) to an endpoint designated by the origin. These reports are not exposed at runtime, as that could enable third-party resources to enumerate them and acquire new and private information: past navigation history, the user's IP address when the navigation was initiated, and other sensitive data.
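
To make best practices (1) and (2) concrete, a minimal sketch; the roundTo() helper and the choice of observed entry types are illustrative, while the 5 microsecond granularity mirrors [[HR-TIME]]:

    // Consume metrics through the asynchronous PerformanceObserver
    // interface: entries are delivered in batches rather than polled
    // synchronously. roundTo() shows the rounding/bucketing idea.
    function roundTo(value, granularityMs) {
      return Math.round(value / granularityMs) * granularityMs;
    }

    const observer = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // 0.005 ms = 5 microseconds, the [[HR-TIME]] granularity
        console.log(entry.name, roundTo(entry.startTime, 0.005));
      }
    });
    observer.observe({ entryTypes: ['resource', 'mark', 'measure'] });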

FAQ

  1. The API or metric exposes new, or more accurate, data that may have security and privacy implications. Does this mean we can't make it available to the web?

    No. Runtime attacks are an ever-present threat. The fact of their existence does not prevent us from making progress and exposing new and useful performance APIs and metrics to web applications. However, we need to be careful and consider the implications on a case-by-case basis: outline the risks, consider possible mitigations, understand and clearly document the tradeoffs, and solicit wide review before proceeding with implementation.

  2. Can we use out-of-band reporting (OOBR) to mitigate the above risks?

    No. OOBR can be used as a mechanism to isolate some types of sensitive configuration and report data that belong to the application origin from resources that belong to another origin and are executed by the application. However, the reverse is not true: merely using OOBR does not grant the application the privilege to report arbitrary data or metrics that may reveal information about another origin, or the user, as that would violate both the user's privacy and the same-origin policy enforced by the platform.

  3. But I really need "deep" profiling; are there any other options?

    Maybe. If you have administrative control of the user's device (e.g. via corporate policy), or can get the user to explicitly opt into a mode where they allow such profiling and reporting, you may be able to obtain higher-resolution profiling data. However, this is applicable only in select cases and is generally discouraged, as it is virtually guaranteed to reveal a significant amount of private and sensitive data about the user.

Acknowledgments

Sincere thanks to Philippe Le Hegaret, Todd Reifsteck, Nat Duca, and Yoav Weiss for their helpful comments and contributions to this work.