This is a work in progress and may change without notice.
However, experience shows that data gathered in this mode—by the developer, on their own "development" machine—is not sufficient. Real users run applications on a vast array of different hardware and under different conditions that are hard or impossible to replicate—and even anticipate—in the lab. Hence, Real User Monitoring (RUM), which provides a restricted subset of performance APIs and metrics.
Why restricted? Due to many privacy and security considerations:
Such profiling tools capture rich traces with a lot of data and are often invaluable in helping identify and resolve performance issues. The fact that the developer can observe this data about their own system does not expose new concerns from a privacy or security standpoint: the developer is collecting information about their system; if personally identifiable information (PII) is present in the trace (e.g. authentication or user data), then it is their own data.
Developers should take precautions and scrub PII and other sensitive data if and when such tracing reports are shared with others, especially in public.
Understanding how the application performs on the end user's device is critical to delivering a well-functioning product. However, collecting such real user measurement (RUM) data is a form of remote profiling, which is subject to additional considerations:
The above requirement is further complicated by the fact that the browser offers a number of shared caches that many performance-minded developers are understandably interested in (e.g. HTTP response cache, memory caches, etc.), but that may be (ab)used in a side-channel attack and "leak" data about the user—e.g. timing of a response may reveal if the user has previously or recently visited another origin; the amount of time to execute some code, or paint some content, can reveal the user's state; and so on.
In practice, and as a result of the above considerations, many RUM APIs are limited in what they can expose to developers: some "deep profiling" use cases are simply not possible; some require reduced resolution; some may surface that a performance problem exists but may not be able to expose why the problem is there.
We cannot and should not expect parity between local and remote profiling capabilities.
Review and consider the implications of the metric or API against the considerations in:
Ensure that the proposed API or metric conforms with the same-origin security model: can the new API be used to reveal information about another origin? If so, it may need to be restricted, or disallowed.
For example, Resource Timing API ([[RESOURCE-TIMING]]) provides
high-resolution timestamps for each resource fetch, but some resources
are fetched from different origins and exposing high-resolution
timestamps about such resources can leak data about the user's navigation
history. To mitigate this, the Resource Timing API defines the
Timing-Allow-Origin response header, which acts as an opt-in mechanism that must be
provided by the origin before such data is exposed to another origin.
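This opt-in check can be sketched roughly as follows; the function names, the simplified matching logic, and the entry shape are illustrative assumptions, not the exact algorithm defined in the Fetch specification (which also handles null origins, redirect chains, and more):

```typescript
// Simplified sketch of the Timing-Allow-Origin (TAO) check. All names and
// shapes here are illustrative; the normative algorithm lives in Fetch.
function passesTimingAllowOrigin(
  headerValue: string | null, // raw Timing-Allow-Origin response header, if any
  requestingOrigin: string    // origin of the document reading the timing data
): boolean {
  if (headerValue === null) return false; // no opt-in: withhold detailed timing
  const values = headerValue.split(",").map((v) => v.trim());
  // "*" opts in every origin; otherwise the requesting origin must be listed.
  return values.includes("*") || values.includes(requestingOrigin);
}

// When the check fails, detailed timing fields are zeroed rather than the
// entry being dropped entirely (hypothetical, minimal view of an entry).
interface TimingView {
  name: string;
  duration: number;     // coarse duration remains visible
  connectStart: number; // zeroed without opt-in
  responseStart: number; // zeroed without opt-in
}

function restrictEntry(entry: TimingView, allowed: boolean): TimingView {
  return allowed ? entry : { ...entry, connectStart: 0, responseStart: 0 };
}
```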
Can the new API or metric be (ab)used to enable new or more accurate forms of side-channel or timing attacks?
For example, application developers may want to know precise information about renderer-related activities: if and when layout or style-recalc occurs, how long each frame took to render, if and when some content is painted, and so on. However, access to such high-resolution data could be (ab)used by an attacker to launch a number of high-resolution attacks against the user:
Note that some forms of the above attacks are already possible today. However, the key question is not whether an attack is simply possible, but whether the new API could enable a higher-resolution (more accurate, or much faster) form of attack.
As a practical example, to address the above concerns the Frame Timing API ([[FRAME-TIMING]]) was specifically designed to surface only slow frames—i.e. it provides no explicit signals about paint, layout, or the exact duration of a frame. As such, Frame Timing does not expose any attack capabilities beyond what is already possible with existing methods.
As a corollary to the above, RUM APIs are limited in what they can expose with respect to the rendering pipeline. Proceed with caution in this space.
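One way to picture the "slow frames only" design above is as a threshold filter: frames under a budget are never surfaced, and surfaced frames carry only coarse fields. The budget value, record shape, and rounding choices below are illustrative assumptions, not taken from any specification:

```typescript
// Illustrative sketch of a "slow frames only" reporting policy: fast frames
// are never reported, and reported frames expose no paint/layout breakdown,
// only coarse timing. The 50 ms budget is a hypothetical value.
interface FrameSample { startTime: number; duration: number }

const SLOW_FRAME_BUDGET_MS = 50;

function reportableFrames(frames: FrameSample[]): FrameSample[] {
  return frames
    .filter((f) => f.duration > SLOW_FRAME_BUDGET_MS)
    // Round the values so the report cannot serve as a high-resolution timer.
    .map((f) => ({ startTime: Math.floor(f.startTime), duration: Math.ceil(f.duration) }));
}
```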
As another example, application developers may want to know the accurate memory usage of their applications, so that they can detect memory leaks and regressions in their code, adjust application logic at runtime, and so on.
This is valuable data, but the same API could also be (ab)used to measure the memory footprint of third-party resources and dependencies, which can leak information about what content is being rendered—e.g. an attacker can figure out if the user is authenticated by comparing the memory footprint of the loaded iframe, which can vary based on whether the user gets presented with a login page vs authenticated content.
One plausible mitigation to the above could be reducing the accuracy of the reported memory use. However, even then, it is not immediately clear what the minimum thresholds should be, or whether such data remains useful to developers once the restriction is applied.
Provide the minimum resolution necessary to address the use case.
Real-time, high-resolution metrics enable higher accuracy attacks. Consider strategies to reduce resolution:
For example, DOMHighResTimeStamp ([[HR-TIME]]) is capped at a resolution of 5 microseconds.
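As a sketch of this resolution-reduction strategy, a timestamp can be clamped to a coarser grid before being exposed. The function name and caller-chosen resolution are illustrative:

```typescript
// Clamp a timestamp to a coarser resolution before exposing it; a common
// mitigation against using the metric as a high-precision timer. The
// resolution is caller-chosen; 5 microseconds (0.005 ms) is the floor that
// [[HR-TIME]] mandates for DOMHighResTimeStamp.
function coarsenTimestamp(timestampMs: number, resolutionMs: number): number {
  return Math.floor(timestampMs / resolutionMs) * resolutionMs;
}
```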
Avoid synchronous performance APIs and metrics; use asynchronous delivery mechanisms, such as the performance timeline, instead.
Require opt-in mechanisms where necessary.
It may be desirable to expose some forms of performance data about other origins that an application depends on. However, such data should only be made available with the consent of those origins.
For example, timing data about cross-origin resource fetches is
only exposed when the cross-origin server provides an explicit opt-in
via the Timing-Allow-Origin response header.
Protect origin-sensitive configuration and data.
Some use cases, such as error or policy violation reporting, require
that changes in configuration of where and whether the report is
delivered must be protected from other origins—e.g. if the page sets a
security policy and wants to receive violation reports at a designated
report-uri, then a third-party script should not be able
to modify this policy. In such cases, it may not be possible to expose
such configuration to runtime script, and an out-of-band mechanism,
such as an HTTP response header,
must be used.
Some types of data should only be exposed to the origin, and should not be accessible to script.
For example, Network Error Logging reports are delivered via an out-of-band reporting mechanism ([[REPORTING]]) to the endpoint designated by the origin. These reports are not exposed at runtime, as that could enable third-party resources to enumerate them and acquire new and private information: past navigation history, the user's IP address when the navigation was initiated, and other sensitive data.
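As a sketch, an origin might designate its out-of-band reporting endpoint with response headers along these lines; the endpoint URL and group name are placeholders:

```http
Report-To: {"group": "network-errors", "max_age": 2592000, "endpoints": [{"url": "https://example.com/reports"}]}
NEL: {"report_to": "network-errors", "max_age": 2592000}
```

Because the endpoint is configured via headers rather than script, a third-party resource running on the page cannot read or redirect the reports.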
The API or metric exposes new, or more accurate, data that may have security and privacy implications. Does this mean we can't make it available to the web?
No. Runtime attacks are an ever-present and existing threat. The fact of their existence does not prevent us from making progress and exposing new and useful performance APIs and metrics to web applications. However, we need to be careful and consider their implications on a case-by-case basis: outline the risks, consider possible mitigations, understand and clearly document the tradeoffs, and solicit wide review before proceeding with implementation.
Can we use out-of-band reporting (OOBR) to mitigate above risks?
No. OOBR may be used as a mechanism to isolate some types of sensitive configuration and report data that belongs to the application origin from resources that belong to another origin and are executed by the application. However, the reverse is not true: mere use of OOBR does not grant the application the privilege to report arbitrary data or metrics that may reveal information about another origin, or about the user, as that would violate both the user's privacy and the same-origin policy enforced by the platform.
But I really need "deep" profiling, any other options?
Maybe. If you have administrative control of the user's device (e.g. via corporate policy), or can get the user to explicitly opt in to a mode where they allow such profiling and reporting, you may be able to obtain higher-resolution profiling data. However, this is only applicable in select cases and is generally discouraged, as it is virtually guaranteed to reveal a significant amount of private and sensitive data about the user.
Sincere thanks to Philippe Le Hegaret, Todd Reifsteck, Nat Duca, and Yoav Weiss for their helpful comments and contributions to this work.