WebVR

Editor’s Draft,

This version:
https://w3c.github.io/webvr/
Issue Tracking:
GitHub
Inline In Spec
Editors:
(Mozilla)
(Google)
(Mozilla)
(Mozilla)
(Microsoft)
(Microsoft)
Participate:
File an issue (open issues)
Mailing list archive
W3C’s #webvr IRC

Abstract

This specification describes support for accessing virtual reality (VR) devices, including sensors and head-mounted displays on the Web.

Status of this document

This specification was published by the WebVR Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.

Changes to this document may be tracked at https://github.com/w3c/webvr/commits/gh-pages.

If you wish to make comments regarding this document, please send them to public-webvr@w3.org (subscribe, archives).

DO NOT IMPLEMENT

The version of the WebVR API represented in this document is incomplete and changing rapidly. Do not attempt to implement it at this time.

1. Introduction

Hardware that enables Virtual Reality applications requires high-precision, low-latency interfaces to deliver an acceptable experience. Other interfaces, such as device orientation events, can be repurposed to surface VR input but doing so dilutes the interface’s original intent and often does not provide the precision necessary for high-quality VR. The WebVR API provides purpose-built interfaces to VR hardware to allow developers to build compelling, comfortable VR experiences.

2. Security, Privacy, and Comfort Considerations

The WebVR API provides powerful new features which bring with them several unique privacy, security, and comfort risks that user agents must take steps to mitigate.

2.1. Gaze Tracking

While the API does not yet expose eye tracking capabilities, a lot can be inferred about where the user is looking by tracking the orientation of their head. This is especially true of VR devices that have limited input capabilities, such as Google Cardboard, which frequently require users to control a "gaze cursor" with their head orientation. This means that it may be possible for a malicious page to infer what a user is typing on a virtual keyboard or how they are interacting with a virtual UI based solely on monitoring their head movements. For example: if not prevented from doing so, a page could estimate what URL a user is entering into the user agent’s URL bar.

To prevent this risk the UA MUST blur the active session when the user is interacting with sensitive, trusted UI such as URL bars or system dialogs. Additionally, to prevent a malicious page from being able to monitor input on other pages, the UA MUST blur the active session on non-focused pages.

2.2. Trusted Environment

If the virtual environment does not consistently track the user’s head motion with low latency and at a high frame rate the user may become disoriented or physically ill. Since it is impossible to force pages to produce consistently performant and correct content the UA MUST provide a tracked, trusted environment and a VR Compositor which runs asynchronously from page content. The compositor is responsible for compositing the trusted and untrusted content. If content is not performant, does not submit frames, or terminates unexpectedly the UA should be able to continue presenting a responsive, trusted UI.

Additionally, page content has the ability to make users uncomfortable in ways not related to performance. Badly applied tracking, strobing colors, and content intended to offend, frighten, or intimidate are examples of content which may cause the user to want to quickly exit the VR experience. Removing the VR device in these cases may not always be a fast or practical option. To accommodate this, the UA SHOULD provide users with an action, such as pressing a reserved hardware button or performing a gesture, that escapes out of WebVR content and displays the UA’s trusted UI.

When navigating between pages in VR the UA should display trusted UI elements informing the user of the security information of the site they are navigating to which is normally presented by the 2D UI, such as the URL and encryption status.

2.3. Context Isolation

The trusted UI must be drawn by an independent rendering context whose state is isolated from any rendering contexts used by the page. (For example, any WebGL rendering contexts.) This is to prevent the page from corrupting the state of the trusted UI’s context, which may prevent it from properly rendering a tracked environment. It also prevents the possibility of the page being able to capture imagery from the trusted UI, which could lead to private information being leaked.

Also, to prevent CORS-related vulnerabilities, each page will see a new instance of objects returned by the API, such as VRDevice and VRSession. Attributes such as the source set by one page must not be readable by another page. Similarly, methods invoked on the API MUST NOT cause an observable state change on other pages. For example: No method will be exposed that enables a system-level orientation reset, as this could be called repeatedly by a malicious page to prevent other pages from tracking properly. The UA MUST, however, respect system-level orientation resets triggered by a user gesture or system menu.

2.4. Fingerprinting

Given that the API describes the hardware available to the user and its capabilities, it will inevitably provide additional surface area for fingerprinting. While it’s impossible to completely avoid this, steps can be taken to mitigate the issue, such as ensuring that device names are reasonably generic and don’t contain unique IDs. (For example: "Daydream View" instead of "Daydream View, Pixel XL (Black) - Serial: 1234-56-7890")

Discuss use of sensor activity as a possible fingerprinting vector.

3. Device Enumeration

3.1. VR

interface VR : EventTarget {
  // Methods
  Promise<sequence<VRDevice>> getDevices();

  // Events
  attribute EventHandler ondeviceconnect;
  attribute EventHandler ondevicedisconnect;
  attribute EventHandler onnavigate;
};

getDevices() Returns a Promise which resolves to a list of available VRDevices.

ondeviceconnect is an Event handler IDL attribute for the deviceconnect event type.

ondevicedisconnect is an Event handler IDL attribute for the devicedisconnect event type.

onnavigate is an Event handler IDL attribute for the navigate event type.
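
The following non-normative sketch listens for devices being connected and disconnected rather than polling getDevices() repeatedly. (navigator.vr is the entry point defined by the Navigator partial interface at the end of this document.)
navigator.vr.addEventListener("deviceconnect", event => {
  // event is a VRDeviceEvent; its device attribute identifies the
  // newly connected hardware.
  console.log("VR device connected: " + event.device.deviceName);
});

navigator.vr.addEventListener("devicedisconnect", event => {
  console.log("VR device disconnected: " + event.device.deviceName);
});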

3.2. VRDevice

interface VRDevice : EventTarget {
  // Attributes
  readonly attribute DOMString deviceName;
  readonly attribute boolean isExternal;

  attribute VRSession? activeSession;

  // Methods
  Promise<boolean> supportsSession(VRSessionCreateParametersInit parameters);
  Promise<VRSession> requestSession(VRSessionCreateParametersInit parameters);

  // Events
  attribute EventHandler onsessionchange;
  attribute EventHandler onactivate;
  attribute EventHandler ondeactivate;
};

A VRDevice represents a physical unit of VR hardware that can present imagery to the user somehow. On desktop devices this may take the form of a headset peripheral; on mobile devices it may represent the device itself in conjunction with a viewer harness. It may also represent devices without the ability to present content in stereo but with advanced (6DoF) tracking capabilities.

deviceName returns a human-readable string describing the VRDevice.

isExternal returns true if the VRDevice hardware is a separate physical device from the system’s main device.

A VRDevice has an active session, initially null, which is the VRSession that is currently accessing and/or presenting to the device. Only one session per page can be active for a given device at a time.

In order to set or retrieve the active session a page must request a session from the device using the requestSession() method. When invoked it MUST return a new Promise promise and run the following steps in parallel:

  1. If the requested session description is not supported by the device, reject promise and abort these steps.

  2. If the device’s active session matches the requested session description, resolve promise with the active session and abort these steps.

  3. If the requested session description requires a user gesture and the algorithm is not triggered by user activation, reject promise and abort these steps.

  4. If another page has an exclusive session for the device, reject promise and abort these steps.

  5. Let nextSession be a new VRSession which matches the session description.

  6. Let prevSession be the current active session.

  7. Fire an event named sessionchange on the device with its session attribute initialized to nextSession.

  8. Set the active session to nextSession.

  9. If prevSession is not null, end prevSession.

  10. Resolve promise with the active session.

When the supportsSession() method is invoked it MUST return a new Promise promise and run the following steps in parallel:

  1. If the requested session description is supported by the device, resolve promise with true.

  2. Else resolve promise with false.
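
The following non-normative sketch tests for exclusive session support before advertising VR features, assuming vrDevice was obtained as shown in the example at the end of this section:
// supportsSession() resolves with a boolean and should not engage the
// VR hardware or begin presenting.
vrDevice.supportsSession({ exclusive: true }).then(supported => {
  if (supported) {
    // Reveal an "Enter VR" button or similar UI affordance.
  }
});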

The activeSession IDL attribute’s getter MUST return the VRDevice's active session.

onsessionchange is an Event handler IDL attribute for the sessionchange event type.

onactivate is an Event handler IDL attribute for the activate event type.

ondeactivate is an Event handler IDL attribute for the deactivate event type.

The following code finds the first available VRDevice.
let vrDevice;

navigator.vr.getDevices().then(devices => {
  // Use the first device in the array if one is available. If multiple
  // devices are present, you may want to present the user with a way to
  // select which device to use.
  if (devices.length > 0) {
    vrDevice = devices[0];
  }
});

4. Session

4.1. VRSession

interface VRSession : EventTarget {
  // Attributes
  readonly attribute VRDevice device;
  readonly attribute VRSessionCreateParameters createParameters;

  attribute double depthNear;
  attribute double depthFar;
  attribute VRLayer baseLayer;

  // Methods
  VRSourceProperties getSourceProperties(optional double scale);
  Promise<VRFrameOfReference> createFrameOfReference(VRFrameOfReferenceType type);
  VRDevicePose? getDevicePose(VRCoordinateSystem coordinateSystem);
  Promise<void> endSession();

  // Events
  attribute EventHandler onblur;
  attribute EventHandler onfocus;
  attribute EventHandler onresetpose;
};

A VRSession is the interface through which most interaction with a VRDevice happens. A page must request a session from the VRDevice, which may reject the request for a number of reasons. Once a session has been successfully acquired, it can be used to poll the device pose, query information about the user’s environment and, if it’s an exclusive session, define imagery to show on the VRDevice.

The UA, when possible, SHOULD NOT initialize device tracking or rendering capabilities until a session has been acquired. This is to prevent unwanted side effects of engaging the VR systems when they’re not actively being used, such as increased battery usage or related utility applications launching when the user first navigates to a page that only wants to test for the presence of VR hardware in order to advertise VR features. Not all VR platforms offer ways to detect the hardware’s presence without initializing tracking, however, so this is only a strong recommendation.

device

createParameters

depthNear

depthFar

baseLayer

getSourceProperties()

createFrameOfReference()

getDevicePose()

Document how to poll the device pose

endSession()

Document what happens when we end the session

onblur is an Event handler IDL attribute for the blur event type.

onfocus is an Event handler IDL attribute for the focus event type.

onresetpose is an Event handler IDL attribute for the resetpose event type.

Document effects when we blur the active session

Example of acquiring a session here.
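
The following non-normative sketch acquires an exclusive session in response to a user gesture, assuming vrDevice was obtained as in §3.2:
let vrSession = null;

// requestSession() may require a user gesture for exclusive sessions,
// so call it from an input event handler such as a button click.
function onEnterVRClick() {
  vrDevice.requestSession({ exclusive: true }).then(session => {
    vrSession = session;
    // The session can now be used to create layers, poll the device
    // pose, and commit frames to the VR compositor.
  }, err => {
    console.error("Could not acquire a VR session: " + err);
  });
}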

4.2. VRSessionCreateParameters

The VRSessionCreateParameters interface

dictionary VRSessionCreateParametersInit {
  boolean exclusive = true;
};

interface VRSessionCreateParameters {
  readonly attribute boolean exclusive;
};

The VRSessionCreateParametersInit dictionary provides a session description, indicating the desired capabilities of a session to be returned from requestSession().

exclusive

Document restrictions and capabilities of an exclusive session
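
Pending that documentation, the following non-normative sketch assumes that a non-exclusive session provides pose data without presenting imagery to the VRDevice:
// Request a session that does not take exclusive control of the
// device, e.g. to drive a "magic window" view on a 2D page.
vrDevice.requestSession({ exclusive: false }).then(session => {
  // session.getDevicePose() can now be polled for head tracking.
});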

4.3. The VR Compositor

This needs to be broken up a bit more and should more clearly describe things such as the frame lifecycle.

The UA MUST maintain a VR Compositor which handles layer composition and frame timing. The compositor MUST use an independent rendering context whose state is isolated from that of the WebGL contexts provided as VRCanvasLayer sources to prevent the page from corrupting the compositor state or reading back content from other pages.

There are no direct interfaces to the compositor, but applications may submit bitmaps to be composited via the layer system and observe the frame timing via calls to commit(). The compositor consists of two different loops, assumed to be running in separate threads or processes: the Frame Loop, which drives the page script, and the Render Loop, which continuously presents imagery provided by the Frame Loop to the VR device. The render loop maintains its own copy of the session’s layer list. Communication between the two loops is synchronized with a lock that limits access to the render loop’s layer list.

Both loops are started when a session is successfully created. The compositor’s render loop goes through the following steps:

  1. The layer lock is acquired.

  2. The render loop’s layer list images are composited and presented to the device.

  3. The layer lock is released.

  4. Notify the frame loop that a frame has been completed.

  5. Return to step 1.

The render loop MUST throttle its throughput to the refresh rate of the VR device. The exact point in the loop at which it is most effective to block may differ between platforms, so no prescription is made for when that should happen.

Upon session creation, the following steps are taken to start the frame loop:

  1. A new promise is created and set as the session’s current frame promise. The current frame promise is returned any time commit() is called.

  2. The sessionchange event is fired.

  3. The promise returned from requestSession() is resolved.

Then, the frame loop performs the following steps while the session is active:

  1. The render loop’s layer lock is acquired.

  2. Any dirty layers in the session’s layer list are copied to the render loop’s layer list.

  3. The render loop’s layer lock is released.

  4. Wait for the render loop to signal that a frame has been completed.

  5. The session’s current frame promise is set as the previous frame promise.

  6. A new promise is created and set as the session’s current frame promise.

  7. The previous frame promise is resolved.

  8. Once the promise has been resolved, return to step 1.

5. Pose

5.1. Matrices

WebVR provides various transforms in the form of matrices. WebVR matrices are always 4x4 and given as 16-element Float32Arrays in column-major order. They may be passed directly to WebGL’s uniformMatrix4fv function, used to create an equivalent DOMMatrix, or used with a variety of third party math libraries.

Translations specified by WebVR matrices are always given in meters.
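
For example, a view matrix from a VRDevicePose (§5.2) can be uploaded to a WebGL mat4 uniform without transposition. (In this non-normative sketch, gl is a WebGLRenderingContext, viewMatrixLocation is a uniform location from the page’s shader program, and pose is a VRDevicePose.)
// Column-major Float32Arrays match uniformMatrix4fv's expected layout,
// so the transpose parameter is false.
gl.uniformMatrix4fv(viewMatrixLocation, false, pose.leftViewMatrix);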

5.2. VRDevicePose

interface VRDevicePose {
  readonly attribute Float32Array leftProjectionMatrix;
  readonly attribute Float32Array leftViewMatrix;

  readonly attribute Float32Array rightProjectionMatrix;
  readonly attribute Float32Array rightViewMatrix;

  readonly attribute Float32Array poseModelMatrix;
};

A VRDevicePose describes the position and orientation of a VRDevice relative to the VRCoordinateSystem it was queried with. It also describes the view and projection matrices that should be used by the application to render a frame of a VR scene.

The leftProjectionMatrix and rightProjectionMatrix are matrices describing the projection to be used for the left and right eye’s rendering, respectively. It is highly recommended that applications use this matrix without modification. Failure to use these projection matrices when rendering may cause the presented frame to be distorted or badly aligned, resulting in varying degrees of user discomfort.

The leftViewMatrix and rightViewMatrix are matrices describing the view transform to be used for the left and right eye’s rendering, respectively. The matrices represent the inverse of the model matrix of the associated eye. This value may be passed directly to WebGL’s uniformMatrix4fv function to set the view matrix during rendering.

poseModelMatrix is a matrix describing the position and orientation of the VRDevice relative to the VRCoordinateSystem the pose was queried with.
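
The following non-normative sketch renders one view per eye using the pose’s matrices. (vrSession, frameOfRef, and the drawScene() helper are assumed to be defined by the page.)
let pose = vrSession.getDevicePose(frameOfRef);
if (pose) {
  // Use the device-provided projection matrices unmodified to avoid
  // distorted or misaligned output.
  drawScene(pose.leftProjectionMatrix, pose.leftViewMatrix);
  drawScene(pose.rightProjectionMatrix, pose.rightViewMatrix);
}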

6. Layers

6.1. VRLayer

interface VRLayer {};

A VRLayer defines a source of bitmap images and a description of how the image is to be rendered in the VRDevice. Initially only one type of layer, the VRCanvasLayer, is defined but future revisions of the spec may extend the available layer types.

6.2. VRCanvasLayer

typedef (HTMLCanvasElement or
         OffscreenCanvas) VRCanvasSource;

[Constructor(VRSession session, optional VRCanvasSource source)]
interface VRCanvasLayer : VRLayer {
  // Attributes
  attribute VRCanvasSource source;

  // Methods
  void setLeftBounds(double left, double bottom, double right, double top);
  FrozenArray<double> getLeftBounds();

  void setRightBounds(double left, double bottom, double right, double top);
  FrozenArray<double> getRightBounds();

  Promise<DOMHighResTimeStamp> commit();
};

The source defines a canvas whose contents will be presented by the VRDevice when commit() is called.

Upon being set as the source of a VRCanvasLayer the source’s context MAY be lost. Additionally the current backbuffer of the source’s context MAY be lost, even if the context was created with the preserveDrawingBuffer context creation attribute set to true.

Note: In order to make use of a canvas in the event of context loss, the application should handle the webglcontextlost event on the source canvas and prevent the event’s default behavior. The application should then listen for a webglcontextrestored event to be fired and reload any necessary graphical resources in response.

The layer describes two viewports: the Left Bounds and Right Bounds. Each bounds contains four values (left, bottom, right, top) defining the texture bounds within the source canvas to present to the related eye in UV space (0.0 - 1.0) with the bottom left corner of the canvas at (0, 0) and the top right corner of the canvas at (1, 1). If the left bound is greater than or equal to the right bound, or the bottom bound is greater than or equal to the top bound, the viewport is considered to be empty and no content from this layer will be shown on the related eye of the VRDevice.

The left bounds MUST default to [0.0, 0.0, 0.5, 1.0] and the right bounds MUST default to [0.5, 0.0, 1.0, 1.0].

Invoking the setLeftBounds() method with a given left, bottom, right, and top value sets the values of the left bounds left, bottom, right, and top respectively.

Invoking the setRightBounds() method with a given left, bottom, right, and top value sets the values of the right bounds left, bottom, right, and top respectively.

Invoking the getLeftBounds() method returns a FrozenArray of doubles containing the left bounds to left, bottom, right, and top values in that order.

Invoking the getRightBounds() method returns a FrozenArray of doubles containing the right bounds to left, bottom, right, and top values in that order.

commit() captures the source canvas’s bitmap and submits it to the VR compositor. Calling commit() has the same effect on the source canvas as any other operation that uses its bitmap, and canvases created without preserveDrawingBuffer set to true will be cleared.

Need an example snippet of a VRCanvasLayer render loop
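
In the meantime, the following non-normative sketch shows one plausible shape for such a loop, assuming vrSession, frameOfRef, a WebGL canvas glCanvas with context gl, and a drawScene() helper; the default layer bounds place the left eye in the left half of the canvas and the right eye in the right half:
let canvasLayer = new VRCanvasLayer(vrSession, glCanvas);
vrSession.baseLayer = canvasLayer;

function onDrawFrame() {
  let pose = vrSession.getDevicePose(frameOfRef);
  if (pose) {
    // Draw each eye's view into its half of the shared canvas.
    gl.viewport(0, 0, glCanvas.width * 0.5, glCanvas.height);
    drawScene(pose.leftProjectionMatrix, pose.leftViewMatrix);
    gl.viewport(glCanvas.width * 0.5, 0, glCanvas.width * 0.5, glCanvas.height);
    drawScene(pose.rightProjectionMatrix, pose.rightViewMatrix);
  }
  // commit() submits the canvas bitmap to the VR compositor; the
  // returned promise resolves when the next frame may begin.
  canvasLayer.commit().then(onDrawFrame);
}
onDrawFrame();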

6.3. VRSourceProperties

interface VRSourceProperties {
  readonly attribute double scale;
  readonly attribute unsigned long width;
  readonly attribute unsigned long height;
};

The VRSourceProperties interface describes the ideal dimensions of a layer image for a VRDevice, taking into account the native resolution of the device and the distortion required to counteract any distortion introduced by the device’s optics.

scale returns the scale provided to getSourceProperties() if one was given. If not, it returns the scale recommended by the system.

width and height describe the dimensions a layer image should ideally be in order to appear on the device at a scale:1 pixel ratio. The given dimensions MUST only account for a single eye.

Because a VRCanvasLayer contains content for both eyes, the source canvas should be twice the width prescribed by the VRSourceProperties in order to get the desired output.
let sourceProperties = vrSession.getSourceProperties();

// Each eye needs the prescribed width, so the shared canvas is doubled
// horizontally while keeping the prescribed height.
canvasLayer.source.width = sourceProperties.width * 2;
canvasLayer.source.height = sourceProperties.height;

7. Coordinate Systems

Pretty much nothing in this section is documented

7.1. VRCoordinateSystem

interface VRCoordinateSystem {
  Float32Array? getTransformTo(VRCoordinateSystem other);
};
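
While this section is not yet documented, getTransformTo() presumably returns a 4x4 matrix in the format described in §5.1 that maps coordinates from this system into the target system, or null if no relationship between the two can be determined. A non-normative sketch, assuming stageFrame and eyeLevelFrame are VRFrameOfReference instances created via createFrameOfReference():
// Matrix that re-expresses stage-space coordinates in eye-level space.
let transform = stageFrame.getTransformTo(eyeLevelFrame);
if (transform) {
  // Apply the transform to points or matrices as needed.
}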

7.2. VRFrameOfReference

enum VRFrameOfReferenceType {
  "headModel",
  "eyeLevel",
  "stage",
};

interface VRFrameOfReference : VRCoordinateSystem {
  readonly attribute VRStageBounds? bounds;
};

7.3. VRStageBounds

interface VRStageBounds {
  readonly attribute double minX;
  readonly attribute double maxX;
  readonly attribute double minZ;
  readonly attribute double maxZ;
};

The VRStageBounds interface describes a space known as a "Stage". The stage is a bounded, floor-relative play space within which the user can be expected to move safely. Other VR platforms sometimes refer to this concept as "room scale" or "standing VR".

minX, maxX, minZ, and maxZ define an axis-aligned rectangle on the floor relative to the origin of the VRCoordinateSystem the bounds were queried with. Bounds are specified in meters. The origin MAY be outside the bounds rectangle. minX MUST be less than maxX and minZ MUST be less than maxZ.

Note: Content should not require the user to move beyond these bounds; however, it is possible for the user to ignore the bounds resulting in position values outside of the rectangle they describe if their physical surroundings allow for it.
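
The following non-normative sketch reads the bounds to size content to the available play space, assuming stageFrame is a VRFrameOfReference of type "stage":
let bounds = stageFrame.bounds;
if (bounds) {
  // Bounds are expressed in meters on the floor plane.
  let width = bounds.maxX - bounds.minX;
  let depth = bounds.maxZ - bounds.minZ;
  console.log("Play space is " + width + "m by " + depth + "m");
}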

8. Events

8.1. VRDeviceEvent

[Constructor(DOMString type, VRDeviceEventInit eventInitDict)]
interface VRDeviceEvent : Event {
  readonly attribute VRDevice device;
};

dictionary VRDeviceEventInit : EventInit {
  required VRDevice device;
};

device The VRDevice associated with this event.

8.2. VRSessionEvent

[Constructor(DOMString type, VRSessionEventInit eventInitDict)]
interface VRSessionEvent : Event {
  readonly attribute VRSession session;
};

dictionary VRSessionEventInit : EventInit {
  required VRSession session;
};

session The VRSession associated with this event.

8.3. Event Types

The UA MUST provide the following new events. Registration for and firing of the events must follow the usual behavior of DOM4 Events.

The UA MAY dispatch a deviceconnect event on the VR object to indicate that a VRDevice has been connected. The event MUST be of type VRDeviceEvent.

The UA MAY dispatch a devicedisconnect event on the VR object to indicate that a VRDevice has been disconnected. The event MUST be of type VRDeviceEvent.

The UA MAY dispatch a navigate event on the VR object to indicate that the current page has been navigated to from a browsing context that was actively presenting VR content. The event’s session MUST be an instance of a VRSession with identical capabilities to the active session from the previous page, such that presenting to the session will feel like a seamless transition to the user. The event MUST be of type VRSessionEvent.

The UA MUST dispatch a sessionchange event on a VRDevice to indicate that the VRDevice has begun or ended a VRSession. This event should not fire on subsequent calls to requestSession() if the returned session is the same as the current activeSession. The event MUST be of type VRSessionEvent.

The UA MAY dispatch an activate event on a VRDevice to indicate that something has occurred which suggests the VRDevice should begin an exclusive session. For example, if the VRDevice is capable of detecting when the user has put it on, this event SHOULD fire when they do so. The event MUST be of type VRDeviceEvent.

The UA MAY dispatch a deactivate event on a VRDevice to indicate that something has occurred which suggests the VRDevice should end the active session. For example, if the VRDevice is capable of detecting when the user has taken it off, this event SHOULD fire when they do so. The event MUST be of type VRDeviceEvent.

A UA MAY dispatch a blur event on a VRSession to indicate that presentation to the VRSession by the page has been suspended by the UA, OS, or VR hardware. While a VRSession is blurred it remains active but it may have its frame production throttled. This is to prevent tracking while the user interacts with potentially sensitive UI. For example: The UA SHOULD blur the presenting application when the user is typing a URL into the browser with a virtual keyboard, otherwise the presenting page may be able to guess the URL the user is entering by tracking their head motions. The event MUST be of type VRSessionEvent.

A UA MAY dispatch a focus event on a VRSession to indicate that presentation to the VRSession by the page has resumed after being suspended. The event MUST be of type VRSessionEvent.

A UA MUST dispatch a resetpose event on a VRSession when the system resets the VRDevice's position or orientation. The event MUST be of type VRSessionEvent.
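
The following non-normative sketch wires up the activate, blur, and focus events described above. (The startPresenting() helper and the paused flag are assumed to be defined by the page, and vrSession is assumed to have been acquired as in earlier examples.)
// Begin an exclusive session when the device signals that the user
// has put it on.
vrDevice.addEventListener("activate", event => {
  event.device.requestSession({ exclusive: true }).then(startPresenting);
});

// Throttle work while blurred; the UA may throttle frame production
// to protect input on sensitive, trusted UI.
vrSession.addEventListener("blur", () => { paused = true; });
vrSession.addEventListener("focus", () => { paused = false; });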

9. Navigator Interface

Navigator interface is all alone. :( Does this belong somewhere else, or is this reasonable? This is about how WebUSB and WebBluetooth handle it.

partial interface Navigator {
  [SameObject] readonly attribute VR vr;
};

10. Acknowledgements

The following individuals have contributed to the design of the WebVR standard:

Conformance

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[PROMISES-GUIDE]
Domenic Denicola. Writing Promise-Using Specifications. 16 February 2016. Finding of the W3C TAG. URL: https://www.w3.org/2001/tag/doc/promises-guide
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119
[WebIDL]
Cameron McCormack; Boris Zbarsky; Tobie Langel. Web IDL. URL: https://www.w3.org/TR/WebIDL-1/

IDL Index

interface VR : EventTarget {
  // Methods
  Promise<sequence<VRDevice>> getDevices();

  // Events
  attribute EventHandler ondeviceconnect;
  attribute EventHandler ondevicedisconnect;
  attribute EventHandler onnavigate;
};

interface VRDevice : EventTarget {
  // Attributes
  readonly attribute DOMString deviceName;
  readonly attribute boolean isExternal;

  attribute VRSession? activeSession;

  // Methods
  Promise<boolean> supportsSession(VRSessionCreateParametersInit parameters);
  Promise<VRSession> requestSession(VRSessionCreateParametersInit parameters);

  // Events
  attribute EventHandler onsessionchange;
  attribute EventHandler onactivate;
  attribute EventHandler ondeactivate;
};

interface VRSession : EventTarget {
  // Attributes
  readonly attribute VRDevice device;
  readonly attribute VRSessionCreateParameters createParameters;

  attribute double depthNear;
  attribute double depthFar;
  attribute VRLayer baseLayer;

  // Methods
  VRSourceProperties getSourceProperties(optional double scale);
  Promise<VRFrameOfReference> createFrameOfReference(VRFrameOfReferenceType type);
  VRDevicePose? getDevicePose(VRCoordinateSystem coordinateSystem);
  Promise<void> endSession();

  // Events
  attribute EventHandler onblur;
  attribute EventHandler onfocus;
  attribute EventHandler onresetpose;
};

dictionary VRSessionCreateParametersInit {
  boolean exclusive = true;
};

interface VRSessionCreateParameters {
  readonly attribute boolean exclusive;
};

interface VRDevicePose {
  readonly attribute Float32Array leftProjectionMatrix;
  readonly attribute Float32Array leftViewMatrix;

  readonly attribute Float32Array rightProjectionMatrix;
  readonly attribute Float32Array rightViewMatrix;

  readonly attribute Float32Array poseModelMatrix;
};

interface VRLayer {};

typedef (HTMLCanvasElement or
         OffscreenCanvas) VRCanvasSource;

[Constructor(VRSession session, optional VRCanvasSource source)]
interface VRCanvasLayer : VRLayer {
  // Attributes
  attribute VRCanvasSource source;

  // Methods
  void setLeftBounds(double left, double bottom, double right, double top);
  FrozenArray<double> getLeftBounds();

  void setRightBounds(double left, double bottom, double right, double top);
  FrozenArray<double> getRightBounds();

  Promise<DOMHighResTimeStamp> commit();
};

interface VRSourceProperties {
  readonly attribute double scale;
  readonly attribute unsigned long width;
  readonly attribute unsigned long height;
};

interface VRCoordinateSystem {
  Float32Array? getTransformTo(VRCoordinateSystem other);
};

enum VRFrameOfReferenceType {
  "headModel",
  "eyeLevel",
  "stage",
};

interface VRFrameOfReference : VRCoordinateSystem {
  readonly attribute VRStageBounds? bounds;
};

interface VRStageBounds {
  readonly attribute double minX;
  readonly attribute double maxX;
  readonly attribute double minZ;
  readonly attribute double maxZ;
};

[Constructor(DOMString type, VRDeviceEventInit eventInitDict)]
interface VRDeviceEvent : Event {
  readonly attribute VRDevice device;
};

dictionary VRDeviceEventInit : EventInit {
  required VRDevice device;
};

[Constructor(DOMString type, VRSessionEventInit eventInitDict)]
interface VRSessionEvent : Event {
  readonly attribute VRSession session;
};

dictionary VRSessionEventInit : EventInit {
  required VRSession session;
};

partial interface Navigator {
  [SameObject] readonly attribute VR vr;
};

Issues Index

Discuss use of sensor activity as a possible fingerprinting vector.
Document how to poll the device pose
Document what happens when we end the session
Document effects when we blur the active session
Example of acquiring a session here.
Document restrictions and capabilities of an exclusive session
This needs to be broken up a bit more and should more clearly describe things such as the frame lifecycle.
Need an example snippet of a VRCanvasLayer render loop
Pretty much nothing in this section is documented
Navigator interface is all alone. :( Does this belong somewhere else, or is this reasonable? This is about how WebUSB and WebBluetooth handle it.