WebVR

Editor’s Draft

This version:
https://w3c.github.io/webvr/
Issue Tracking:
GitHub
Inline In Spec
Editors:
(Mozilla)
(Google)
(Mozilla)
(Mozilla)
(Microsoft)
(Microsoft)
Participate:
File an issue (open issues)
Mailing list archive
W3C’s #webvr IRC

Abstract

This specification describes support for accessing virtual reality (VR) devices, including sensors and head-mounted displays on the Web.

Status of this document

This specification was published by the WebVR Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.

Changes to this document may be tracked at https://github.com/w3c/webvr/commits/gh-pages.

If you wish to make comments regarding this document, please send them to public-webvr@w3.org (subscribe, archives).

DO NOT IMPLEMENT

The version of the WebVR API represented in this document is incomplete and changing rapidly. Do not implement it at this time.

1. Introduction

Hardware that enables Virtual Reality applications requires high-precision, low-latency interfaces to deliver an acceptable experience. Other interfaces, such as device orientation events, can be repurposed to surface VR input but doing so dilutes the interface’s original intent and often does not provide the precision necessary for high-quality VR. The WebVR API provides purpose-built interfaces to VR hardware to allow developers to build compelling, comfortable VR experiences.

2. Security, Privacy, and Comfort Considerations

The WebVR API provides powerful new features which bring with them several unique privacy, security, and comfort risks that user agents must take steps to mitigate.

2.1. Gaze Tracking

While the API does not yet expose eye tracking capabilities, a great deal can be inferred about where the user is looking by tracking the orientation of their head. This is especially true of VR devices with limited input capabilities, such as Google Cardboard, which frequently require users to control a "gaze cursor" with their head orientation. This means that it may be possible for a malicious page to infer what a user is typing on a virtual keyboard or how they are interacting with a virtual UI based solely on monitoring their head movements. For example, if not prevented from doing so, a page could estimate what URL a user is entering into the user agent’s URL bar.

To mitigate this risk the UA MUST blur the active session when the user is interacting with sensitive, trusted UI such as URL bars or system dialogs. Additionally, to prevent a malicious page from being able to monitor input on other pages, the UA MUST blur the active session on non-focused pages.

2.2. Trusted Environment

If the virtual environment does not consistently track the user’s head motion with low latency and at a high frame rate the user may become disoriented or physically ill. Since it is impossible to force pages to produce consistently performant and correct content, the UA MUST provide a tracked, trusted environment and a VR Compositor which runs asynchronously from page content. The compositor is responsible for compositing the trusted and untrusted content. If content is not performant, does not submit frames, or terminates unexpectedly, the UA should be able to continue presenting a responsive, trusted UI.

Additionally, page content has the ability to make users uncomfortable in ways not related to performance. Badly applied tracking, strobing colors, and content intended to offend, frighten, or intimidate are examples of content which may cause the user to want to quickly exit the VR experience. Removing the VR device in these cases may not always be a fast or practical option. To accommodate this the UA SHOULD provide users with an action, such as pressing a reserved hardware button or performing a gesture, that escapes out of WebVR content and displays the UA’s trusted UI.

When navigating between pages in VR, the UA should display trusted UI elements informing the user of the security information for the site being navigated to, such as the URL and encryption status, which would normally be presented by the 2D browser UI.

2.3. Context Isolation

The trusted UI must be drawn by an independent rendering context whose state is isolated from any rendering contexts used by the page. (For example, any WebGL rendering contexts.) This is to prevent the page from corrupting the state of the trusted UI’s context, which may prevent it from properly rendering a tracked environment. It also prevents the possibility of the page being able to capture imagery from the trusted UI, which could lead to private information being leaked.

Also, to prevent CORS-related vulnerabilities, each page will see a new instance of objects returned by the API, such as VRDevice and VRSession. Attributes, such as the context, set by one page must not be readable by another. Similarly, methods invoked on the API MUST NOT cause an observable state change on other pages. For example, no method will be exposed that enables a system-level orientation reset, as this could be called repeatedly by a malicious page to prevent other pages from tracking properly. The UA MUST, however, respect system-level orientation resets triggered by a user gesture or system menu.

2.4. Fingerprinting

Given that the API describes hardware available to the user and its capabilities, it will inevitably provide additional surface area for fingerprinting. While it’s impossible to avoid this completely, steps can be taken to mitigate the issue, such as ensuring that device names are reasonably generic and don’t contain unique IDs. (For example: "Daydream View" instead of "Daydream View, Pixel XL (Black) - Serial: 1234-56-7890")

Discuss use of sensor activity as a possible fingerprinting vector.

3. Device Enumeration

3.1. VR

[SecureContext, Exposed=Window] interface VR : EventTarget {
  // Methods
  Promise<sequence<VRDevice>> getDevices();

  // Events
  attribute EventHandler ondeviceconnect;
  attribute EventHandler ondevicedisconnect;
};

getDevices() Returns a Promise which resolves to a list of available VRDevices.

ondeviceconnect is an Event handler IDL attribute for the deviceconnect event type.

ondevicedisconnect is an Event handler IDL attribute for the devicedisconnect event type.

3.2. VRDevice

[SecureContext, Exposed=Window] interface VRDevice : EventTarget {
  // Attributes
  readonly attribute DOMString deviceName;
  readonly attribute boolean isExternal;

  // Methods
  Promise<void> supportsSession(optional VRSessionCreationOptions parameters);
  Promise<VRSession> requestSession(optional VRSessionCreationOptions parameters);

  // Events
  attribute EventHandler ondeactivate;
};

A VRDevice represents a physical unit of VR hardware that can present imagery to the user somehow. On desktop devices this may take the form of a headset peripheral; on mobile devices it may represent the device itself in conjunction with a viewer harness. It may also represent devices without the ability to present content in stereo but with advanced (6DoF) tracking capabilities.

deviceName returns a human-readable string describing the VRDevice.

isExternal returns true if the VRDevice hardware has a separate physical display from the system’s main display.

A VR headset connected to a desktop PC would typically set isExternal to true since the PC monitor would be considered the primary display. A mobile phone used in a VR harness or a standalone device would set isExternal to false.

When the supportsSession() method is invoked it MUST return a new Promise, promise, and run the following steps in parallel:

  1. If the requested session description is supported by the device, resolve promise.

  2. Otherwise, reject promise.
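
For example, a page might check for exclusive session support before advertising VR functionality. The following is a non-normative sketch; vrDevice is assumed to have been acquired via getDevices() as shown in the example below.

vrDevice.supportsSession({ exclusive: true }).then(() => {
  // Exclusive sessions are supported; an "Enter VR" button could be
  // shown here.
}).catch(() => {
  // Exclusive sessions are not supported by this device.
});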


ondeactivate is an Event handler IDL attribute for the deactivate event type.

The following code finds the first available VRDevice.
let vrDevice;

navigator.vr.getDevices().then((devices) => {
  // Use the first device in the array if one is available. If multiple
  // devices are present, you may want to present the user with a way to
  // select which device to use.
  if (devices.length > 0) {
    vrDevice = devices[0];
  }
});

4. Session

4.1. VRSession

[SecureContext, Exposed=Window] interface VRSession : EventTarget {
  // Attributes
  readonly attribute VRDevice device;
  readonly attribute boolean exclusive;
  readonly attribute VRPresentationContext outputContext;

  attribute double depthNear;
  attribute double depthFar;
  attribute VRLayer baseLayer;

  // Methods
  Promise<VRFrameOfReference> requestFrameOfReference(VRFrameOfReferenceType type, optional VRFrameOfReferenceOptions options);

  long requestFrame(VRFrameRequestCallback callback);
  void cancelFrame(long handle);

  Promise<void> end();

  // Events
  attribute EventHandler onblur;
  attribute EventHandler onfocus;
  attribute EventHandler onresetpose;
  attribute EventHandler onend;
};

A VRSession is the interface through which most interaction with a VRDevice happens. A page must request a session from the VRDevice, which may reject the request for a number of reasons. Once a session has been successfully acquired it can be used to poll the device pose, query information about the user’s environment, and, if it’s an exclusive session, define imagery to show on the VRDevice.

The UA, when possible, SHOULD NOT initialize device tracking or rendering capabilities until a session has been acquired. This is to prevent unwanted side effects of engaging the VR systems when they’re not actively being used, such as increased battery usage or related utility applications launching, when a page only wants to test for the presence of VR hardware in order to advertise VR features. Not all VR platforms offer ways to detect the hardware’s presence without initializing tracking, however, so this is only a strong recommendation.

device

exclusive

outputContext

depthNear

depthFar

baseLayer

requestFrameOfReference()

requestFrame()

cancelFrame()

Document how to poll the device pose

end()

Document what happens when we end the session

onblur is an Event handler IDL attribute for the blur event type.

Document effects when we blur the active session

onfocus is an Event handler IDL attribute for the focus event type.

onresetpose is an Event handler IDL attribute for the resetpose event type.

onend is an Event handler IDL attribute for the end event type.

Example of acquiring a session here.
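
A minimal, non-normative sketch of acquiring an exclusive session, assuming a previously acquired vrDevice and a hypothetical onSessionEnded application callback:

let vrSession;

vrDevice.requestSession({ exclusive: true }).then((session) => {
  vrSession = session;
  // onSessionEnded is a hypothetical application callback.
  vrSession.addEventListener('end', onSessionEnded);
});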

4.2. VRSessionCreationOptions

The VRSessionCreationOptions interface

dictionary VRSessionCreationOptions {
  boolean exclusive = false;
  VRPresentationContext outputContext;
};

The VRSessionCreationOptions dictionary provides a session description, indicating the desired properties of a session to be returned from requestSession().

exclusive

outputContext

Document restrictions and capabilities of an exclusive session

4.3. The VR Compositor

This needs to be broken up a bit more and should more clearly describe things such as the frame lifecycle.

The UA MUST maintain a VR Compositor which handles layer composition and frame timing. The compositor MUST use an independent rendering context whose state is isolated from that of the WebGL contexts provided as VRWebGLLayer sources to prevent the page from corrupting the compositor state or reading back content from other pages.

5. Frame Loop

5.1. VRFrameRequestCallback

callback VRFrameRequestCallback = void (VRPresentationFrame frame);

Each VRFrameRequestCallback object has a cancelled boolean flag. This flag is initially false and is not exposed by any interface.

5.2. VRPresentationFrame

[SecureContext, Exposed=Window] interface VRPresentationFrame {
  readonly attribute FrozenArray<VRView> views;

  VRDevicePose? getDevicePose(VRCoordinateSystem coordinateSystem);
};

A VRPresentationFrame provides all the values needed to render a single frame of a VR scene to the VRDevice's display. Applications can only acquire a VRPresentationFrame by calling requestFrame() on a VRSession with a VRFrameRequestCallback. When the callback is invoked it will be passed a VRPresentationFrame.

views

getDevicePose()
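
A non-normative sketch of a frame loop built with requestFrame(), assuming the vrSession from the earlier example, a WebGL context gl whose layer has been set as the session’s baseLayer, and a hypothetical drawScene() application function:

let frameOfRef;

// Request a frame of reference to use as the coordinate system for
// device poses.
vrSession.requestFrameOfReference('eyeLevel').then((frame) => {
  frameOfRef = frame;
  vrSession.requestFrame(onDrawFrame);
});

function onDrawFrame(vrFrame) {
  // Queue up the next frame before doing any rendering work.
  vrSession.requestFrame(onDrawFrame);

  let pose = vrFrame.getDevicePose(frameOfRef);
  if (pose) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, vrSession.baseLayer.framebuffer);

    for (let view of vrFrame.views) {
      let viewport = view.getViewport(vrSession.baseLayer);
      if (viewport) {
        gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
        // drawScene() is a hypothetical application function.
        drawScene(view.projectionMatrix, pose.getViewMatrix(view));
      }
    }
  }
}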

6. Views

6.1. VRView

[SecureContext, Exposed=Window] interface VRView {
  readonly attribute VREye eye;
  readonly attribute Float32Array projectionMatrix;

  VRViewport? getViewport(VRLayer layer);
};

enum VREye {
  "left",
  "right"
};

A VRView describes a single view into a VR scene. It provides several values directly, and acts as a key to query view-specific values from other interfaces.

eye describes the eye that this view is expected to be shown to. This value is primarily to ensure that prerendered stereo content can present the correct portion of the content to the correct eye.

The projectionMatrix is a matrix describing the projection to be used for the view’s rendering. It is highly recommended that applications use this matrix without modification. Failure to use the provided projection matrices when rendering may cause the presented frame to be distorted or badly aligned, resulting in varying degrees of user discomfort.

getViewport()

6.2. VRViewport

[SecureContext, Exposed=Window] interface VRViewport {
  readonly attribute long x;
  readonly attribute long y;
  readonly attribute long width;
  readonly attribute long height;
};

x

y

width

height

7. Pose

7.1. Matrices

WebVR provides various transforms in the form of matrices. WebVR matrices are always 4x4 and given as 16-element Float32Arrays in column-major order. They may be passed directly to WebGL’s uniformMatrix4fv function, used to create an equivalent DOMMatrix, or used with a variety of third party math libraries.

Translations specified by WebVR matrices are always given in meters.
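
For example, a view’s projection matrix can be passed directly to a shader uniform. This is a non-normative sketch assuming a WebGL context gl, a uniform location projectionLocation, and a view from the current frame:

// The transpose argument is false because WebVR matrices are already
// column major, as WebGL expects.
gl.uniformMatrix4fv(projectionLocation, false, view.projectionMatrix);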

7.2. VRDevicePose

[SecureContext, Exposed=Window] interface VRDevicePose {
  readonly attribute Float32Array poseModelMatrix;

  Float32Array getViewMatrix(VRView view);
};

A VRDevicePose describes the position and orientation of a VRDevice relative to the VRCoordinateSystem it was queried with. It also describes the view and projection matrices that should be used by the application to render a frame of a VR scene.

poseModelMatrix

The getViewMatrix() method returns a matrix describing the view transform to be used when rendering the passed VRView. The matrix is the inverse of the model matrix of the associated viewpoint.

8. Layers

8.1. VRLayer

[SecureContext, Exposed=Window] interface VRLayer {};

A VRLayer defines a source of bitmap images and a description of how those images are to be rendered on the VRDevice. Initially only one type of layer, the VRWebGLLayer, is defined, but future revisions of the spec may extend the available layer types.

8.2. VRWebGLLayer

typedef (WebGLRenderingContext or
         WebGL2RenderingContext) VRWebGLRenderingContext;

[SecureContext, Exposed=Window, Constructor(VRSession session,
             VRWebGLRenderingContext context,
             optional VRWebGLLayerInit layerInit)]
interface VRWebGLLayer : VRLayer {
  // Attributes
  readonly attribute VRWebGLRenderingContext context;

  readonly attribute boolean antialias;
  readonly attribute boolean depth;
  readonly attribute boolean stencil;
  readonly attribute boolean alpha;
  readonly attribute boolean multiview;

  readonly attribute WebGLFramebuffer framebuffer;
  readonly attribute unsigned long framebufferWidth;
  readonly attribute unsigned long framebufferHeight;

  // Methods
  void requestViewportScaling(double viewportScaleFactor);
};

The context defines the WebGL or WebGL 2 context that is rendering the visuals for this layer.


antialias

depth

stencil

alpha

multiview

framebuffer

framebufferWidth

framebufferHeight

requestViewportScaling()


Need an example snippet of setting up and using a VRWebGLLayer.
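
A non-normative sketch of one possible setup, assuming the vrDevice and vrSession from earlier examples:

// Create a WebGL context that is compatible with the VRDevice.
let glCanvas = document.createElement('canvas');
let gl = glCanvas.getContext('webgl', { compatibleVRDevice: vrDevice });

// Create a layer backed by the context and use it as the session's
// source of imagery.
vrSession.baseLayer = new VRWebGLLayer(vrSession, gl);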

8.3. VRWebGLLayerInit

dictionary VRWebGLLayerInit {
  boolean antialias = true;
  boolean depth = false;
  boolean stencil = false;
  boolean alpha = true;
  boolean multiview = false;
  double framebufferScaleFactor;
};

The VRWebGLLayerInit dictionary indicates the desired properties of a VRWebGLLayer's framebuffer.

9. WebGL Context Compatibility

partial dictionary WebGLContextAttributes {
    VRDevice compatibleVRDevice = null;
};

partial interface WebGLRenderingContextBase {
    Promise<void> setCompatibleVRDevice(VRDevice device);
};

Describe context compatibility requirements

setCompatibleVRDevice()
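
For a context created without the compatibleVRDevice context creation attribute, compatibility might be requested after the fact. A non-normative sketch, assuming gl, vrDevice, and vrSession from earlier examples:

gl.setCompatibleVRDevice(vrDevice).then(() => {
  // The context can now be used to create a VRWebGLLayer for this device.
  vrSession.baseLayer = new VRWebGLLayer(vrSession, gl);
});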

10. Canvas Rendering Context

[SecureContext, Exposed=Window] interface VRPresentationContext {
  readonly attribute HTMLCanvasElement canvas;
};

canvas

11. Coordinate Systems

Pretty much nothing in this section is documented

11.1. VRCoordinateSystem

[SecureContext, Exposed=Window] interface VRCoordinateSystem : EventTarget {
  Float32Array? getTransformTo(VRCoordinateSystem other);
};

getTransformTo()
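
A non-normative sketch of its use, assuming two frames of reference previously acquired with requestFrameOfReference():

// getTransformTo() may return null if no relationship between the two
// coordinate systems can be determined.
let transform = stageFrameOfRef.getTransformTo(eyeLevelFrameOfRef);
if (transform) {
  // transform is a 16-element, column-major Float32Array describing the
  // transform from the stage coordinate system to the eye-level one.
}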

11.2. VRFrameOfReference

enum VRFrameOfReferenceType {
  "headModel",
  "eyeLevel",
  "stage",
};

dictionary VRFrameOfReferenceOptions {
  boolean disableStageEmulation = false;
  double stageEmulationHeight = 0.0;
};

[SecureContext, Exposed=Window] interface VRFrameOfReference : VRCoordinateSystem {
  readonly attribute VRStageBounds? bounds;
  readonly attribute double emulatedHeight;

  attribute EventHandler onboundschange;
};

bounds

emulatedHeight

onboundschange

11.3. VRStageBounds

[SecureContext, Exposed=Window] interface VRStageBounds {
  readonly attribute FrozenArray<VRStageBoundsPoint> geometry;
};

The VRStageBounds interface describes a space known as a "Stage". The stage is a bounded, floor-relative play space within which the user can be expected to move safely. Other VR platforms sometimes refer to this concept as "room scale" or "standing VR".

A polygonal boundary is given by the geometry point array, which represents a loop of points at the edges of the safe space. The points MUST be given in a clockwise order as viewed from above, looking towards the negative end of the Y axis. The bounds are assumed to originate at the floor (Y == 0) and extend infinitely high. The shape described is not guaranteed to be convex. The values reported are relative to the stage origin, but the origin itself is not guaranteed to lie within the bounds.

Note: Content should not require the user to move beyond these bounds; however, it is possible for the user to ignore the bounds, resulting in position values outside of the polygon they describe if their physical surroundings allow for it.
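
A non-normative sketch of reading the bounds, assuming a frameOfRef acquired with the "stage" frame of reference type:

if (frameOfRef.bounds) {
  // Iterate the boundary loop, for example to render a visible
  // boundary on the floor.
  for (let point of frameOfRef.bounds.geometry) {
    console.log(`Bounds point at (${point.x}, 0, ${point.z})`);
  }
}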

11.4. VRStageBoundsPoint

[SecureContext, Exposed=Window] interface VRStageBoundsPoint {
  readonly attribute double x;
  readonly attribute double z;
};

The x and z values of a VRStageBoundsPoint describe the offset from the stage origin along the X and Z axes respectively of the point, given in meters.

12. Events

12.1. VRDeviceEvent

[SecureContext, Exposed=Window, Constructor(DOMString type, VRDeviceEventInit eventInitDict)]
interface VRDeviceEvent : Event {
  readonly attribute VRDevice device;
};

dictionary VRDeviceEventInit : EventInit {
  required VRDevice device;
};

device The VRDevice associated with this event.

12.2. VRSessionEvent

[SecureContext, Exposed=Window, Constructor(DOMString type, VRSessionEventInit eventInitDict)]
interface VRSessionEvent : Event {
  readonly attribute VRSession session;
};

dictionary VRSessionEventInit : EventInit {
  required VRSession session;
};

session The VRSession associated with this event.

12.3. VRCoordinateSystemEvent

[SecureContext, Exposed=Window, Constructor(DOMString type, VRCoordinateSystemEventInit eventInitDict)]
interface VRCoordinateSystemEvent : Event {
  readonly attribute VRCoordinateSystem coordinateSystem;
};

dictionary VRCoordinateSystemEventInit : EventInit {
  required VRCoordinateSystem coordinateSystem;
};

coordinateSystem The VRCoordinateSystem associated with this event.

12.4. Event Types

The UA MUST provide the following new events. Registration for and firing of the events must follow the usual behavior of DOM4 Events.

The UA MAY fire a deviceconnect event on the VR object to indicate that a VRDevice has been connected. The event MUST be of type VRDeviceEvent.

The UA MAY dispatch a devicedisconnect event on the VR object to indicate that a VRDevice has been disconnected. The event MUST be of type VRDeviceEvent.

The UA MAY dispatch a deactivate event on a VRDevice to indicate that something has occurred which suggests the VRDevice should end the active session. For example, if the VRDevice is capable of detecting when the user has taken it off, this event SHOULD fire when they do so. The event MUST be of type VRDeviceEvent.

A UA MAY dispatch a blur event on a VRSession to indicate that presentation to the VRSession by the page has been suspended by the UA, OS, or VR hardware. While a VRSession is blurred it remains active but it may have its frame production throttled. This is to prevent tracking while the user interacts with potentially sensitive UI. For example: The UA SHOULD blur the presenting application when the user is typing a URL into the browser with a virtual keyboard, otherwise the presenting page may be able to guess the URL the user is entering by tracking their head motions. The event MUST be of type VRSessionEvent.

A UA MAY dispatch a focus event on a VRSession to indicate that presentation to the VRSession by the page has resumed after being suspended. The event MUST be of type VRSessionEvent.

A UA MUST dispatch a resetpose event on a VRSession when the system resets the VRDevice's position or orientation. The event MUST be of type VRSessionEvent.

A UA MUST dispatch an end event on a VRSession when the session ends, whether ended by the application or the UA. The event MUST be of type VRSessionEvent.

A UA MUST dispatch a boundschange event on a VRFrameOfReference when the stage bounds change. This includes changes to the geometry points or the bounds attribute changing to or from null. The event MUST be of type VRCoordinateSystemEvent.
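
For example, a page might register for several of these events as follows (a non-normative sketch assuming the vrDevice and vrSession from earlier examples):

navigator.vr.addEventListener('deviceconnect', (event) => {
  console.log(`Device connected: ${event.device.deviceName}`);
});

vrDevice.addEventListener('deactivate', (event) => {
  // The device has indicated the session should end, for example
  // because the user removed the headset.
  if (vrSession) {
    vrSession.end();
  }
});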

Navigator interface is all alone. :( Does this belong somewhere else, or is this reasonable? This is about how WebUSB and WebBluetooth handle it.

partial interface Navigator {
  [SameObject] readonly attribute VR vr;
};

vr

14. Acknowledgements

The following individuals have contributed to the design of the WebVR specification:

Conformance

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

References

Normative References

[DOM]
Anne van Kesteren. DOM Standard. Living Standard. URL: https://dom.spec.whatwg.org/
[HTML]
Anne van Kesteren; et al. HTML Standard. Living Standard. URL: https://html.spec.whatwg.org/multipage/
[PROMISES-GUIDE]
Domenic Denicola. Writing Promise-Using Specifications. 16 February 2016. Finding of the W3C TAG. URL: https://www.w3.org/2001/tag/doc/promises-guide
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. March 1997. Best Current Practice. URL: https://tools.ietf.org/html/rfc2119
[WebIDL]
Cameron McCormack; Boris Zbarsky; Tobie Langel. Web IDL. 15 December 2016. ED. URL: https://heycam.github.io/webidl/

IDL Index

[SecureContext, Exposed=Window] interface VR : EventTarget {
  // Methods
  Promise<sequence<VRDevice>> getDevices();

  // Events
  attribute EventHandler ondeviceconnect;
  attribute EventHandler ondevicedisconnect;
};

[SecureContext, Exposed=Window] interface VRDevice : EventTarget {
  // Attributes
  readonly attribute DOMString deviceName;
  readonly attribute boolean isExternal;

  // Methods
  Promise<void> supportsSession(optional VRSessionCreationOptions parameters);
  Promise<VRSession> requestSession(optional VRSessionCreationOptions parameters);

  // Events
  attribute EventHandler ondeactivate;
};

[SecureContext, Exposed=Window] interface VRSession : EventTarget {
  // Attributes
  readonly attribute VRDevice device;
  readonly attribute boolean exclusive;
  readonly attribute VRPresentationContext outputContext;

  attribute double depthNear;
  attribute double depthFar;
  attribute VRLayer baseLayer;

  // Methods
  Promise<VRFrameOfReference> requestFrameOfReference(VRFrameOfReferenceType type, optional VRFrameOfReferenceOptions options);

  long requestFrame(VRFrameRequestCallback callback);
  void cancelFrame(long handle);

  Promise<void> end();

  // Events
  attribute EventHandler onblur;
  attribute EventHandler onfocus;
  attribute EventHandler onresetpose;
  attribute EventHandler onend;
};

dictionary VRSessionCreationOptions {
  boolean exclusive = false;
  VRPresentationContext outputContext;
};

callback VRFrameRequestCallback = void (VRPresentationFrame frame);

[SecureContext, Exposed=Window] interface VRPresentationFrame {
  readonly attribute FrozenArray<VRView> views;

  VRDevicePose? getDevicePose(VRCoordinateSystem coordinateSystem);
};

[SecureContext, Exposed=Window] interface VRView {
  readonly attribute VREye eye;
  readonly attribute Float32Array projectionMatrix;

  VRViewport? getViewport(VRLayer layer);
};

enum VREye {
  "left",
  "right"
};

[SecureContext, Exposed=Window] interface VRViewport {
  readonly attribute long x;
  readonly attribute long y;
  readonly attribute long width;
  readonly attribute long height;
};

[SecureContext, Exposed=Window] interface VRDevicePose {
  readonly attribute Float32Array poseModelMatrix;

  Float32Array getViewMatrix(VRView view);
};

[SecureContext, Exposed=Window] interface VRLayer {};

typedef (WebGLRenderingContext or
         WebGL2RenderingContext) VRWebGLRenderingContext;

[SecureContext, Exposed=Window, Constructor(VRSession session,
             VRWebGLRenderingContext context,
             optional VRWebGLLayerInit layerInit)]
interface VRWebGLLayer : VRLayer {
  // Attributes
  readonly attribute VRWebGLRenderingContext context;

  readonly attribute boolean antialias;
  readonly attribute boolean depth;
  readonly attribute boolean stencil;
  readonly attribute boolean alpha;
  readonly attribute boolean multiview;

  readonly attribute WebGLFramebuffer framebuffer;
  readonly attribute unsigned long framebufferWidth;
  readonly attribute unsigned long framebufferHeight;

  // Methods
  void requestViewportScaling(double viewportScaleFactor);
};

dictionary VRWebGLLayerInit {
  boolean antialias = true;
  boolean depth = false;
  boolean stencil = false;
  boolean alpha = true;
  boolean multiview = false;
  double framebufferScaleFactor;
};

partial dictionary WebGLContextAttributes {
    VRDevice compatibleVRDevice = null;
};

partial interface WebGLRenderingContextBase {
    Promise<void> setCompatibleVRDevice(VRDevice device);
};

[SecureContext, Exposed=Window] interface VRPresentationContext {
  readonly attribute HTMLCanvasElement canvas;
};

[SecureContext, Exposed=Window] interface VRCoordinateSystem : EventTarget {
  Float32Array? getTransformTo(VRCoordinateSystem other);
};

enum VRFrameOfReferenceType {
  "headModel",
  "eyeLevel",
  "stage",
};

dictionary VRFrameOfReferenceOptions {
  boolean disableStageEmulation = false;
  double stageEmulationHeight = 0.0;
};

[SecureContext, Exposed=Window] interface VRFrameOfReference : VRCoordinateSystem {
  readonly attribute VRStageBounds? bounds;
  readonly attribute double emulatedHeight;

  attribute EventHandler onboundschange;
};

[SecureContext, Exposed=Window] interface VRStageBounds {
  readonly attribute FrozenArray<VRStageBoundsPoint> geometry;
};

[SecureContext, Exposed=Window] interface VRStageBoundsPoint {
  readonly attribute double x;
  readonly attribute double z;
};

[SecureContext, Exposed=Window, Constructor(DOMString type, VRDeviceEventInit eventInitDict)]
interface VRDeviceEvent : Event {
  readonly attribute VRDevice device;
};

dictionary VRDeviceEventInit : EventInit {
  required VRDevice device;
};

[SecureContext, Exposed=Window, Constructor(DOMString type, VRSessionEventInit eventInitDict)]
interface VRSessionEvent : Event {
  readonly attribute VRSession session;
};

dictionary VRSessionEventInit : EventInit {
  required VRSession session;
};

[SecureContext, Exposed=Window, Constructor(DOMString type, VRCoordinateSystemEventInit eventInitDict)]
interface VRCoordinateSystemEvent : Event {
  readonly attribute VRCoordinateSystem coordinateSystem;
};

dictionary VRCoordinateSystemEventInit : EventInit {
  required VRCoordinateSystem coordinateSystem;
};

partial interface Navigator {
  [SameObject] readonly attribute VR vr;
};

Issues Index

Discuss use of sensor activity as a possible fingerprinting vector.
Document how to poll the device pose
Document what happens when we end the session
Document effects when we blur the active session
Example of acquiring a session here.
Document restrictions and capabilities of an exclusive session
This needs to be broken up a bit more and should more clearly describe things such as the frame lifecycle.
Need an example snippet of setting up and using a VRWebGLLayer.
Describe context compatibility requirements
Pretty much nothing in this section is documented
Navigator interface is all alone. :( Does this belong somewhere else, or is this reasonable? This is about how WebUSB and WebBluetooth handle it.