Copyright © 2012-2014 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and permissive document license rules apply.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This document was published by the Web Real-Time Communication Working Group and the Device APIs Working Group as an Editor's Draft.
Comments regarding this document are welcome. Please send them to public-media-capture@w3.org (archives).
Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by groups operating under the W3C Patent Policy. W3C maintains a public list of any patent disclosures (Web Real-Time Communication Working Group) and a public list of any patent disclosures (Device APIs Working Group) made in connection with the deliverables of each group; these pages also include instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 1 March 2019 W3C Process Document.
This document specifies the takePhoto() and grabFrame() methods, and corresponding camera settings, for use with MediaStreams as defined in Media Capture and Streams [[!GETUSERMEDIA]].
The API defined in this document takes a valid MediaStream and returns an encoded image in the form of a Blob (as defined in [FILE-API]). The image is provided by the capture device that provides the MediaStream. Picture-specific settings can optionally be provided as arguments to be applied to the image being captured.
The User Agent must support Promises in order to implement the Image Capture API. Any Promise object is assumed to have a resolver object, with resolve() and reject() methods, associated with it.
The MediaStreamTrack passed to the constructor must have its kind attribute set to "video"; otherwise an exception will be thrown.
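As a non-normative sketch, the constructor's kind requirement can be modeled as follows; checkTrackKind is a hypothetical helper (not part of the API), and a plain object stands in for a MediaStreamTrack:

```javascript
// Hypothetical helper mirroring the constructor's requirement that the
// track's kind attribute be "video"; not part of the Image Capture API.
function checkTrackKind(track) {
  if (track.kind !== 'video') {
    // The spec requires an exception for non-video tracks.
    throw new TypeError('ImageCapture requires a MediaStreamTrack whose kind is "video"');
  }
  return track;
}
```

In a browser, `new ImageCapture(track)` performs this check itself; an audio track would cause the constructor to throw.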
When the setOptions() method of an ImageCapture object is invoked, a valid PhotoSettings object must be passed in the method to the ImageCapture object. In addition, a new Promise object is returned. If the UA can successfully apply the settings, then the UA must return a SettingsChangeEvent event to the resolver object's resolve() method. If the UA cannot successfully apply the settings, then the UA must return an ImageCaptureErrorEvent to the resolver object's reject() method whose errorDescription is set to OPTIONS_ERROR.

When the takePhoto() method of an ImageCapture object is invoked, a new Promise object is returned.
If the readyState of the MediaStreamTrack provided in the constructor is not "live", the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to INVALID_TRACK. If the UA is unable to execute the takePhoto() method for any other reason (for example, upon invocation of multiple takePhoto() method calls in rapid succession), then the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to PHOTO_ERROR. Otherwise it must queue a task, using the DOM manipulation task source, that runs the following steps:
1. Gather data from the MediaStreamTrack into a Blob containing a single still image. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings, take the photo, and then resume streaming. In this case, the stopping and restarting of streaming should cause mute and unmute events to fire on the Track in question.
2. Return a BlobEvent event containing the Blob to the resolver object's resolve() method.

When the grabFrame()
method of an ImageCapture
object is invoked, a new Promise object is returned. If the readyState
of the MediaStreamTrack provided in the constructor is not "live", the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to INVALID_TRACK. If the UA is unable to execute the grabFrame() method for any other reason, then the UA must return an ImageCaptureErrorEvent event to the resolver object's reject() method with a new ImageCaptureError object whose errorDescription is set to FRAME_ERROR. Otherwise it must
queue a task, using the DOM manipulation task source, that runs the following steps:
1. Gather data from the MediaStreamTrack into an ImageData object (as defined in [CANVAS-2D]) containing a single still frame in RGBA format. The width and height of the ImageData object are derived from the constraints of the MediaStreamTrack. The method of doing this will depend on the underlying device. Devices may temporarily stop streaming data, reconfigure themselves with the appropriate photo settings (which may be a subset of the settings provided in photoCapabilities), take the photo (and convert it to an ImageData object), and then resume streaming. In this case, the stopping and restarting of streaming should cause mute and unmute events to fire on the Track in question.
2. Return a FrameGrabEvent event containing the ImageData to the resolver object's resolve() method. (Note: grabFrame() returns data only once upon being invoked.)

FrameGrabEvent
The imageData attribute returns an ImageData object whose width and height attributes indicate the dimensions of the captured frame.

FrameGrabEventInit Dictionary

An ImageData object containing the data to deliver via this event.

ImageCaptureErrorEvent
Returns an ImageCaptureError object whose errorDescription attribute indicates the type of error that occurred.

ImageCaptureErrorEventInit Dictionary

An ImageCaptureError object containing the data to deliver via this event.

BlobEvent
The data attribute returns a Blob object whose type attribute indicates the encoding of the blob data. An implementation must return a Blob in a format that is capable of being viewed in an HTML <img> tag.

BlobEventInit Dictionary

A Blob object containing the data to deliver via this event.

SettingsChangeEvent
Returns a PhotoSettings object whose type attribute indicates the current photo settings.

SettingsChangeEventInit Dictionary

A PhotoSettings object containing the data to deliver via this event.

ImageCaptureError
The ImageCaptureError object is passed to an onerror event handler of an ImageCapture object if an error occurred when the object was created or any of its methods were invoked. The errorDescription attribute returns the appropriate DOMString for the error event. Acceptable values are FRAME_ERROR, OPTIONS_ERROR, PHOTO_ERROR, and ERROR_UNKNOWN.

MediaSettingsRange
MediaSettingsItem

The MediaSettingsItem interface allows a single setting to be managed.
PhotoCapabilities

The photoCapabilities attribute of the ImageCapture object provides the photo-specific settings options and current settings values. The following definitions are assumed for individual settings and are provided for information purposes:
| Mode | Kelvin range |
| --- | --- |
| incandescent | 2500-3500 |
| fluorescent | 4000-5000 |
| warm-fluorescent | 5000-5500 |
| daylight | 5500-6500 |
| cloudy-daylight | 6500-8000 |
| twilight | 8000-9000 |
| shade | 9000-10000 |
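For illustration, the table above can be captured in a small lookup; the kelvinRangeForMode helper and its return shape are assumptions for this sketch, not part of the API:

```javascript
// Informative lookup of the white balance modes and their Kelvin ranges
// from the table above; the helper name and shape are illustrative only.
var WHITE_BALANCE_KELVIN = {
  'incandescent':     [2500, 3500],
  'fluorescent':      [4000, 5000],
  'warm-fluorescent': [5000, 5500],
  'daylight':         [5500, 6500],
  'cloudy-daylight':  [6500, 8000],
  'twilight':         [8000, 9000],
  'shade':            [9000, 10000]
};

function kelvinRangeForMode(mode) {
  // Returns [min, max] in Kelvin, or null for an unknown mode.
  return WHITE_BALANCE_KELVIN[mode] || null;
}
```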
WhiteBalanceModeEnum

ExposureMode

FillLightMode

The auto setting does not guarantee that the flash will fire for the takePhoto() or getFrame() methods; use flash to guarantee firing of the flash for the takePhoto() or getFrame() methods. The on setting keeps the fill light enabled while the MediaStreamTrack is active.

FocusMode
PhotoSettings

The PhotoSettings object is optionally passed into the ImageCapture.setOptions() method in order to modify capture device settings specific to still imagery. Each of the attributes in this object is optional.
Among its attributes are values of the ExposureMode, FillLightMode, and FocusMode types.
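As a sketch of what such a dictionary might look like (only redEyeReductionSetting is confirmed by the examples below; the other member names are assumptions):

```javascript
// Hypothetical PhotoSettings dictionary for setOptions(); every member is
// optional. Only redEyeReductionSetting appears in this document's examples;
// the other member names are assumed for illustration.
var settings = {
  redEyeReductionSetting: true,
  fillLightMode: 'auto',        // assumed member carrying a FillLightMode value
  whiteBalanceMode: 'daylight'  // assumed member carrying a WhiteBalanceMode value
};
// In a browser: captureDevice.setOptions(settings).then(onSuccess, onError);
```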
Example: grab a frame and modify its alpha channel.

```javascript
navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    captureDevice.grabFrame().then(processFrame);
  }
}

function processFrame(e) {
  var imgData = e.imageData;
  var width = imgData.width;
  var height = imgData.height;
  // Set all alpha values to medium opacity.
  for (var j = 3; j < imgData.data.length; j += 4) {
    imgData.data[j] = 128;
  }
  // Create a new ImageData object with the modified pixel values.
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  var newImg = ctx.createImageData(width, height);
  for (var i = 0; i < imgData.data.length; i++) {
    newImg.data[i] = imgData.data[i];
  }
  // ... and do something with the modified image ...
}

function failedToGetMedia() {
  console.log('Stream failure');
}
```
Example: take a photo with red eye reduction, if supported.

```javascript
navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    if (captureDevice.photoCapabilities.redEyeReduction) {
      captureDevice.setOptions({redEyeReductionSetting: true})
        .then(function () {
          return captureDevice.takePhoto();
        })
        .then(showPicture, function (error) {
          alert('Failed to take photo');
        });
    } else {
      console.log('No red eye reduction');
    }
  }
}

function showPicture(e) {
  var img = document.querySelector('img');
  img.src = URL.createObjectURL(e.data);
}

function failedToGetMedia() {
  console.log('Stream failure');
}
```
Example: grab a frame every second and paint it to a canvas.

```html
<html>
<body>
<p><canvas id="frame"></canvas></p>
<button onclick="stopFunction()">Stop frame grab</button>
<script>
var canvas = document.getElementById('frame');
var frameVar;

navigator.getUserMedia({video: true}, gotMedia, failedToGetMedia);

function gotMedia(mediastream) {
  // Extract video track.
  var videoDevice = mediastream.getVideoTracks()[0];
  // Check if this device supports a picture mode...
  var captureDevice = new ImageCapture(videoDevice);
  if (captureDevice) {
    // Grab a frame every second.
    frameVar = setInterval(function () {
      captureDevice.grabFrame().then(processFrame);
    }, 1000);
  }
}

function processFrame(e) {
  var imgData = e.imageData;
  canvas.width = imgData.width;
  canvas.height = imgData.height;
  // ImageData is drawn with putImageData(), not drawImage().
  canvas.getContext('2d').putImageData(imgData, 0, 0);
}

function stopFunction() {
  clearInterval(frameVar);
}

function failedToGetMedia() {
  console.log('Stream failure');
}
</script>
</body>
</html>
```
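A reject handler can distinguish the errorDescription values defined above. The describeCaptureError helper below is a hypothetical illustration (the error-object shape follows the prose; it is not part of the API):

```javascript
// Hypothetical helper mapping the spec's errorDescription values to
// human-readable messages; illustrative only, not part of the API.
function describeCaptureError(error) {
  switch (error.errorDescription) {
    case 'INVALID_TRACK':
      return 'The MediaStreamTrack provided in the constructor is not "live".';
    case 'OPTIONS_ERROR':
      return 'setOptions() could not apply the requested PhotoSettings.';
    case 'PHOTO_ERROR':
      return 'takePhoto() could not capture a still image.';
    case 'FRAME_ERROR':
      return 'grabFrame() could not capture a frame.';
    default:
      return 'Unknown image capture error (ERROR_UNKNOWN).';
  }
}

// Usage in a reject handler:
// captureDevice.takePhoto().then(showPicture, function (e) {
//   console.log(describeCaptureError(e));
// });
```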