This specification defines WebVMT, the Web Video Map Tracks format, an enabling technology whose main use is to mark up external metadata track resources in connection with the HTML <track> element. WebVMT files provide map presentation, annotation and interpolation synchronized with web media content and, more generally, with any form of data that is time-aligned with audio or video content, including data from location-aware devices such as dashcams, drones and smartphones.
This document is a Note; it has not been widely reviewed and should be considered experimental. It may serve as the basis for an upcoming W3C Recommendation.
This document is an explanatory specification, intended to communicate and develop the draft WebVMT format through discussion with user communities.
This section details example scenarios in which WebVMT can add significant value with identified benefits.
A missing person is reported to the rescue services, who deploy a drone to search inaccessible areas of coastline or moorland for their target. The drone relays back a live video stream from its camera and geolocation data from its GPS receiver to a remote human operator who is piloting it.
As the search continues, the operator spots a target on the video feed and can instantly call up an electronic map, synchronized to the video, which has been automatically following the drone’s position and plotting its ground track. The display gives the operator immediate context for the video, and allows them to override the automatic map control and zoom in to pinpoint the target’s precise location from the features visible in the video and on the map/satellite view. They mark the location and then zoom out to assess the surrounding terrain and advise the recovery team of the best approach to the target. For example, the terrain may dictate very different approach routes if either the person has twisted their ankle at the top of a cliff, or has fallen and is lying at the bottom of the same cliff, though the co-ordinates are almost identical in both cases.
The operator has been able to make important decisions quickly, which may be life critical, and deploy recovery resources effectively.
A survey drone is equipped with a camera which records an image of the ground directly below it. The pilot is a remote human operator, tasked with surveying a defined area from a particular height in order to capture the required data.
As the survey progresses, zones are automatically marked on the map to represent areas which have been included. Once the drone has finished its sweep, the operator can quickly confirm whether the required area has been completely covered. If any areas have been missed, the pilot can use the map to navigate and make additional passes to fill the gaps, before returning to base.
Adding WebVMT files to the survey archive provides a geospatial index to the videos, allowing a particular geographic location to be found more rapidly by virtue of their small file size in comparison to their linked media. Online video archives can be indexed more quickly using this web-friendly format.
The operator has been easily able to verify the quality of their own work and correct any errors, saving time and additional effort in redeployment. Video footage has been indexed by geolocation rapidly and in a search-engine-friendly format.
An outdoor sportsperson, e.g. snowboarder or cyclist, is equipped with a helmet camera and/or mobile phone to record video footage and GPS data. They set off to find new challenges and practice their skills, e.g. off-piste or on mountain trails, and discover new routes and areas that they would like to explore in future, chatting to the camera as they go. Afterwards, they upload the video to share their experience with the online community, so others can quickly identify locations of particularly interesting sections of the featured trail. Using the synchronized map view in their browser, community members can easily see where they need to go in order to explore these places for themselves.
The operator has been able to fully engage in their sporting activity, without making any written notes, while simultaneously recording the details needed to guide others to the same locations. Their changing location over time can also be used to calculate speed and distance information, which can be displayed alongside the footage.
A TV production company is covering a sports event that takes place over a large area, e.g. rallying, road cycling or sailing, using a number of mobile video devices including competitor cams, e.g. dash cams or helmet cams, and drones to provide shots of inaccessible areas, e.g. remote terrain or over water.
Feeds from all the cameras are streamed to the production control room, where their geolocation data are combined on a map showing the locations of every competitor and camera, each labelled for easy identification. The live map enables the director to quickly choose the best shot and anticipate where and when to deploy their drone cameras to catch competitors at critical locations on the course as the competition develops in real time.
Multiple operators can function concurrently, both autonomously and under central direction. Mobile assets can be monitored and deployed from an operations centre to provide optimum coverage of the developing live event.
Important details of a remote area have been captured on video. It is not possible to revisit the location for safety reasons or because it has physically changed in the intervening time. Footage can be retrospectively geotagged against a concurrent map to allow the viewer to better interpret and identify features seen in the footage. Explanatory annotations can be added to the WebVMT file to help future viewers' understanding and aggregate the collective analysis.
Multiple operators can contribute their observations to provide a group analysis, iteratively adding new details and discarding out-of-date information. Experts can offer insight about filmed locations, which would otherwise be inaccessible to them.
A TV production company designs a new game show which involves competitors searching for targets across a wide area, with an operations centre remotely monitoring their progress and providing updates. Competitors are equipped with body-worn video or helmet cameras to relay footage of their view.
Geolocation context allows central operators to better understand the participants' actions and to direct them remotely more efficiently. Competitors' positions can be displayed to the TV audience on annotated 2D or 3D maps for clearer presentation.
A swarm of drones is deployed to perform a task, and their operations are monitored centrally. Geolocation details of the swarm are automatically collated and broadcast to the drone pilots, showing the locations of all the drones, each circled with a suitable safety zone, to warn operators when two units are flying in close proximity.
Pilots are safely able to operate either autonomously or under the direction of central control. Extra zonal information can be added to the operators' maps to show the outer perimeter of their operating area and warn of fixed aerial hazards, e.g. a radio mast, or transient hazards, e.g. a helicopter.
Disaster strikes, e.g. hurricane or tsunami, and emergency response teams are deployed to the affected area. However, it is difficult to verify which problems people are facing, what resources would help them and exactly where these events are occurring. Maps are unreliable as the infrastructure has been damaged, though people on the ground have the relevant knowledge if it could be reliably recorded and shared.
Anyone with a basic smartphone could video events with reliable geospatial data, as GPS receivers can operate without the need for a mobile phone signal by using satellite data, to accurately document the problems they face. Even if the cell network is not operational, this information can be physically delivered to crisis coordinators to notify them of the issues that need to be addressed, including accurate location data in a common format. Response teams can quickly search archived video by location to verify latest updates with recent context. Crisis events can be reliably recorded, knowledge can be shared and aggregated, and relief resources can be accurately targeted and deployed to the correct locations.
A web-based police system is established to allow dashcam video evidence of driving offences to be submitted digitally by members of the public who have witnessed them. Detectives are able to identify the time and vehicles involved directly from the uploaded footage, and accurately determine the location at which the incident occurred from the digital timed metadata included.
The ability to accept open format data also makes the system available to cyclists and pedestrians who can record video with location on their helmet cameras and smartphones respectively, providing wider access to the service beyond the dashcam community. Metadata, e.g. location, from different video manufacturers is often recorded in mutually-incompatible formats, but WebVMT support enables synchronized location (and other) data to be extracted from recordings using manufacturers' or community tools, without affecting source video integrity, and submitted to the police system in a common format, significantly reducing development costs.
Officers have been able to identify incident locations quickly and accurately, without sacrificing evidence integrity. The online service has been made available to a wider audience of drivers, cyclists and pedestrians, without incurring additional development costs.
An area of interest is monitored operationally by a collection of different mobile video devices, e.g. drones, body-worn video, helicopter, etc. Video footage, possibly in different formats, is added to an archive with location (and other) metadata in a common format which forms a time-location index suitable for rapid parsing by a web crawler. Users can submit online queries to search by location and return a time-ordered sequence of video frame stills captured within a radial distance of the chosen location. Alternatively, sensor data can be searched, e.g. for high readings, to return matching geotagged video frames for further analysis.
Video archives can be quickly indexed using a common metadata format regardless of video encoding, e.g. MPEG, WebM, OGG, and video files are only accessed in case of a positive search result, which reduces bandwidth in comparison to embedded metadata. Linked files also allow different security permissions to be applied to the crawling and querying processes, so an AI algorithm can be authorised to read metadata without being able to access image content if there are security concerns over data privacy, e.g. illicit facial recognition.
Dashcam footage is searched to automatically identify vehicle collisions from impact acceleration profiles recorded in video metadata. Dashcam manufacturers typically embed metadata in an unpublished format and provide a proprietary video player to allow users to display it. Exporting embedded metadata to a linked file in a web-friendly format enables searchable video archive data to be shared quickly and easily, without affecting evidence integrity, and to be accessed through a common web interface.
Vehicles can be automatically monitored using a low-cost dashcam and web-based tools to ensure that collisions are accurately recorded by drivers and that commercial vehicles remain safe and undamaged. Interoperability means that users are not limited to a particular brand and can share evidence with insurers and the police in a common format without damaging its integrity.
Augmented reality (AR) software is used to control assets or view content in situ at a particular location. For example, nearby street lights can be switched off or on by a service engineer for maintenance purposes, or an architect can see how their structural design integrates with the surrounding landscape at its proposed location before any building work has started.
Video footage can be recorded with location, camera orientation and other metadata so that AR overlays can be generated on demand. Such recordings can be used to demonstrate how AR content is displayed and controlled in order to educate users with a 'golden tutorial', to provide 'proof of action' as evidence of work done for auditing purposes, or to create example data for AR software testing and debugging.
A user triggers an audio track which provides guidance about the local area or instructions for a known object, e.g. a Web of Things (WoT) device at that location. The audio timeline is synchronized with events that can display AR content, control WoT devices and display points of interest on a map, providing guidance with real-world context by highlighting places or objects of interest and showing possible actions.
Users can be guided by a virtual assistant through an area of interest or sequence of actions augmented with AR/VR and WoT devices to visualise events and by an annotated map or model to provide additional geospatial context. Greater insight is given to the user by showing detailed views of the location on a map or internal structure of the identified object using a virtual model.
No standard format currently exists by which web browsers can synchronise geolocation data with video. Though many browser-supported formats exist to present the two data streams separately, e.g. MPEG for video and GPX for geolocation, there is no viable synchronisation mechanism for video playback time with geolocation information.
Material Exchange Format (MXF) was developed by the Society of Motion Picture and Television Engineers (SMPTE) to synchronise metadata, including geolocation, with audio and video streams using a register of key-length-value (KLV) triples. The breadth of its scope has resulted in interoperability issues, as different vendors implement different parts of the standard, and has produced implementations from high-profile companies which are mutually incompatible. KLVs can also be embedded within MPEG files, though this does not address the synchronisation issue for other web video formats such as WebM.
Video camera manufacturers have taken various approaches, resulting in a number of non-standard solutions including embedding geolocation data within the MPEG metadata stream in disparate formats, such as Motion Imagery Standards Board (MISB) or GoPro Metadata Format (GPMF), or recording a separate geolocation file in a proprietary format alongside the associated video file. From a hardware perspective, a few high-end cameras provide geotagging out of the box, while others require an add-on device to support this feature.
Geospatial data are not currently accessible in the video Document Object Model (DOM) in HTML nor via video playback APIs in smartphones, e.g. Android, though their host devices are typically equipped with both a video camera and Global Navigation Satellite System (GNSS) receiver capable of capturing the required information.
In sharp contrast, still photos have a well-established geotagging standard called Exif, which was published by the Japan Electronic Industries Development Association (JEIDA) in 1995 and defines a metadata tag to embed geolocation data within TIFF and JPEG images. This is widely supported by manufacturers of photographic equipment and software worldwide, including low-end smartphones, making this feature cheap and accessible to the public.
Historically, there has been no requirement for a comparable video standard, but the urgency for such a standard is growing fast due to the emerging markets for 'mobile video devices,' e.g. drones, dashcams, body-worn video and helmet cameras, as well as the rise in high-quality video and geolocation support in the global smartphone market.
Using current W3C recommendations, it is possible for a programmer to synchronise video-geolocation 'metadata' with a <video> element using a <track> child element. However, this is a non-trivial development task which requires an understanding of video DOM events and JavaScript file handling, making it inaccessible to the vast majority of web users. Video metadata tracks are an identified kind of track data in HTML, though metadata content is difficult to access due to the text-based nature of existing DOM support.
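As a rough illustration of the task described above, a page script today must wire up track DOM events and parse cue text by hand. The sketch below assumes the one-JSON-command-per-line cue payload shown in the examples in this document; handleCommand is a hypothetical callback, and the browser wiring is shown only in comments.

```javascript
// Sketch: split a metadata cue payload into JSON command objects.
// Assumes one JSON command per non-blank line, as in this document's examples.
function parseCuePayload(text) {
  return text
    .split("\n")
    .filter(line => line.trim() !== "")
    .map(line => JSON.parse(line));
}

// Illustrative browser wiring (requires a loaded <track> element):
// const track = document.querySelector("track").track;
// track.mode = "hidden"; // load cues without native rendering
// track.addEventListener("cuechange", () => {
//   for (const cue of track.activeCues) {
//     parseCuePayload(cue.text).forEach(handleCommand); // hypothetical callback
//   }
// });
```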
Establishing a standard file format would allow interoperability and information sharing between the public, the emergency services, police and other mobile video device users, e.g. drone pilots, giving cheaper and easier access to this important resource. Native web browser support for geotagged video using this file format would also make this freely accessible to most web users and enable integration with existing web services such as online maps and search engines. Current low-end smartphones already provide suitable hardware to concurrently capture video and geolocation streams, which would make this technology easily accessible to the general public, and encourage the user and developer communities to grow rapidly.
This proposal constitutes a lightweight markup language to synchronise video with geolocation data for display on electronic maps, such as OpenStreetMap. It offers presentational control of the map display, e.g. pan and zoom, and annotation to highlight map features to the viewer, e.g. paths and zones.
WebVMT (Web Video Map Tracks) format is intended for marking up external map track resources, and its main use is for files synchronising video content with an annotated map presentation. Ideas have been borrowed from existing W3C formats, including WebVTT's HTML binding and its block and cue structures, and SVG's approach to drawing and interpolation, in order to display output on an electronic map.
The format mimics WebVTT's structure and syntax for media synchronisation, with cue details listed in an accessible text-based file linked to a <video> or <audio> DOM element by a child <track> element in an HTML document.
<!doctype html>
<html>
 <head>
  <title>WebVMT Basic Example</title>
 </head>
 <body>
  <!-- Video display -->
  <video controls width="640" height="360">
   <source src="video.mp4" type="video/mp4">
   <track src="maptrack.vmt" kind="metadata" for="vmt-map"
    tileurl="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png?key=VALID_OSM_KEY">
   Your browser does not support the video tag.
  </video>
  <!-- Map display -->
  <div id="vmt-map" style="height: 360px; width:640px;"></div>
 </body>
</html>
The WebVMT format file, e.g. maptrack.vmt, contains the map cues associated with the video, e.g. video.mp4.
The meaning of the for and tileurl attributes for user agents is an open question. Initial solutions can be built using JavaScript with existing map libraries such as Leaflet, though the vision is that future user agents will handle map rendering natively in the longer term.
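For instance, a JavaScript shim could expand the tileurl template itself before handing tiles to a map library. This is a minimal sketch under the assumption that {s}, {z}, {x} and {y} follow the slippy-map placeholder convention used by libraries such as Leaflet; the subdomain rotation scheme is illustrative only.

```javascript
// Sketch: expand a slippy-map tileurl template into a concrete tile address.
// The placeholder convention ({s} subdomain, {z} zoom, {x}/{y} tile indices)
// is assumed from libraries such as Leaflet; it is not defined by WebVMT.
function expandTileUrl(template, z, x, y, subdomains = ["a", "b", "c"]) {
  const s = subdomains[(x + y) % subdomains.length]; // naive load spreading
  return template
    .replace("{s}", s)
    .replace("{z}", String(z))
    .replace("{x}", String(x))
    .replace("{y}", String(y));
}
```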
Map cues display their payload between a start time and end time. The end cue time may be omitted to represent an unknown time.
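A cue timing line of this shape could be parsed as follows. This is a hedged sketch: the timestamp grammar is assumed to follow WebVTT's hh:mm:ss.sss form, and the function names are illustrative.

```javascript
// Sketch: parse "00:00:02.000 --> 00:00:05.000" or "00:00:03.000 -->".
function parseTimestamp(ts) {
  // Assumed WebVTT-style hh:mm:ss.sss timestamp.
  const m = /^(\d{2,}):([0-5]\d):([0-5]\d)\.(\d{3})$/.exec(ts.trim());
  if (!m) throw new Error("malformed timestamp: " + ts);
  return +m[1] * 3600 + +m[2] * 60 + +m[3] + +m[4] / 1000;
}

function parseCueTiming(line) {
  const [startPart, endPart] = line.split("-->");
  if (endPart === undefined) throw new Error("missing cue arrow: " + line);
  return {
    start: parseTimestamp(startPart),
    // An omitted end time marks an unbounded cue.
    end: endPart.trim() === "" ? null : parseTimestamp(endPart),
  };
}
```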
Here is a sample WebVMT file with a cue highlighting Tower Bridge in London on a static map.
WEBVMT
MEDIA
 url:TowerBridge.mp4
 mime-type:video/mp4
MAP
 lat:51.506
 lng:-0.076
 rad:250

00:00:02.000 --> 00:00:05.000
{ "move-to": { "lat": 51.504362, "lng": -0.076153 } }
{ "line-to": { "lat": 51.506646, "lng": -0.074651 } }
Cues also allow dynamic presentation to pan and zoom the map. This example focusses attention on the Tower of London.
Cues without end times are displayed until the end of the video.
WEBVMT
MEDIA
 url:../movies/TowerOfLondon.webm
 mime-type:video/webm
MAP
 lat:51.162
 lng:-0.143
 rad:20000

00:00:03.000 -->
{ "pan-to": { "lat": 51.508, "lng": -0.077, "end": "00:00:05.000" } }

00:00:06.000 -->
{ "zoom": { "rad": 250 } }
Display style is controlled by CSS, which may be embedded in HTML or within the WebVMT file.
In this example, an HTML page has a CSS style sheet in a <style> element that styles map cues for the video, e.g. drawing lines in red.
<!doctype html>
<html>
 <head>
  <title>WebVMT Style Example</title>
  <style>
   video::cue {
    stroke: red;
    stroke-opacity: 0.9;
   }
  </style>
 </head>
 <body>
  <video controls width="640" height="360">
   <source src="video.mp4" type="video/mp4">
   <track src="maptrack.vmt" kind="metadata" for="vmt-map"
    tileurl="https://api2.ordnancesurvey.co.uk/mapping_api/v1/service/zxy/EPSG%3A3857/Outdoor%203857/{z}/{x}/{y}.png?key=VALID_OS_KEY">
   Your browser does not support the video tag.
  </video>
  <div id="vmt-map" style="height: 360px; width:640px;"></div>
 </body>
</html>
Style block format is similar to WebVTT.
CSS style sheets can also be embedded within WebVMT files. Style blocks are placed after any headers but before the first cue, and start with the word STYLE.
Comment blocks can be interleaved with style blocks.
WEBVMT
MEDIA
 url:http://example.com/movies/Greenwich.mp4
 mime-type:video/mp4
MAP
 lat:51.478
 lng:-0.001
 rad:50

STYLE
::cue {
 stroke: red;
}

NOTE Comments are allowed between style blocks

STYLE
::cue {
 stroke-opacity: 0.9;
}
/* Style blocks cannot use blank lines nor "dash dash greater than" */

NOTE Prime Meridian marker
00:00:00.000 -->
{ "move-to": { "lat": 51.477901, "lng": -0.001466 } }
{ "line-to": { "lat": 51.477946, "lng": -0.001466 } }

NOTE Style blocks may not appear after the first cue
Arbitrary data may be associated with a WebVMT cue using a sync command, in a similar fashion to the GPX <extension> element.
WEBVMT
NOTE Associated video
MEDIA
 url:Animals.mp4
 mime-type:video/mp4

NOTE Map config
MAP
 lat:51.1618
 lng:-0.1428
 rad:200

NOTE Cat, top left, after 5 secs until 25 secs
00:00:05.000 --> 00:00:25.000
{ "sync": { "type": "org.ogc.geoai.example", "data": { "animal": "cat", "frame-zone": "top-left" } } }

NOTE Dog, mid right, after 10 secs until 40 secs
00:00:10.000 --> 00:00:40.000
{ "sync": { "type": "org.ogc.geoai.example", "data": { "animal": "dog", "frame-zone": "middle-right" } } }
Data values may be interpolated using an interp command, in a similar way to the <animate> element in SVG.
Sensor data can be interpolated between sample points to provide intermediate values where necessary, while retaining the original source data sample values.
Three interpolation schemes are supported:
A stepwise-interpolated value, e.g. vehicle gear selection, remains constant until the next sample time.
WEBVMT
NOTE Required blocks omitted for clarity

NOTE Step interpolation of sensor1 data gear = 4 after 2 secs until 6 secs
00:00:02.000 --> 00:00:06.000
{ "sync": { "type": "org.webvmt.example", "id": "sensor1", "data": { "gear": "4" } } }

NOTE Step interpolation of sensor1 data gear = 5 after 6 secs until 9 secs
00:00:06.000 --> 00:00:09.000
{ "sync": { "id": "sensor1", "data": { "gear": "5" } } }
A linearly-interpolated value, e.g. temperature, changes to a final value at the next sample time in direct proportion to the elapsed sample interval.
WEBVMT
NOTE Required blocks omitted for clarity

NOTE Linear interpolation of sensor2 data temperature = 14 -> 16 after 4 secs until 6 secs
00:00:04.000 --> 00:00:06.000
{ "sync": { "type": "org.webvmt.example", "id": "sensor2", "data": { "temperature": "14" } } }
{ "interp": { "to": { "data": { "temperature": "16" } } } }

NOTE Linear interpolation of sensor2 data temperature = 16 -> 19 after 6 secs until 9 secs
00:00:06.000 --> 00:00:09.000
{ "sync": { "id": "sensor2" } }
{ "interp": { "to": { "data": { "temperature": "19" } } } }
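The linear rule above amounts to a standard lerp over the sample interval. A minimal sketch (the function name is illustrative):

```javascript
// Sketch: linearly interpolate a sampled value, as in the temperature
// example above: v0 at time t0 changes to v1 at the next sample time t1.
function lerpSample(v0, v1, t0, t1, t) {
  if (t <= t0) return v0;
  if (t >= t1) return v1;
  return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
}
```

For the first interval above, lerpSample(14, 16, 4, 6, 5) yields 15, the temperature half-way through the sample interval.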
A discretely-interpolated value, e.g. headcount in a video frame, is only valid instantaneously at the sample time.
WEBVMT
NOTE Required blocks omitted for clarity

NOTE Discrete interpolation of sensor3 data headcount = 12 at 4 secs
00:00:04.000 --> 00:00:04.000
{ "sync": { "type": "org.webvmt.example", "id": "sensor3", "data": { "headcount": "12" } } }

NOTE Discrete interpolation of sensor3 data headcount = 34 at 6 secs
00:00:06.000 --> 00:00:06.000
{ "sync": { "id": "sensor3", "data": { "headcount": "34" } } }
Live streams can be recorded with interpolation using unbounded cues, i.e. a cue with an unknown end time.
In this example, the result is identical to the previous step interpolation example but without requiring knowledge of any future data values during the live capture process.
WEBVMT
NOTE Required blocks omitted for clarity

NOTE Step interpolation of live1 data gear = 4 after 4 secs until next update
00:00:04.000 -->
{ "sync": { "type": "org.webvmt.example", "id": "live1", "data": { "gear": "4" } } }

NOTE Step interpolation of live1 data gear = 5 after 6 secs until next update
00:00:06.000 -->
{ "sync": { "id": "live1", "data": { "gear": "5" } } }

NOTE End (step) interpolation of live1 data gear = 5 at 9 secs
00:00:09.000 --> 00:00:09.000
{ "sync": { "id": "live1", "data": { "gear": "5" } } }
In the next example, the result is identical to the previous linear interpolation example but without requiring knowledge of any future data values during the live capture process.
WEBVMT
NOTE Required blocks omitted for clarity

NOTE Linear interpolation of live2 data temperature = 14 after 4 secs until next update
00:00:04.000 -->
{ "sync": { "type": "org.webvmt.example", "id": "live2", "data": { "temperature": "14" } } }
{ "interp": { "end": "00:00:06.000", "to": { "data": { "temperature": "16" } } } }

NOTE Linear interpolation of live2 data temperature = 16 after 6 secs until next update
00:00:06.000 -->
{ "sync": { "id": "live2" } }
{ "interp": { "end": "00:00:09.000", "to": { "data": { "temperature": "19" } } } }

NOTE End (linear) interpolation of live2 data temperature = 19 at 9 secs
00:00:09.000 --> 00:00:09.000
{ "sync": { "id": "live2", "data": { "temperature": "19" } } }
Values requiring future data, e.g. for linear interpolation, cannot be interpolated during capture as those data are unknown; they can be correctly interpolated during subsequent playback, once the end values are known.
A WebVMT path describes the trajectory of a moving object which consists of a timed sequence of locations. The object's location may be interpolated between consecutive values in the sequence to calculate the distance travelled over time.
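As an informal sketch of what such a calculation might look like (linear interpolation of lat/lng and a haversine distance are illustrative choices; the format does not mandate a particular interpolation or distance model):

```javascript
// Sketch: interpolate a location between two timed path samples, and
// estimate distance travelled with the haversine formula (WGS84 locations,
// spherical-Earth approximation with an assumed mean radius).
const EARTH_RADIUS_M = 6371000;

function interpolateLocation(a, b, t) {
  // a, b: { time, lat, lng } samples; t: a time between a.time and b.time.
  const f = (t - a.time) / (b.time - a.time);
  return {
    lat: a.lat + (b.lat - a.lat) * f,
    lng: a.lng + (b.lng - a.lng) * f,
  };
}

function haversineMetres(p, q) {
  const rad = deg => deg * Math.PI / 180;
  const dLat = rad(q.lat - p.lat);
  const dLng = rad(q.lng - p.lng);
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(p.lat)) * Math.cos(rad(q.lat)) * Math.sin(dLng / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}
```

Dividing the interpolated distance by the elapsed time gives the speed estimates mentioned in the use cases above.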
The path attribute may be set to identify an individual path. This allows a path:
In this example, an interpolated path is traced from London to Brighton:
WEBVMT
NOTE Associated video
MEDIA
 url:LondonBrighton.mp4
 mime-type:video/mp4
 start-time:2018-02-19T12:34:56.789Z
 path:cam1

NOTE Map config
MAP
 lat:51.1618
 lng:-0.1428
 rad:20000

NOTE London overview
00:00:01.000 -->
{ "pan-to": { "lat": 51.4952, "lng": -0.1441 } }

00:00:02.000 -->
{ "zoom": { "rad": 10000 } }

NOTE From London Victoria...
00:00:03.000 -->
{ "pan-to": { "lat": 50.830553, "lng": -0.141706, "end": "00:00:25.000" } }
{ "move-to": { "lat": 51.494477, "lng": -0.144753, "path": "cam1" } }
{ "line-to": { "lat": 51.155958, "lng": -0.16089, "path": "cam1", "end": "00:00:10.000" } }

NOTE ...via Gatwick Airport...
00:00:10.000 -->
{ "line-to": { "lat": 50.830553, "lng": -0.141706, "path": "cam1", "end": "00:00:25.000" } }

NOTE ...to Brighton (at 00:00:25.000)
00:00:27.000 -->
{ "zoom": { "rad": 20000 } }
Interpolation can also be applied to the attributes of a WebVMT command, so a map annotation may be animated in this way.
This example tracks a drone with a circular 10-meter safety zone around it.
WEBVMT
NOTE Associated video
MEDIA
 url:SafeDrone.mp4
 mime-type:video/mp4

NOTE Map config
MAP
 lat:51.0130
 lng:-0.0015
 rad:1000

NOTE Drone starts at (51.0130, -0.0015)
00:00:05.000 -->
{ "pan-to": { "lat": 51.0070, "lng": -0.0020, "end": "00:00:25.000" } }
{ "move-to": { "lat": 51.0130, "lng": -0.0015, "path": "drone1" } }
{ "line-to": { "lat": 51.0090, "lng": -0.0017, "path": "drone1", "end": "00:00:10.000" } }

NOTE Safety zone
00:00:05.000 --> 00:00:10.000
{ "circle": { "lat": 51.0130, "lng": -0.0015, "rad": 10 } }
{ "interp": { "to": { "lat": 51.0090, "lng": -0.0017 } } }

NOTE Drone arrives at (51.0090, -0.0017)
00:00:10.000 -->
{ "line-to": { "lat": 51.0070, "lng": -0.0020, "path": "drone1", "end": "00:00:25.000" } }
{ "circle": { "lat": 51.0090, "lng": -0.0017, "rad": 10 } }
{ "interp": { "end": "00:00:25.000", "to": { "lat": 51.0070, "lng": -0.0020 } } }

NOTE Drone ends at (51.0070, -0.0020)
Embedded YouTube content can be displayed using an <iframe> element, specifying the unique content identifier for the posted video, using the official YouTube IFrame API with the JavaScript API enabled.

A child <track> pseudo-element within the <iframe> links it with WebVMT using the same syntax as for the <video> DOM element.
<!doctype html>
<html>
 <head>
  <title>WebVMT YouTube Example</title>
 </head>
 <body>
  <!-- Video display -->
  <iframe src="http://www.youtube.com/embed/YOUTUBE_VIDEO_ID?enablejsapi=1"
   width="640" height="360" frameborder="0">
   <track src="maptrack.vmt" kind="metadata" for="vmt-map"
    tileurl="mapbox://styles/mapbox/streets-v9">
  </iframe>
  <!-- Map display -->
  <div id="vmt-map" style="height: 360px; width:640px;"></div>
 </body>
</html>
Note that the <track> pseudo-element is actually replaced by the <iframe> content when the page is loaded.

The url in the MEDIA block should match the src attribute of the <iframe> element without the query.
WEBVMT
NOTE Associated YouTube video
MEDIA
 url:http://www.youtube.com/embed/YOUTUBE_VIDEO_ID
 mime-type:video/mp4
This specification describes the conformance criteria for user agents (relevant to implementors) and WebVMT files (relevant to authors and authoring tool implementors).
Syntax defines what constitutes a valid WebVMT file. Authors need to follow the requirements therein, and are encouraged to use a conformance checker. Parsing defines how user agents are to interpret a file labelled as text/vmt, for both valid and invalid WebVMT files. The parsing rules are more tolerant of author errors than the syntax allows, in order to provide for extensibility and to still render cues that contain some syntax errors.
User agents fall into several (possibly overlapping) categories with different conformance requirements.
Implementations of this specification must not normalize Unicode text during processing.
The data model of WebVMT consists of four key elements: the linked media file, the video viewport, cues, and the map viewport. The linked media file contains audio or video data with which cues are synchronized. The video viewport is the rendering area for video output. Cues are containers consisting of a set of metadata lines. The map viewport is the rendering area for metadata output, for example graphical annotations overlaid on an online map.
The WebVMT file is a container file for chunks of data that are time-aligned with a video or audio resource. It can therefore be regarded as a serialisation format for time-aligned data.
A WebVMT file starts with a header and then contains a series of data blocks. If a data block has a start time, it is called a WebVMT cue. A comment is another kind of data block.
A WebVMT file carries cues which are identified as metadata, as specified by the kind attribute of the track element in the HTML specification.
The data kind of a WebVMT file is externally specified, such as in an HTML file's track element. The environment is responsible for interpreting the data correctly.
A WebVMT cue is rendered as an overlay on top of the map viewport.
A WebVMT cue is a text track cue that additionally consists of the following:
A WebVMT cue without an end time indicates that the cue is an unbounded text track cue, for example during live streaming when the time of the next data sample is unknown or when the duration of the media is unknown.
A WebVMT location consists of:
Location information is provided in terms of World Geodetic System coordinates, WGS84. Altitude is measured in meters above the WGS84 ellipsoid, and should not be confused with the height above mean sea level.
A WebVMT map is the map viewport and provides a rendering area for WebVMT cues.
A WebVMT map consists of:
The precise format of the map interface object is implementation dependent, for example OpenLayers API or Leaflet API.
For parsing, we also need the following:
A WebVMT media is metadata for the linked media with which WebVMT cues are synchronized, for example audio or video.
A WebVMT media enables a web crawler to rapidly search media metadata by providing sufficient information to construct a time-metadata index of the linked media file without opening it. Search engine data throughput is reduced as only matching media files selected by the user need be read, and non-matching media files are not accessed at all. Care should be taken to maintain WebVMT media details correctly, for example when a media file is renamed.
A WebVMT media consists of:
A null media URL indicates that no linked media file exists.
A null media MIME type indicates that no linked media file exists.
The media start time allows multiple WebVMT files to be aggregated. A null media start time indicates that no start time is associated, for example in the case of an animation.
A null media path indicates that no moving object is associated, for example when no linked media file exists.
A WebVMT command is an instruction to display WebVMT metadata content.
A WebVMT command consists of one of the following components:
WebVMT commands are executed in order from first to last in the WebVMT file.
A WebVMT map control command controls map presentation.
A WebVMT map control command consists of one of the following components:
A WebVMT pan is a command to set the location of the map center.
A WebVMT pan consists of:
A WebVMT zoom is a command to set the level of detail of the map.
A WebVMT zoom consists of:
A WebVMT zone consists of all the WebVMT zone fragments with the same zone identifier.
A WebVMT zone fragment command consists of one of the following components:
A WebVMT circle is a command to annotate the map with a circular area.
A WebVMT circle consists of:
A WebVMT polygon is a command to annotate the map with a polygonal area.
A WebVMT polygon consists of:
A WebVMT path consists of all the path segments with the same path identifier.
A path segment consists of a sequence of contiguous WebVMT path fragments that describe the trajectory of an object moving through the mapped space.
A WebVMT path may include non-contiguous path segments, but each path segment must contain a sequence of contiguous WebVMT path fragments.
A path segment consists of the following components, in the order given:
A WebVMT path fragment command consists of one of the following components:
A WebVMT move command sets the start location of the first WebVMT path fragment in a path segment.
A WebVMT move consists of:
A WebVMT line command sets the end location of the WebVMT path fragment. The fragment start location is set by the preceding WebVMT path fragment in the WebVMT path.
A WebVMT line consists of:
A WebVMT line is a straight line from the start location to the end location. The location of the moving object can be linearly interpolated between the fragment start time and the fragment end time.
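For instance, a single path segment might be built from a move command followed by line commands in consecutive cues; the object's position between a fragment's start and end times is linearly interpolated (coordinates and path identifier are hypothetical):

```
00:00.000 --> 00:10.000
{ "move-to": { "lat": 51.0000, "lng": -1.8000, "path": "cam1" } }
{ "line-to": { "lat": 51.0010, "lng": -1.8020, "path": "cam1" } }

00:10.000 --> 00:20.000
{ "line-to": { "lat": 51.0025, "lng": -1.8045, "path": "cam1" } }
```

At 00:05.000, the interpolated location of "cam1" lies midway between the first fragment's start and end locations.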
A WebVMT synchronized data synchronizes a sample from a data source with a WebVMT cue.
A WebVMT synchronized data command consists of:
A WebVMT interpolation changes an object attribute from a start value to an end value over a time interval.
A WebVMT interpolation consists of:
A WebVMT interpolation list consists of one or more WebVMT interpolations with all interpolation objects set to the preceding WebVMT command.
A WebVMT file must consist of a WebVMT file body encoded as UTF-8 and labeled with the MIME type text/vmt.
A WebVMT file body consists of the following components, in the order given:
The string "WEBVMT" (U+0057 LATIN CAPITAL LETTER W, U+0045 LATIN CAPITAL LETTER E, U+0042 LATIN CAPITAL LETTER B, U+0056 LATIN CAPITAL LETTER V, U+004D LATIN CAPITAL LETTER M, U+0054 LATIN CAPITAL LETTER T).
A WebVMT line terminator consists of one of the following:
A WebVMT media metadata block consists of the following components, in the order given:
The string "MEDIA" (U+004D LATIN CAPITAL LETTER M, U+0045 LATIN CAPITAL LETTER E, U+0044 LATIN CAPITAL LETTER D, U+0049 LATIN CAPITAL LETTER I, U+0041 LATIN CAPITAL LETTER A).
The WebVMT media metadata block provides hints about the linked media file for web crawlers and search engines.
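As an illustrative sketch, a media metadata block might look like the following, assuming the same name:value setting syntax defined for the media settings list; the URL, MIME type, start time and path identifier are hypothetical:

```
MEDIA
url:https://example.com/flight7.mp4
mime-type:video/mp4
start-time:2024-05-01T12:00:00.000Z
path:cam1
```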
A WebVMT map initialisation block consists of the following components, in the order given:
The string "MAP" (U+004D LATIN CAPITAL LETTER M, U+0041 LATIN CAPITAL LETTER A, U+0050 LATIN CAPITAL LETTER P).
The WebVMT map initialisation block defines the state of the WebVMT map before any WebVMT cues are active.
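For example, a map initialisation block might set the initial map center and zoom radius, assuming the name:value syntax of the map settings list (values are hypothetical):

```
MAP
lat:51.0521
lng:-1.7987
rad:1000
```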
A WebVMT style block consists of the following components, in the order given:
The string "STYLE" (U+0053 LATIN CAPITAL LETTER S, U+0054 LATIN CAPITAL LETTER T, U+0059 LATIN CAPITAL LETTER Y, U+004C LATIN CAPITAL LETTER L, U+0045 LATIN CAPITAL LETTER E).
A string that does not contain the substring "-->" (U+002D HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN). The string represents a CSS style sheet; the requirements given in the relevant CSS specifications apply.
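A sketch of a style block, analogous to WebVTT STYLE blocks, using the ::cue pseudo-element defined later in this specification:

```
STYLE
::cue {
  color: yellow;
}
```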
A WebVMT cue block consists of the following components, in the order given:
A string that does not contain the substring "-->" (U+002D HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN).
A WebVMT cue block corresponds to one piece of time-aligned data in the WebVMT file. The WebVMT cue payload is the data associated with the WebVMT cue.
A WebVMT cue identifier is any sequence of one or more characters not containing the substring "-->" (U+002D HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN), nor containing any U+000A LINE FEED (LF) characters or U+000D CARRIAGE RETURN (CR) characters.
A WebVMT cue identifier must be unique amongst all the WebVMT cue identifiers of all WebVMT cues of a WebVMT file.
A WebVMT cue identifier can be used to identify a specific cue, for example from script or CSS.
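For example, a cue block might carry an identifier on the line preceding its timings, so that script or CSS can address it (identifier and coordinates are hypothetical):

```
target-sighted
02:15.000 --> 02:45.000
{ "circle": { "lat": 51.0521, "lng": -1.7987, "rad": 25 } }
```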
The WebVMT cue timings part of a WebVMT cue block consists of the following components, in the order given:
The string "-->" (U+002D HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN).
The WebVMT cue timings give the start and end offsets of the WebVMT cue block. Different cues can overlap. Cues are always listed ordered by their start time.
A WebVMT timestamp consists of the following components, in the order given:
A WebVMT timestamp is always interpreted relative to the current playback position of the media data with which the WebVMT file is to be synchronized.
A WebVMT comment block consists of the following components, in the order given:
The string "NOTE" (U+004E LATIN CAPITAL LETTER N, U+004F LATIN CAPITAL LETTER O, U+0054 LATIN CAPITAL LETTER T, U+0045 LATIN CAPITAL LETTER E).
A string that does not contain the substring "-->" (U+002D HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN).
A WebVMT comment block is ignored by the parser.
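A sketch of a comment block; the text is illustrative:

```
NOTE
A comment can span multiple lines
and ends at the first blank line.
```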
WebVMT metadata text consists of any sequence of zero or more characters other than U+000A LINE FEED (LF) characters and U+000D CARRIAGE RETURN (CR) characters, each optionally separated from the next by a WebVMT line terminator. (In other words, any text that does not have two consecutive WebVMT line terminators and does not start or end with a WebVMT line terminator.)
The string represents a WebVMT command list.
WebVMT metadata text cues are only useful for scripted applications (e.g. using the metadata text track kind in an HTML text track).
The WebVMT media settings list consists of zero or more of the following components, in any order, separated from each other by one or more U+0020 SPACE characters, U+0009 CHARACTER TABULATION (tab) characters, or WebVMT line terminators, except that the string must not contain two consecutive WebVMT line terminators. Each component must not be included more than once per WebVMT media settings list string.
A WebVMT media url setting consists of the following components, in the order given:
The string "url".
For the purpose of resolving a URL in the MEDIA block of a WebVMT file, or any URLs in resources referenced from MEDIA blocks of a WebVMT file, if the URL’s scheme is not "data", then the user agent must act as if the URL failed to resolve. If the url value does not match the src attribute of the HTML <track> element, then the src value takes precedence.
A WebVMT media MIME type setting consists of the following components, in the order given:
The string "mime-type".
A WebVMT media start time setting consists of the following components, in the order given:
The string "start-time".
The WebVMT media start time setting should include millisecond data in order to allow the WebVMT file to be accurately synchronized with Coordinated Universal Time (UTC).
A WebVMT media path setting consists of the following components, in the order given:
The string "path".
The WebVMT map settings list consists of the following components, in any order, separated from each other by one or more U+0020 SPACE characters, U+0009 CHARACTER TABULATION (tab) characters, or WebVMT line terminators, except that the string must not contain two consecutive WebVMT line terminators. Each component must be included once per WebVMT map settings list string.
The WebVMT map settings list defines the WebVMT map state before the first cue is active.
A WebVMT map center latitude setting consists of a WebVMT latitude setting.
A WebVMT map center longitude setting consists of a WebVMT longitude setting.
A WebVMT map center altitude setting consists of a WebVMT altitude setting.
When interpreted as numbers, the WebVMT map center latitude setting, WebVMT map center longitude setting and WebVMT map center altitude setting values represent the map center location.
A WebVMT latitude setting consists of the following components, in the order given:
The string "lat".
A WebVMT latitude consists of the following components, in the order given:
When interpreted as a number, a WebVMT latitude must be in the range -90..+90.
A WebVMT longitude setting consists of the following components, in the order given:
The string "lng".
A WebVMT longitude consists of the following components, in the order given:
When interpreted as a number, a WebVMT longitude must be in the range -180..+180.
A WebVMT altitude setting consists of the following components, in the order given:
The string "alt".
A WebVMT altitude consists of the following components, in the order given:
When interpreted as a number, a WebVMT altitude represents the height in meters above the WGS84 ellipsoid. Care should be taken not to confuse this with the height above mean sea level.
A WebVMT map zoom setting consists of the following components, in the order given:
The string "rad".
When interpreted as a number, the WebVMT map zoom setting must be positive and represents the map zoom radius.
A WebVMT command list consists of one or more of the following components in any order, separated from each other by a WebVMT line terminator:
A WebVMT map control command consists of one of the following components:
A WebVMT pan command consists of a JSON text representing the following JSON object:
The string "pan-to".
A WebVMT pan parameter list is a JSON object representing the following components in any order:
A WebVMT pan latitude attribute consists of a WebVMT latitude attribute.
A WebVMT pan longitude attribute consists of a WebVMT longitude attribute.
A WebVMT pan altitude attribute consists of a WebVMT altitude attribute.
A WebVMT pan end time attribute consists of a WebVMT end time attribute.
A WebVMT pan duration attribute consists of a WebVMT duration attribute.
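An illustrative pan command, assuming the end time attribute takes a WebVMT timestamp value (coordinates and times are hypothetical):

```
00:10.000 --> 00:20.000
{ "pan-to": { "lat": 51.0521, "lng": -1.7987, "end": "00:15.000" } }
```

Here the map center would glide to the given location, arriving at 00:15.000 rather than at the cue end time.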
A WebVMT zoom command consists of a JSON text representing the following JSON object:
The string "zoom".
A WebVMT zoom parameter list is a JSON object representing the following component:
A WebVMT zoom radius attribute consists of a WebVMT radius attribute.
When interpreted as a number, the WebVMT zoom radius attribute value represents the map zoom radius.
A WebVMT radius attribute consists of a JSON text consisting of the following components in the order given:
The string "rad".
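An illustrative zoom command; the radius value is hypothetical:

```
00:20.000 --> 00:30.000
{ "zoom": { "rad": 500 } }
```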
A WebVMT zone annotation command consists of one of the following components:
A WebVMT circle command consists of a JSON text representing the following JSON object:
The string "circle".
A WebVMT circle parameter list consists of a JSON object representing the following components in any order:
A WebVMT circle center latitude attribute consists of a WebVMT latitude attribute.
A WebVMT circle center longitude attribute consists of a WebVMT longitude attribute.
A WebVMT circle center altitude attribute consists of a WebVMT altitude attribute.
A WebVMT circle radius attribute consists of a WebVMT radius attribute.
A WebVMT zone attribute consists of a JSON text consisting of the following components in the order given:
The string "zone".
A WebVMT zone identifier is any sequence of one or more characters not containing the substring "-->" (U+002D HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN), nor containing any U+000A LINE FEED (LF) characters or U+000D CARRIAGE RETURN (CR) characters.
A WebVMT zone identifier is a string which uniquely identifies a zone in the WebVMT file, for example a safety zone around a moving object.
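For example, a circle command might tag its area with a zone identifier so that later zone fragments can refer to the same zone (values are hypothetical):

```
00:00.000 --> 01:00.000
{ "circle": { "lat": 51.0500, "lng": -1.8000, "rad": 200, "zone": "safety1" } }
```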
A WebVMT polygon command consists of a JSON text representing the following JSON object:
The string "polygon".
A WebVMT polygon parameter list consists of the following JSON object:
A WebVMT zone perimeter list consists of the following JSON object:
The string "perim".
A WebVMT vertices list consists of a JSON array of three or more JSON objects each representing a WebVMT location attribute list.
A WebVMT location attribute list consists of a JSON text representing a list of the following JSON values in any order, separated from each other by a U+002C COMMA character (,):
A WebVMT latitude attribute consists of a JSON text consisting of the following components in the order given:
The string "lat".
When interpreted as a number, a WebVMT latitude attribute must be in the range -90..+90.
A WebVMT longitude attribute consists of a JSON text consisting of the following components in the order given:
The string "lng".
When interpreted as a number, a WebVMT longitude attribute must be in the range -180..+180.
A WebVMT altitude attribute consists of a JSON text consisting of the following components in the order given:
The string "alt".
When interpreted as a number, a WebVMT altitude attribute represents the height in meters above the WGS84 ellipsoid. Care should be taken not to confuse this with the height above mean sea level.
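An illustrative polygon command, whose perimeter is a JSON array of three or more location attribute lists as defined above (coordinates and zone identifier are hypothetical):

```
00:00.000 --> 02:00.000
{ "polygon": { "perim": [
    { "lat": 51.050, "lng": -1.800 },
    { "lat": 51.052, "lng": -1.797 },
    { "lat": 51.049, "lng": -1.795 }
  ], "zone": "search1" } }
```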
A WebVMT path annotation command consists of one of the following components:
A WebVMT move command consists of a JSON text representing the following JSON object:
The string "move-to".
A WebVMT move parameter list is a JSON object representing the following components in any order:
A WebVMT fragment start latitude attribute consists of a WebVMT latitude attribute.
A WebVMT fragment start longitude attribute consists of a WebVMT longitude attribute.
A WebVMT fragment start altitude attribute consists of a WebVMT altitude attribute.
A WebVMT path attribute consists of a JSON text consisting of the following components in the order given:
The string "path".
A WebVMT path identifier is any sequence of one or more characters not containing the substring "-->" (U+002D HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN), nor containing any U+000A LINE FEED (LF) characters or U+000D CARRIAGE RETURN (CR) characters.
A WebVMT path identifier is a string which uniquely identifies a moving object in the WebVMT file, for example a camera.
A WebVMT line command consists of a JSON text representing the following JSON object:
The string "line-to".
A WebVMT line parameter list consists of a JSON object representing the following components in any order:
A WebVMT fragment end latitude attribute consists of a WebVMT latitude attribute.
A WebVMT fragment end longitude attribute consists of a WebVMT longitude attribute.
A WebVMT fragment end altitude attribute consists of a WebVMT altitude attribute.
A WebVMT fragment end time attribute consists of a WebVMT end time attribute.
A WebVMT fragment duration attribute consists of a WebVMT duration attribute.
A WebVMT synchronized data command consists of a JSON text representing the following JSON object:
The string "sync".
A WebVMT synchronized parameter list consists of a JSON object representing the following components in any order:
A WebVMT synchronized type attribute consists of a JSON text consisting of the following components in the order given:
The string "type".
A WebVMT synchronized data attribute consists of a JSON text consisting of the following components in the order given:
The string "data".
A WebVMT synchronized identifier attribute consists of a JSON text consisting of the following components in the order given:
The string "id".
A WebVMT synchronized path attribute consists of a WebVMT path attribute representing a synchronized path identifier.
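As an illustrative sketch, a synchronized data command might attach a sensor sample to a cue; the type name, data payload and identifier shown here are entirely hypothetical:

```
00:05.000 --> 00:06.000
{ "sync": { "type": "org.example.speed",
  "data": { "value": 12.5, "units": "m/s" },
  "id": "sensor1", "path": "cam1" } }
```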
A WebVMT interpolation subcommand consists of a JSON text representing the following JSON object:
The string "interp".
The WebVMT interpolation subcommand refers to the attributes of its parent command. The parent command is the interpolation object.
A WebVMT interpolation parameter list consists of a JSON object consisting of the following components in any order:
A WebVMT interpolation target attribute consists of a JSON text consisting of the following components in the order given:
The string "to".
A WebVMT interpolation target parameter list consists of a JSON object representing the interpolation attributes set to interpolation end values.
Attributes of the interpolation object omitted from a WebVMT interpolation target parameter list are not affected by that subcommand.
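For example, assuming the interpolation subcommand is nested as a member of its parent command's JSON object, a circle's radius might grow from 100 to 500 over the cue interval while its center stays fixed (values are hypothetical):

```
00:00.000 --> 00:10.000
{ "circle": { "lat": 51.0500, "lng": -1.8000, "rad": 100,
  "interp": { "to": { "rad": 500 } } } }
```

The "lat" and "lng" attributes are omitted from the target parameter list, so they are unaffected by the interpolation.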
A WebVMT end time attribute consists of a JSON text consisting of the following components in the order given:
The string "end".
By default, the WebVMT end time attribute is set to the WebVMT cue end time value.
A WebVMT end time attribute represents the time at which a process ends.
A WebVMT duration attribute consists of a JSON text consisting of the following components in the order given:
The string "dur".
A WebVMT duration attribute represents the time interval for which a process lasts and supersedes the default value of the WebVMT end time attribute.
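An illustrative use of the duration attribute, assuming its value is a WebVMT timespan in timestamp format (values are hypothetical):

```
00:00.000 --> 01:00.000
{ "pan-to": { "lat": 51.0600, "lng": -1.7900, "dur": "00:05.000" } }
```

Here the pan completes five seconds after the cue start time, superseding the default end time of 01:00.000.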
A WebVMT timespan is the positive time offset between two WebVMT timestamps and is represented in WebVMT timestamp format.
A WebVMT file whose cues all comply with the following rule is said to be a WebVMT file using only nested cues.
Given any two cues cue1 and cue2 with start and end time offsets (x1, y1) and (x2, y2) respectively:
The following example matches this definition:
WEBVMT

NOTE Required blocks omitted for clarity

00:00.000 --> 01:24.000
{ "circle": { "lat": 0, "lng": 0, "rad": 2000 } }

00:00.000 --> 00:44.000
{ "move-to": { "lat": 0, "lng": 0, "path": "cam1" } }
{ "line-to": { "lat": 0.12, "lng": 0.34, "path": "cam1" } }

00:44.000 --> 01:19.000
{ "line-to": { "lat": 0.56, "lng": 0.78, "path": "cam1" } }

01:24.000 --> 05:00.000
{ "circle": { "lat": 0, "lng": 0, "rad": 30000 } }

01:35.000 --> 03:00.000
{ "move-to": { "lat": 0.87, "lng": 0.65, "path": "cam2" } }
{ "line-to": { "lat": 0.43, "lng": 0.21, "path": "cam2" } }

03:00.000 --> 05:00.000
{ "line-to": { "lat": 0, "lng": 0, "path": "cam2" } }
Notice how you can express the cues in this WebVMT file as a tree structure:
If the file has cues that can’t be expressed in this fashion, then they don’t match the definition of a WebVMT file using only nested cues. For example:
WEBVMT

NOTE Required blocks omitted for clarity

00:00.000 --> 01:00.000
{ "move-to": { "lat": 0.12, "lng": 0.34, "path": "cam3" } }
{ "line-to": { "lat": 0.56, "lng": 0.78, "path": "cam3" } }

00:30.000 --> 01:30.000
{ "move-to": { "lat": 0.87, "lng": 0.65, "path": "cam4" } }
{ "line-to": { "lat": 0.43, "lng": 0.21, "path": "cam4" } }
In this ninety-second example, the two cues partly overlap: the second cue starts before the first ends, and the first ends before the second. This is therefore not a WebVMT file using only nested cues.
WebVMT file parsing is similar to WebVTT parsing, though many of those steps can be skipped as WebVMT files are metadata files.
A WebVMT parser, given an input byte stream, a text track list of cues |output|, and a collection of CSS style sheets |stylesheets|, must decode the byte stream using the UTF-8 decode algorithm, and then must parse the resulting string according to the WebVMT parser algorithm. This results in WebVMT cues being added to |output|, and CSS style sheets being added to |stylesheets|.
A WebVMT parser, specifically its conversion and parsing steps, is typically run asynchronously, with the input byte stream being updated incrementally as the resource is downloaded; this is called an incremental WebVMT parser.
A WebVMT parser verifies a file signature before parsing the provided byte stream. If the stream lacks this WebVMT file signature, then the parser aborts.
The WebVMT parser algorithm is as follows:
If the line is fewer than six characters long, or is exactly six characters long but is not exactly the string "WEBVMT", then abort these steps. The file does not start with the correct WebVMT file signature and was therefore not successfully processed.
If the line is more than six characters long but the first six characters are not exactly the string "WEBVMT", or the seventh character is not a U+0020 SPACE character, a U+0009 CHARACTER TABULATION (tab) character, or a U+000A LINE FEED (LF) character, then abort these steps. The file does not start with the correct WebVMT file signature and was therefore not successfully processed.
When the algorithm above says to collect a WebVMT block, optionally with a flag |in header| set, the user agent must run the following steps:
If |line| contains the substring "-->" (U+002D HYPHEN-MINUS, U+002D HYPHEN-MINUS, U+003E GREATER-THAN SIGN), then run these substeps:
If |buffer| starts with the string "STYLE" (U+0053 LATIN CAPITAL LETTER S, U+0054 LATIN CAPITAL LETTER T, U+0059 LATIN CAPITAL LETTER Y, U+004C LATIN CAPITAL LETTER L, U+0045 LATIN CAPITAL LETTER E), and the remaining characters in |buffer| (if any) are all ASCII whitespace, then run these substeps:
Let |stylesheet| be the result of creating a CSS style sheet, with the following properties:
If |buffer| starts with the string "MAP" (U+004D LATIN CAPITAL LETTER M, U+0041 LATIN CAPITAL LETTER A, U+0050 LATIN CAPITAL LETTER P), and the remaining characters in |buffer| (if any) are all ASCII whitespace, then run these substeps:
If |buffer| starts with the string "MEDIA" (U+004D LATIN CAPITAL LETTER M, U+0045 LATIN CAPITAL LETTER E, U+0044 LATIN CAPITAL LETTER D, U+0049 LATIN CAPITAL LETTER I, U+0041 LATIN CAPITAL LETTER A), and the remaining characters in |buffer| (if any) are all ASCII whitespace, then run these substeps:
When the WebVMT parser algorithm says to collect WebVMT map settings from a string |input| for a text track, the user agent must run the following algorithm.
A WebVMT map object is a conceptual construct to represent a WebVMT map that is used as a root node for WebVMT node objects. This algorithm returns a WebVMT map object.
"lat"
"lng"
"alt"
"rad"
When the WebVMT parser algorithm says to collect WebVMT media settings from a string |input| for a text track, the user agent must run the following algorithm.
A WebVMT media object is a conceptual construct to represent a WebVMT media. This algorithm returns a WebVMT media object.
"url"
"mime-type"
"start-time"
"path"
When the algorithm above says to collect WebVMT cue timings from a string |input| for a WebVMT cue |cue|, the user agent must run the following algorithm.
When this specification says that a user agent is to collect a WebVMT timestamp, the user agent must run the following steps:
This section specifies some CSS pseudo-elements and pseudo-classes and how they apply to WebVMT. This section does not apply to user agents that do not support CSS.
The ::cue pseudo-element represents a cue.
The ::cue(selector) pseudo-element represents a cue or element inside a cue that match the given selector.
Similarly to all other pseudo-elements, these pseudo-elements are not directly present in the <video> or <audio> element’s document tree.
A WebVMT node object is a conceptual construct used to represent components of cue metadata so that its processing can be described without reference to the underlying syntax.
Pseudo-elements apply to elements that are matched by selectors. For the purpose of this section, that element is the matched element. The pseudo-elements defined in the following sections affect the styling of parts of WebVMT cues that are being rendered for the matched element.
A CSS user agent that implements the text tracks model must implement the ::cue and ::cue(selector) pseudo-elements.
The ::cue pseudo-element (with no argument) matches any WebVMT node objects constructed for the matched element.
The following properties apply to the ::cue pseudo-element with no argument; other properties set on the pseudo-element must be ignored:
The ::cue(selector) pseudo-element with an argument must have an argument that consists of a CSS selector. It matches any WebVMT node object constructed for the matched element that also matches the given CSS selector.
The following properties apply to the ::cue() pseudo-element with an argument:
Properties that do not apply must be ignored.
For the purpose of determining the cascade of the declarations in STYLE blocks of a WebVMT file, the relative order of appearance of the style sheets must be the same order as they were added to the collection, and the order of appearance of the collection must be after any style sheets that apply to the associated <video> or <audio> element’s document.
This section captures issues which have been identified, but are not yet fully documented.
As the specification develops, issues will be moved out of this section and included elsewhere in the document, until it is no longer needed and is completely removed.
This section lists potential features which have been identified during the development process, but have not yet matured to a full design specification.
Features which appear in this section warrant further investigation, but are not guaranteed to appear in the final specification.
An image linked to and displayed at an offset from a geolocation.
A text string linked to and displayed at an offset from a geolocation.
Shortcuts to popular tile URLs for easy access and to help avoid URL syntax errors.
Syntax to allow more than one layer of map tiles to be specified, e.g. 'map' and 'satellite' layers.
This should be functional, but remain lightweight.
The current tech demo is based on the Leaflet API, but should be broadened to support other web map APIs, e.g. OpenLayers.
A hot-swap feature would allow users to switch API on-the-fly to take advantage of the unique features supported by different APIs, e.g. Street View.
Camera orientation may not match the direction of travel, or may be dynamic, e.g. for Augmented Reality. Field of view and zoom level also affect video frame content and may vary over time.
Although originally conceived for Earth-based use, spatial data in other environments could be accommodated by specifying the co-ordinate reference system, for example a location on another planet, e.g. Mars, or in an artificial environment, e.g. a video game.
WebVMT paths represent objects moving through the mapped space, though could be extended to support properties associated with motion such as distance travelled, speed, heading, etc. through a defined API.
WebVMT zones represent regions in the mapped space, which could be extended to support WebVMT path properties for their centroid's motion and include dynamic properties such as area and volume.
Care should be taken to build a lightweight interface which includes simple, common properties that are useful to most use cases and avoids overloading with unnecessary edge cases, processing overheads and complexity.
In addition to height above the WGS84 ellipsoid, an option could be added to measure altitude from mean sea level, e.g. for an aircraft, using a suitable Earth Gravitational Model (EGM) or from ground level, e.g. for the height of a structure.
This section lists interfaces which have been identified during the development process, but have not yet matured to a full design specification.
Expose WebVMT cues in the DOM API, based on the DataCue API proposed in WICG.
This is analogous to the VTTCue interface.
Expose a WebVMT map in the DOM API.
[Exposed=Window]
interface VMTMap {
  constructor(double centerLatitude, double centerLongitude, optional double centerAltitude, double zoomRadius);
  attribute double centerLatitude;
  attribute double centerLongitude;
  attribute double centerAltitude;
  attribute double zoomRadius;
  object getMap();
};
This is analogous to the VTTRegion interface.
text/vmt
This registration is for community review and will be submitted to the IESG for review, approval, and registration with IANA.
(An optional UTF-8 BOM, the ASCII string "WEBVMT", and finally a space, tab, line break, or the end of the file.)
Fragment identifiers have no meaning with text/vmt resources.
As with any text-based format, it is possible to construct malicious content that might cause buffer over-runs, value overflows (e.g. string representations of integers that overflow a given word length), and the like. Implementers should take care that over-long lines, field values, or encoded values do not cause security problems in their parsers.
Comments
Comments are blocks that are preceded by a blank line, start with the word NOTE (followed by a space or newline), and end at the first blank line.
Comment Block
Comment block format is identical to WebVTT.