Web of Things (WoT): Use Cases and Requirements

W3C Interest Group Note

This version:
https://www.w3.org/TR/2021/NOTE-wot-usecases-20210518/
Latest published version:
https://www.w3.org/TR/wot-usecases/
Latest editor's draft:
https://w3c.github.io/wot-usecases/
Editors:
Michael Lagally (Oracle Corp.)
Michael McCool (Intel Corp.)
Ryuichi Matsukura (Fujitsu Ltd.)
Tomoaki Mizushima (Internet Research Institute, Inc.)
Contributors
In the GitHub repository

Abstract

The Web of Things is applicable to multiple IoT domains, including Smart Home, Industrial, Smart City, Retail, and Health applications, where usage of the W3C WoT standards can simplify the development of IoT systems that combine devices from multiple vendors and ecosystems. During the last charter period of the WoT Working Group several specifications were developed to address requirements for these domains.

This Use Cases and Requirements document collects new IoT use cases from multiple domains, contributed by various stakeholders. These use cases serve as a baseline for identifying requirements for the standardization work in the W3C WoT groups.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.

This document was published by the Web of Things Interest Group as an Interest Group Note.

GitHub Issues are preferred for discussion of this specification.

Publication as an Interest Group Note does not imply endorsement by the W3C Membership.

This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

The disclosure obligations of the Participants of this group are described in the charter.

This document is governed by the 15 September 2020 W3C Process Document.

1. Introduction

In May 2020, the World Wide Web Consortium (W3C) published the Web of Things (WoT) Architecture and the Web of Things (WoT) Thing Description (TD) as official W3C Recommendations. These specifications enable easy integration across Internet of Things platforms and applications.

The W3C Web of Things (WoT) Architecture [wot-architecture] defines an abstract architecture, while the WoT Thing Description [wot-thing-description] defines a format to describe a broad spectrum of very different devices, which may be connected over various protocols.

During the inception phase of the WoT 1.0 specifications in 2017-2018, the WoT IG collected use cases and requirements to enable interoperability of Internet of Things (IoT) services on a worldwide basis. The released specifications address the use cases and requirements of that first version, which are documented at https://w3c.github.io/wot/ucr-doc/.

The present document gathers and describes new use cases and requirements for future standardization work on the Web of Things.

This document contains chapters describing the use cases contributed by multiple authors, as well as functional and technical requirements on the Web of Things standards. Additionally, it summarizes the liaisons where active collaboration is taking place at the time of writing. Since this document is an Interest Group Note, additional use cases will be added in future revisions.

1.1 Domains

The collection of use cases can be separated into two categories:

Domain-specific use cases are described in § 2. Domain specific Use Cases; horizontal use cases that span multiple domains are described in § 3. Use Cases for multiple domains.

1.2 Terminology, Stakeholders and Roles

1.2.1 Terminology

The present document uses the terminology from WoT Architecture [wot-architecture].

1.2.2 Stakeholders and Roles

The following stakeholders and actors were identified while the use cases were collected and the requirements were derived. Note that these stakeholders and roles may overlap in some use cases.

2. Domain specific Use Cases

2.1 Smart Agriculture

2.1.1 Greenhouse Horticulture

Submitter(s)
Ryuichi Matsukura, Takuki Kamiya
Target Users
Agricultural corporation, Farmer, Manufacturers (Sensor, other facilities), Cloud provider
Motivation
Greenhouse horticulture controlled by computers can create an optimal environment for growing plants. This makes it possible to improve productivity and to ensure stable vegetable production throughout the year, independent of the weather. It builds on research on plant growth carried out since the 1980s. For tomatoes, for example, switching to hydroponics and optimizing the temperature, humidity and CO2 concentration required for photosynthesis resulted in a fivefold increase in yield. The growth conditions for other vegetables have also been investigated, and such control systems are now applied in practice.
Expected Devices
Sensors (temperature, humidity, brightness, UV brightness, air pressure, and CO2); heater, CO2 generator, and a sunlight shielding sheet that can be opened and closed.
Expected Data
Sensor values to clarify the gap between the conditions that maximize photosynthesis and the current environment. The following sensor values are measured at one or more points in the greenhouse: temperature, humidity, brightness, and CO2.
Dependencies
WoT Architecture
WoT Thing Description
Description
Sensors and facilities such as the heater, CO2 generator, and sheet controllers are connected to a gateway via wired or wireless networks. The gateway is connected to the cloud via the Internet. All sensors and facilities can be accessed and controlled from the cloud. To maximize photosynthesis, mainly the temperature, CO2 concentration, and humidity in the greenhouse are controlled. When sunlight comes in the morning and the CO2 concentration inside decreases, the application turns on the CO2 generator to keep the concentration above 400 ppm, the same as the outside air. The temperature in the greenhouse is adjusted by controlling the heater and the sunlight shielding sheet. The cloud gathers all sensor data and the status of the facilities. The application derives the best configuration for the region in which the greenhouse is located.
Gaps
In the case of a wireless connection to the sensors, the gateway should keep the latest value of each sensor, since the wireless connection is sometimes broken. The gateway can create a virtual entity corresponding to the sensor and allow the application to access this virtual entity, which also exposes the actual sensor status, such as sleeping.
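A minimal Thing Description sketch of such a virtual sensor entity exposed by the gateway is shown below. All identifiers, property names and URLs (e.g. co2Concentration, gateway.example.com) are hypothetical and only illustrate the idea; the status property carries the cached device state, such as "sleeping", discussed above.

{
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "id": "urn:example:greenhouse:co2-sensor-1",
    "title": "VirtualCO2Sensor",
    "description": "Virtual entity for a wireless CO2 sensor, cached by the greenhouse gateway",
    "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
    "security": ["nosec_sc"],
    "properties": {
        "co2Concentration": {
            "type": "number",
            "unit": "ppm",
            "readOnly": true,
            "description": "Latest CO2 concentration reported by the sensor (cached by the gateway)",
            "forms": [{ "href": "https://gateway.example.com/co2-sensor-1/co2", "op": "readproperty" }]
        },
        "status": {
            "type": "string",
            "enum": ["awake", "sleeping", "disconnected"],
            "readOnly": true,
            "description": "Actual device state as known to the gateway",
            "forms": [{ "href": "https://gateway.example.com/co2-sensor-1/status", "op": "readproperty" }]
        }
    }
}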

2.1.2 Open-field agriculture

Submitter(s)
Cristiano Aguzzi
Target Users
Agricultural corporation, Farmer, Manufacturers (Sensor, other facilities), Cloud provider, Middleware provider, Network providers, service provider.
Motivation
Water is vital for ensuring food security for the world's population, and agriculture is the biggest consumer, accounting for 70% of freshwater use. Field irrigation methods are one of the main causes of water wastage. The most common technique, surface irrigation, wastes a high percentage of the water by wetting areas where no plants benefit from it. On the other hand, localized irrigation can use water more efficiently and effectively, avoiding both under-irrigation and over-irrigation. However, in an attempt to avoid under-irrigation, farmers often supply more water than is needed, resulting not only in productivity losses but also in water wastage. Therefore, technology should be developed and deployed for sensing water needs and automatically managing the water supply to crops. However, open-field agriculture is characterized by a quite dynamic range of requirements. Usually, solutions developed for one particular crop type cannot be reused for other cultivations. Moreover, the same field can have different crop types or different sizes/shapes over the years, meaning that technology to monitor the state of crop growth should be highly configurable and adaptive. Agriculture and irrigation methods can also change, and they differ considerably depending on the size of the field and its climate. Consequently, siloed applications are deployed that leverage IoT technologies to gather data about the crop growth state and irrigation needs. The Web of Things may help to create a single platform where cost-effective applications can adapt seamlessly between different scenarios, breaking the silos and giving value both to the environment and the market.
Expected Devices

Sensors:

  • Weather sensors (maybe collected together inside a weather station)
    • temperature
    • air humidity
    • air pressure
    • pluviometer
    • global solar radiation
    • anemometer (wind speed)
    • wind direction
    • global solar radiation and photosynthetically active radiation
    • gas/air quality sensor (i.e. CO2)
  • Soil sensors (usually packed together in soil probes)
    • soil temperature
    • soil moisture/water content
    • soil conductivity (detecting salt levels in the soil)
    • water table sensor
  • Drone sensors
    • camera
    • temperature sensitive camera
    • multispectral camera

Actuators:

  • drones: used for data collection, pesticide application, or pollination
  • sprinklers
  • pumps
  • central pivot sprinklers
  • hose-reel irrigation machine

Additional devices:

  • Solar panels
  • Loggers: units that collect data from nearby sensors.
  • Gateways
Expected Data
Sensor data plays a central role in Smart Agriculture. In particular, it is critical that the sensed information is associated with a timestamp. Common algorithms use *time series* to calculate the water needs of a crop. Furthermore, soil sensors usually are calibrated for a specific soil type (which may differ even within the same geographic region). For example, the calibration data for a soil moisture sensor is represented by a function that maps the sensor output to soil water content. In the literature, this function is known as a *calibration curve*. Commercial sensors are precalibrated with a "standard" curve, but on most occasions it fails to accurately measure the water content. Therefore, it can be reconfigured during the installation phase (which may happen every time the soil is plowed). Finally, a crucial aspect is forecasting. Farmers use this information to actively change their management procedures. Services exploit it to suggest irrigation schedules or to change device settings to adapt to environmental changes. To summarize, the most important expected data from open-field agriculture are listed below; a hypothetical Thing Description fragment illustrating some of these annotations follows the list:
  • Calibration curve
  • Time series
  • Forecast data
  • Geolocations: sensor data must be contextualized in geolocation. Also, geolocation is critical in massive open fields to localize instrument position.
  • Weather data
  • Unit of measure: commercial soil sensors may output their values in different units of measure (e.g. volts or % of water in a m^3 of soil)
  • Relative values
  • Depth position: geolocation is not sufficient to describe the parameters of the soil. Depth is an additional context that should be added to an observed value.
  • Device owner information
  • Battery level and energy consumption
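A minimal sketch of how a soil probe might annotate some of this data in a Thing Description is shown below. The ex: terms (ex:installationDepth, ex:calibrationCurve) and all URLs are hypothetical context extensions, not standardized vocabulary; they only illustrate the kind of metadata whose standardization is discussed under Gaps.

{
    "@context": [
        "https://www.w3.org/2019/wot/td/v1",
        { "ex": "https://example.com/agri-vocabulary#" }
    ],
    "id": "urn:example:field:soil-probe-7",
    "title": "SoilProbe",
    "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
    "security": ["nosec_sc"],
    "properties": {
        "soilMoisture": {
            "type": "number",
            "unit": "ex:percentVolumetricWaterContent",
            "ex:installationDepth": { "value": 30, "unit": "cm" },
            "ex:calibrationCurve": "https://example.com/probes/7/calibration",
            "readOnly": true,
            "forms": [{ "href": "https://logger.example.com/probe-7/moisture", "op": "readproperty" }]
        }
    }
}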
Dependencies
WoT Architecture, WoT Thing Description
Description
In open-field agriculture, IoT solutions leverage different radio protocols and devices. Usually, the radio protocols have to cover long distances (even kilometers) and be energy efficient. Devices also need to save energy, as they are deployed for months and sometimes even years in harsh environments. A sleep cycle is one mechanism they use to save energy, usually coordinated by *loggers/gateways* or preprogrammed. *Loggers* are deployed close to the sensor devices and have more storage space. They serve as buffers between sensors and higher-level services. Often *loggers* and sensors are embedded on the same board; otherwise, they are connected using cables or short-range radio protocols. *Gateways*, on the other hand, serve as a collection point for the data of an entire field or farm. They are much more capable devices and usually consume more energy. In some deployment scenarios, they host a full operating system with multiple software facilities installed. Otherwise, gateways only serve as relays for data sent from the loggers and sensors to cloud services and vice versa. The cloud services may be partially hosted on edge servers to preserve data privacy and the responsiveness of the whole IoT solution. Possible cloud services are:
  • Weather forecasting/local weather forecasting
  • Soil digital twin to simulate and predict water content
  • Plant digital twin (growth and water needs prediction)
  • Irrigation advice service: by combining the previous services and knowing the irrigation system topology, it is possible to advise farmers on the best times to irrigate a crop.
  • Pesticide and fertilizer planning
The complete deployment topology of an open field agriculture solution is described in the diagram below:

deployment topology of an open field agriculture solution


Variants
Open-field agriculture varies a lot between geographical locations and methods. For example, in the SWAMP project there are three different pilots with different requirements/constraints:
  • Italian pilot (Reggio Emilia region):
    • Relatively small field size
    • Multiple connectivity solutions available: 4G, LPWAN, and WiFi
    • Variance in crop types, sometimes even inside the same farm
    • Small soil type variance
    • Precise modeling of soil behavior
    • A great influence of the water table
    • Variance in the irrigation system
    • Channel-based water distribution
    • The main goal is to optimize water consumption
  • Brazilian pilot (Matopiba and Guaspari locations):
    • Huge field size
    • Central pivot irrigation systems: need to optimize each sprinkler's output
    • Soil type variance within the same field
    • A low number of connectivity options: no 4G, only radio communication based on LPWAN
    • Low crop type variance
    • The main goal is to optimize energy consumption
  • Spanish pilot:
    • Efficient localized irrigation and application of the right amount of water to the crop
    • Arid location
    • The goal is to minimize water consumption while maintaining a good field yield.
Gaps
  • Currently, there is no specification on how to model device status (i.e. connected/disconnected).
  • Examples of how to handle a device calibration phase may help developers to use a standardized approach.
  • Possibly define standard link types to describe the relation between loggers and sensors.
  • Handle both geographical position and depth information.
  • Ontology class for battery and energy consumption.
  • Model historical and forecast data.
Existing Standards
Comments
This use case is designed using the experience gained in the European-Brazilian Horizon 2020 SWAMP project. Please follow the link for further information. Since SWAMP is heavily oriented towards optimizing water consumption, this document only mentions issues like plant feeding, fertilizing, pollination, yield prediction, crop quality measurement, etc. Nevertheless, WoT technologies may also be employed in these scenarios.

2.1.3 Irrigation in outdoor environment

Submitter(s)
  • Catherine Roussey
  • Jean-Pierre Chanet
Target Users
  • device users: farmers
  • service provider
Motivation
Depending on the type of crop (e.g. maize), cultivated plots may need specific irrigation processes in outdoor environments. Depending on the country, there are specific pedo-climatic conditions and water consumption restrictions. Thus an irrigation system is installed on the plot. It is used every several days (e.g. every 7 days) for each plot. The goal is to optimize the irrigation decision based on the crop development stage and the quantity of rain that has already fallen on the plot. For example, heavy rain may postpone the irrigation decision.

This use case aims to evaluate the number of days by which to delay irrigation, in addition to the basic irrigation frequency (e.g. with a 7-day basic frequency, 2 delay days means 9 days between two irrigations).
Expected Devices
  • 6 tensiometers in the plot (soil moisture):
    • 3 tensiometers at 30 cm depth
    • 3 tensiometers at 60 cm depth
  • 1 weather station:
    • thermometer (outdoor temperature)
    • pluviometer (rain quantity)
  • 1 mobile pluviometer (quantity of water provided by the watering system)
Expected Data
To decide when to water a cultivated plot, we evaluate the crop growth stage, the root zone moisture level and the number of delay days:
  • To evaluate the Crop growth stage, we need:
    • Min and max temperature per day: the min temperature per day is evaluated over the period [d-1 18:00, d 18:00[. The max temperature per day is evaluated over the period [d 06:00, d+1 06:00[.
    • Growing degree day values use the min and max temperature per day, the sowing day and the type of seed. The growing degree day value is compared to thresholds to evaluate the crop growth stage.
  • To evaluate the Root zone moisture level, we need:
    • Mean moisture per day per probe: in order to get reliable values, each tensiometer sends several measurements of soil moisture, at fixed hours of the day (usually in the morning), that are aggregated; their mean value is considered
    • For the set of 3 tensiometers located at the same depth, the median value is computed from their daily mean moisture measurements. One tensiometer may not provide accurate values (if the soil around the probe is too dry, the soil matter is not in contact with the probe). The median value of three different tensiometers at the same depth improves the accuracy of the moisture measurement.
    • Then the sum of the two median values at two different depths is evaluated, to take into account the quantity of water available in the root zone volume. This aggregated value estimates the root zone moisture level.
    • The root zone moisture level is compared to some thresholds (dependent on the crop growth stage) to evaluate if the crop needs water or not at the end of the basis irrigation period.
  • To determine the number of delay days, we need:
    • The time period between two waterings of the same plot is dependent on the farm and known by the farmer. When a watering is launched, no new watering should be planned during the basic irrigation frequency. The quantity of rain that falls down on the plot may postpone the watering plan. The total quantity of rain per day is compared to some thresholds to determine the number of delay days.
The mobile pluviometer is used to validate that the quantity of water received by the crop actually corresponds to the quantity of water provided by the watering system.

In the end, the farmer may decide whether or not to follow the irrigation recommendations. They could force the watering for one of the following days.
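As an illustration of the aggregation described above, a hypothetical daily summary record computed by the farm service might look as follows. All field names and values are purely illustrative and not defined by any WoT deliverable or by the IRRINOV® method.

{
    "date": "2020-07-15",
    "minTemperaturePerDay": 14.2,
    "maxTemperaturePerDay": 31.5,
    "growingDegreeDays": 412.0,
    "cropGrowthStage": "flowering",
    "medianMoistureAt30cm": 62,
    "medianMoistureAt60cm": 41,
    "rootZoneMoistureLevel": 103,
    "rainQuantityPerDay": 12.5,
    "delayDays": 1,
    "irrigationRecommended": false
}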
Affected WoT deliverables and/or work items
  • WoT Architecture: wireless communication in outdoor environments presents some issues: communication consumes lots of energy, sensor nodes have limited energy, weather conditions impact communication quality
  • WoT Thing Description: the affordance should be precise enough to describe the soil at a specific depth or the root zone volume or the min temperature per day
Description
To avoid property rights and consent management issues between farmers and cloud service providers concerning these computed data, the sensors are connected to the farm infrastructure, and the services that evaluate the aggregated data are executed locally on this infrastructure.

The weather station may be located outside of the farm.

The tensiometers are located inside the farm. The tensiometers and the mobile pluviometer are connected using wireless communication to the gateway. The gateway sends the measurements to the farm infrastructure.
Variants:
The crop growth stage may be observed by the farmer. In this case, they can force this value to update the service inputs.
Security Considerations
The 6 tensiometers and 1 pluviometer are installed on the plot, but only the farmer should be able to change their configurations (frequency of communication). Wireless communication should be used but the measurement data should only be accessible through the farm network infrastructure.
Privacy Considerations
Data concerning quantity of water, type of seed, sowing day should be protected.
Gaps
The main potential issues come from tensiometers located in the plot, as they are known to be cheap and easy to use probes but not always reliable. They can face multiple issues: if the soil gets too dry or the probe is improperly installed, there may be air between the probe and the soil, therefore preventing the probe from providing accurate conductivity measurements.

To be sure of the quality of those measurements, each tensiometer sends its measurements several times (3 to 5) per day. A tensiometer may send an inappropriate value due to a bad connection between the soil and the probe; that is the reason why three tensiometers are used and the median value is computed. If the gateway does not receive the value of one sensor during a whole day, an alert should be sent. To take an irrigation decision, at least one measurement per sensor and per day should be provided.

The gateway can create a virtual entity corresponding to the sensor and allow the application to access this virtual entity having the actual sensor status like sleeping.

Sensor nodes deployed in outdoor environments need to take into account that their energy supply (battery, solar panel) constrains the lifetime of the device. Thus they should be able to signal that they may not be able to provide a service due to lack of energy, or they should be able to change their configuration and switch communication protocols to save as much energy as possible.

Moreover, wireless communication can be impacted by weather conditions or other outdoor conditions. For example, a tractor that comes too close to the sensor node may move the communication device and destroy some components. Some kind of network supervision must be provided (for instance by the gateway) to check node availability.
Existing Standards
The CASO and IRRIG ontologies extend SSN, PROV-O and SAREF4AGRI to implement an irrigation expert system.

A climate and forecast thesaurus that describes the weather properties and associated phenomena is available at http://vocab.nerc.ac.uk/collection/P07/.

The weather measurements provided by the agricultural weather station of Agrotechnopole are available at http://ontology.irstea.fr/weather/snorql/. [5]
Comments
This use case has been implemented in France, following local conditions and regulations. There is an open manual irrigation decision method called IRRINOV®, developed by Arvalis [2] and INRAE [1], dedicated to France and some specific crops: maize, wheat and other cereals, potatoes and beans.

IRRINOV® can be automated using wireless sensor networks and semantic web technologies. The considered network is of star type: all sensors can communicate with a common gateway, which is connected to the Internet. The IRRINOV® implementation was developed in [3]. This work presents an expert system for maize using Drools. It automates the irrigation decision for maize based on sensor measurements.

To measure weather properties, we use the recommendation provided by the French National Weather Institute: Météo France[4]. Its web site defines how to evaluate the min and max temperatures per day in http://www.meteofrance.fr/publications/glossaire/154123-temperature-minimale (in French, we found no equivalent description in English).
References
[1] https://www.inrae.fr/
[2] https://www.arvalisinstitutduvegetal.fr/
[3] Q.-D. Nguyen, C. Roussey, M. Poveda-Villalón, C. de Vaulx, J.-P. Chanet. Development Experience of a Context-Aware System for Smart Irrigation Using CASO and IRRIG Ontologies. Applied Sciences 2020, 10(5), 1803; https://doi.org/10.3390/app10051803
[4] http://www.meteofrance.fr/
[5] C. Roussey, S. Bernard, G. André, D. Boffety. Weather Data Publication on the LOD Using SOSA/SSN Ontology. Semantic Web Journal, 2019. http://www.semantic-web-journal.net/content/weather-data-publication-lod-using-sosassn-ontology-0

2.2 Smart City

2.2.1 Geolocation

Submitter(s)
Jennifer Lin, Michael McCool
Target Users

A Smart City managing mobile devices and sensors, including passively mobile sensor packs, packages, vehicles, and autonomous robots, whose location needs to be determined dynamically.

Motivation

Smart Cities need to track a large number of mobile devices and sensors. Location information may be integrated with a logistics or fleet management system. A reusable geolocation module is needed with a common network interface to include in these various applications. For outdoor applications, GPS could be used, but indoors other geolocation technologies might be used, such as WiFi triangulation or vision-based navigation (SLAM). Therefore the geolocation information should be technology-agnostic.

NOTE: we prefer the term "geolocation", even indoors, over "localization" to avoid confusion with language localization.

Expected Devices

One of the following:

  • A geolocation system on a personal device, such as a smart phone.
  • A geolocation system to be attached to some other portable device.
  • A geolocation system attached to a mobile vehicle.
  • A geolocation system on a payload transported by a vehicle.
  • A geolocation system on an indoor mobile robot.
Expected Data
  • Sensor ID
  • Timestamp of last geolocation
  • 2D location
    • typically latitude and longitude
    • May also be semantic, e.g. a room in a building or an exit
Optional:
  • Semantic location
    • Possibly in addition to numerical lat/long location.
  • Altitude
    • May also be semantic, e.g. the floor of a building
  • Heading
  • Speed
  • Accuracy information
    • Confidence interval, e.g. a distance within which the true location lies with some probability.
    • Gaussian covariance matrix
    • For each measurement
    • For lat/long, may be a single value (see web browser API; radius?)
  • Geolocation technology (GPS, SLAM, etc.).
    • Note that multiple technologies might be used together.
    • Include parameters such as sample interval, accuracy
  • For each geolocation technology, data specific to that technology:
    • GPS: NMEA type
  • Historical data

Note: the system should be capable of notifying consumers of changes in location. This may be used to implement geofencing by some other system. This may require additional parameters, such as the maximum distance that the device may be moved before a notification is sent, or the maximum amount of time between updates. Notifications may be sent by a variety of means, some of which may not be traditional push mechanisms (for example, email might be used). For geofencing applications, it is not necessary that the device be aware of the fence boundaries; these can be managed by a separate system.
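A minimal Thing Description sketch of a geolocation property covering the data listed above is shown below. The property and field names (geolocation, latitude, accuracy, etc.) and all URLs are hypothetical, since, as discussed under Gaps, no single standardized vocabulary exists yet; the observeproperty form illustrates the change notifications mentioned in the note above.

{
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "id": "urn:example:city:asset-tracker-42",
    "title": "AssetTracker",
    "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
    "security": ["nosec_sc"],
    "properties": {
        "geolocation": {
            "type": "object",
            "readOnly": true,
            "properties": {
                "latitude": { "type": "number", "minimum": -90, "maximum": 90 },
                "longitude": { "type": "number", "minimum": -180, "maximum": 180 },
                "altitude": { "type": "number", "unit": "m" },
                "accuracy": { "type": "number", "unit": "m" },
                "timestamp": { "type": "string" },
                "technology": { "type": "string", "enum": ["gps", "wifi", "slam"] }
            },
            "forms": [
                { "href": "https://tracker.example.com/geolocation", "op": "readproperty" },
                { "href": "https://tracker.example.com/geolocation", "op": "observeproperty", "subprotocol": "longpoll" }
            ]
        }
    }
}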

Dependencies
node-wot
Description

Smart Cities need to observe the physical locations of a large number of mobile devices in use in the context of a Fleet or Logistics Management System, or to place sensor data on a map in a Dashboard application. These systems may also include geofencing notifications and mapping (visual tracking) capabilities.

Variants
  • A version of the system may log historical data so the past locations of the devices can be recovered.
  • Geolocation technologies other than GPS may be used. The payload may contain additional information specific to the geolocation technology used. In particular, in indoor situations technologies such as WiFi triangulation or (V)SLAM may be more appropriate.
  • Geofencing may be implemented using event notifications and will require setting of additional parameters such as maximum distance.
Security Considerations

High-resolution timestamps can be used in conjunction with cache manipulation to access protected regions of memory, as with the SPECTRE exploit. Certain geolocation APIs and technologies can return high-resolution timestamps which can be a potential problem. Eventually these issues will be addressed in cache architecture but in the meantime a workaround is to artificially limit the resolution of timestamps.

Privacy Considerations

Location is generally considered private information when it is used with a device that may be associated with a specific person, such as a phone or vehicle, as it can be used to track that person and infer their activities or who they associate with (if multiple people are being tracked at once). Therefore APIs to access geographic location in sensitive contexts are often restricted, and access is allowed only after confirming permission from the user.

Gaps

There is no single standardized semantic vocabulary for representing location data. Location data can be point data, a path, an area or a volumetric object. Location information can be expressed using multiple standards, but the reader of location data in a TD or in data returned by an IoT device must be able to interpret the location information unambiguously.

There are both dynamic (data returned by a mobile sensor) and static (fixed installation location) applications for geolocation data. For dynamic location data, some recommended vocabulary to annotate data schemas would be useful. For static location data, a standard format for metadata to be included in a TD itself would be useful.

Existing Standards
  • NMEA: defines sentences from GPS devices
  • WGS84:
    • World Geodetic System
    • Defines lat/long/alt coordinate system used by most other geolocation standards
    • More complicated than one might think (it needs to deal with deviations of the Earth from a true sphere, gravitational irregularities, position of the centroid, etc.)
  • Basic Geo Vocabulary:
    • Very basic RDF definitions for lat, long, and alt
    • Does not define heading or speed
    • Does not define accuracy
    • Does not define timestamps
    • Uses string as a data model (rather than a number)
  • W3C Geolocation API:
    • W3C Devices and Sensors WG is now handling
    • There is an updated proposal: https://w3c.github.io/geolocation-sensor/#geolocationsensor-interface
    • Data schema of updated proposal is similar to existing API, but all elements are now optional
    • Data includes latitude, longitude, altitude, heading, and speed
    • Accuracy is included for latitude/longitude (single number in meters, 95% confidence, interpretation a little ambiguous, but probably intended to be a radius) and altitude, but not for heading or speed.
  • Open Geospatial Consortium:
  • ISO
  • SSN:
  • Timestamps:

Note that accuracy and time are issues that apply to all kinds of sensors, not just geolocation. However, the specific geolocation technology of GPS is special since it is also a source of accurate time.

2.2.2 Dashboard

Submitter(s)
Michael McCool
Target Users

A Smart City managing a large number of devices whose data needs to be visualized and understood in context.

Stakeholders include:

  • device owners: need to make data from devices available to dashboard system.
  • device user: users of the dashboard system, such as members of city management, are indirectly "using" the devices by accessing their data, and in one variant, sending commands to actuators.
  • cloud provider: the dashboard system itself or components of it (such as a database or data ingestion system) may be hosted in the cloud.
Motivation

In order to facilitate Smart City planning and decision-making, a Smart City dashboard interface makes it possible for city management to view and visualize all sensor data through the entire city in real time, with data identified as to geographic source location.

Expected Devices

Actuators can include robots; for these, commands might be given to robots to move to new locations, drop off or pick up sensor packages, etc. However, it could also include other kinds of actuators, such as flood gates, traffic signals, lights, signs, etc. For example, posting a public message on an electronic billboard might be one task possible through the dashboard.

Sensors can include those for the environment and for people and traffic management (density counts, thermal cameras, car speeds, etc.). The dashboard would present the status of robots, other actuators, and sensors, and provide data visualization and (optionally) historical comparisons.

The dashboard would also include mapping functionality. Mapping implies a need for location data for every actuator and sensor, which could be acquired through geolocation sensors (e.g. GPS) or assigned statically during installation.

This use case also includes images from cameras and real-time image and data streaming.

Expected Data
  • Environmental data for temperature, humidity, UV levels, pollution levels, etc.
  • Infrastructure status (water flow, electrical grid, road integrity, etc.)
  • Emergency sensing (flooding, earthquake, fire, etc.)
  • Traffic (both people and vehicles)
  • Health monitoring (e.g. fever tracking, mask detection, social distancing)
  • Safety monitoring (e.g. wearing construction helmets on a construction site)
  • Reports from non-IoT sources (for example, police reports of crimes, hospital emergency case reports)
  • Images and data derived from images (people traffic and density can be derived from image analysis)
All data would need an associated geolocation and timestamp so it can be placed in time and space.
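For illustration, a single normalized observation with the required geolocation and timestamp might look as follows. The record schema and all values are hypothetical and only indicate the kind of data the dashboard would ingest.

{
    "sensorId": "urn:example:city:air-quality-17",
    "observedProperty": "pm25",
    "value": 12.4,
    "unit": "ug/m3",
    "timestamp": "2021-03-02T14:05:00Z",
    "geolocation": { "latitude": 35.6581, "longitude": 139.7414 }
}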
Affected WoT deliverables and/or work items
  • Thing description - support for data ingestion and normalization, geolocation and timestamp standards.
  • Discovery - directories capable of tracking and managing a large number of devices on a large and possibly segmented network
Description

Data from a large number and wide variety of sensors needs to be integrated into a single database and normalized, then placed in time and space, and finally visualized.

The user, a member of city management responsible for making planning decisions, sees data visualized on a map suitable for planning decisions.

Variants:

  • Historical data may also be available (allowing an analysis of trends over time).
  • It may be possible to also issue commands to actuators through the interface.
  • The system may be used for emergency response (for instance, closing floodgates in response to an expected tsunami)
  • A subset of the data visualization capabilities may be made available to the public (for example, traffic)
  • Filtering based on parameters such as location (area, state, county, country, zip code, etc.), sensor type, subject matter, etc.
  • Ability to generate alerts based on various parameters
  • Ability to produce logs of historical data
Security Considerations
  • Access to data should only be provided to authorized users, although some may be made available publicly
  • Access to actuators should only be provided to authorized users, and commands should be recorded for auditing.
Privacy Considerations
  • Management of privacy-sensitive information, for example images of people, should be controlled and ideally not associated with specific individuals
  • Data that can be used to track movements of particular individuals should be controlled or eliminated.
  • Data purge functions should be supported to allow the permanent deletion of private data.
Gaps
  • Geolocation data standards
  • Timestamp data standards
  • Scalable Discovery

2.2.3 Interactive Public Spaces

Submitter(s)
Michael McCool
Category
Accessibility
Motivation
Public spaces provide many opportunities for engaging, social and fun interaction. At the same time, preserving privacy while sharing tasks and activities with other people is a major issue in ambient systems. These systems may also deliver personalized information in combination with more general services presented publicly. Trustworthy discovery of the services and devices available in such environments is necessary to guarantee personalization and privacy in public-space applications.
Expected Devices
Public spaces supporting personalizable services and device access.
Expected Data
Command and status information transferred between the personal mobile device application and the public space's services and devices.

Profile data for user preferences.
Dependencies
  • WoT Thing Description
  • WoT Discovery
Optional:
  • WoT Scripting API in application on mobile personal device and possibly in IoT orchestration services in the public space.
Description
Interactive installations such as touch-sensitive or gesture-tracking billboards may be set up in public places. Objects that present public information (e.g. a map of a shopping mall) can use a multimodal interface (built-in or in tandem with the user's mobile devices) to simplify user interaction and provide faster access. Other setups can stimulate social activities, allowing multiple people to enter an interaction simultaneously to work together towards a certain goal (for a prize) or just for fun (e.g. play a musical instrument or control a lighting exhibition). In a context where privacy is an issue (for example, with targeted/personalized alerts or advertisements), the user's mobile device acts as a mediator for the services running on the public network. This allows the user to receive relevant information in the way they see fit. Notifications can serve as triggers for interaction with public devices and services if the user chooses to do so.
Variants
The user may have additional mobile devices they want to incorporate into an interaction, for example a headset acting as an auditory aid or personal speech output device.
Gaps
Data format describing user interface preferences.
Existing Standards
This use case is based on MMI UC 3.1.
Comments
Does not include Requirements section from original MMI use case.

2.2.4 Meeting Room Event Assistance

Submitter(s)
Michael McCool
Category
Accessibility
Expected Devices
Meeting space supporting personalizable services and device access.
Expected Data
Command and status information transferred between the personal mobile device application and the meeting space's services and devices. Profile data for user preferences.
Dependencies
  • WoT Thing Description
  • WoT Discovery
  • Optional: WoT Scripting API in application on mobile personal device and possibly in IoT orchestration services in the meeting space.
Description
A conference room where a series of meetings will take place. People can go in and out of the room before, after and during the meeting. The door is "touched" by a badge. An application on the user's mobile device can activate any available display in the room and can access and receive notifications from devices and services in the room. The chair of the meeting is notified by a dynamically composed graphic animation, audio notification or a mobile phone notification about available devices and services, and can install applications indicated by links. The chair of the meeting selects a setup procedure by text from among the provided links. These options could be, for example: photo step-by-step instructions (smartphone, HDTV display, Web site), audio instructions (MP3 audio guide, room speakers reproduction, HDTV audio) or RFID-enhanced instructions (mobile SmartTag Reader, RFID Reader for smartphone). The chair of the meeting chooses the room speakers reproduction; the guiding service is then activated and they start to set up the video projector. After some attendees arrive, the chair of the meeting changes to the slide show option and continues to follow the instructions at the same step where they were paused, but using another, more private modality, for example a smartphone slideshow.
Variants
The user may have additional mobile devices they want to incorporate into an interaction, for example a headset acting as an auditory aid or personal speech output device.
Gaps
Data format describing user interface preferences. Ability to install applications based on links that can access IoT services.
Existing Standards
This use case is based on MMI UC 3.2.
Comments
Does not include Requirements section from original MMI use case.

2.2.5 Cross-Domain Discovery in a Smart Campus

Submitter(s)
Andrea Cimmino and Raúl García Castro
Target Users
  • device owners
  • service provider
  • network operator (potentially transparent for WoT use cases)
  • directory service operator
Motivation
This use case presents a network full of IoT devices, in which these devices are registered in several Middle-Nodes. The challenge in this scenario is to be able to discover the different sensors by issuing a SPARQL query, without prior knowledge of where those devices are located. Therefore, the discovery SPARQL query must start from a specific Middle-Node and reach all those Middle-Nodes that are relevant to answer the query. This scenario requires that discovery does not only happen locally when a Middle-Node receives the query and checks whether some registered Thing Description is suitable to answer it. Instead, the scenario also requires that the Middle-Node forwards the query through the network (the topology formed by the Middle-Nodes) in order to find those Middle-Nodes that actually contain relevant Thing Descriptions. Notice from the following example that the query is not broadcast in the network, to prevent flooding; instead, the Middle-Nodes follow some discovery heuristic to decide where the query should be forwarded. Also, notice that in this scenario not all Middle-Nodes have IoT devices registered directly within them; some are Middle-Node collectors, such as Middle-Nodes C, I, G, and D.
Expected Devices
Any device from the energy context (e.g. solar panels, smart plugs, or smart energy meters), devices from the building context (e.g. light bulbs, light switches, occupancy sensors, or thermostats), devices from the environmental context (e.g. soil moisture, flood detection, or air humidity), devices from the wearables context (e.g. smart bands), and/or devices from the water context (e.g. water valves, or water quality sensors)
Expected Data
Data coming from different contexts, such as Energy, Building, Environmental, Wearables and Water.
Affected WoT deliverables and/or work items
Current WoT-Discovery approach
Description
A campus has a wide range of IoT devices distributed across its grounds. These IoT devices belong to very different domains in a smart city, such as energy, buildings, environment, water, wearables, etc. The IoT devices are distributed across the campus and belong to different infrastructures or even to individuals. A sample topology of this scenario could be the following:

sample topology of a smart campus


In this scenario, energy-related IoT devices monitor the energy use and the incoming energy on the campus, among other things. From these measurements, an Energy Management System may predict a negative peak of incoming energy that would entail the failure of the whole system. In this case, a Service or a User needs to discover all those IoT devices that are not critical for the normal functioning of the campus (such as indoor or outdoor illumination, HVAC systems, or water heaters) and interact with them in order to save energy, by switching them off or reducing their consumption. Besides, the same Service or User will look for those IoT devices that are critical for the well-functioning of the campus (such as magnetic locks, the water distribution system, or fire/smoke sensors) and ensure that they are up and running. Additionally, the Service or the User will discover relevant people's wearables to warn them about the situation.

Sample flow:

A service, or a user, sends a (SPARQL) query to the discovery endpoint of a known Middle-Node (which can be wrapped by a GUI). The Middle-Node will try to answer the query by first checking the Thing Descriptions of the IoT devices registered in that Middle-Node. Then, if the query requires further discovery, or it was not successfully answered, the Middle-Node will forward the query to its *known* Middle-Nodes. Recursively, the Middle-Nodes will try to answer the query and/or forward it to their known Middle-Nodes. When a Middle-Node is able to answer the query, it will send the partial query answer back to the originating Middle-Node. Finally, when the discovery task finishes, the originating Middle-Node will join all the partial query answers, producing a unified view (which could be synchronous or asynchronous).

For instance, assume Middle-Node F receives a query that asks for all the discoverable Building IoT devices on the campus. First, Middle-Node F will try to answer the query with the Thing Descriptions of the IoT devices registered within it. Since Middle-Node F contains some Building IoT devices, a partial query answer is achieved. However, since the query asked about all the discoverable Building IoT devices, Middle-Node F should forward the query to its other known Middle-Nodes, i.e., Middle-Node G. This process will be repeated by the Middle-Nodes until the query reaches Middle-Nodes H and B, which are the ones that have registered Thing Descriptions about Building IoT devices. Therefore, the query will travel through the topology as follows:

query goes through the topology for a smart campus


When Middle-Nodes B and H compute their partial query answers, those answers are forwarded back to Middle-Node F, which joins them with its own partial query answer obtained from its registered Thing Descriptions. Finally, a global query answer is provided.
Security Considerations
None; in this case an underlying infrastructure that handles security is assumed.
Privacy Considerations
None; although relevant, in this case the core of the use case relies on the ability to find different IoT devices across the network. It is assumed that there is an underlying infrastructure that handles privacy.
Gaps
Being able to find suitable Middle-Nodes that are relevant to answer the query, with no prior knowledge.
Existing Standards
None
Comments
None

2.3 Building Technologies

2.3.1 Smart Building

Submitter(s)
Sebastian Kaebisch
Target Users
Motivation and Description
Buildings such as office buildings, hotels, airports, hospitals, train stations and sports stadiums typically consist of heterogeneous IoT systems such as lighting, elevators, security (e.g. door control), air conditioning, fire alarms, heating, pools, parking control, etc. Monitoring, controlling, and managing such a heterogeneous IoT landscape is quite challenging in terms of engineering and maintenance.
Expected Devices
All kinds of sensors and actuators (e.g. HVAC).
Expected Users
  • systems engineers
  • system administrators
  • third party user
Expected Data
Heterogeneous data models from different IoT systems such as BACnet, KNX, and Modbus.
Affected WoT deliverables and/or work items
WoT Thing Description and Thing Model, WoT Architecture, WoT Binding Templates (covering protocol specifics)
Existing Standards
BACnet, KNX, OPC-UA, Modbus

2.3.2 Connected Building Energy Efficiency

Submitter(s)
Farshid Tavakolizadeh
Target Users
  • device owners
  • device user
  • directory service operator
Motivation
Construction and renovation companies often deal with the challenge of delivering buildings that meet target energy efficiency levels given specific budget and time constraints. Energy efficiency, as one of the key factors for renovation investments, depends on the availability of various data sources to support the renovation design and planning. These include climate data and building materials, along with resident comfort and energy consumption profiles. The profiles are created using a combination of manual inputs and sensory data collected from residents.
Expected Devices
  • Gateway (e.g. Single-board computer with a Z-Wave controller)
Z-wave Sensors:
  • Power Meter
  • Gas Meter
  • Smart Plug
  • Heavy Duty Switch
  • Door/Window Sensors
  • CO2 Sensor
  • Thermostat
  • Multi-sensors (Motion, Temperature, Light, Humidity, Vibration, UV)
Expected Data
  • Ambient conditions
  • Occupancy model
Description
Renovation of residential buildings to improve energy efficiency depends on a wide range of sensory information to understand the building conditions and consumption models. As part of the pre-renovation activities, the renovation companies deploy various sensors to collect relevant data over a period of time. Such sensors become part of a wireless sensor network (WSN) and expose data endpoints with the help of one or more gateway devices. Depending on the protocols, the endpoints require different interaction flows to securely access the current and historical measurements. The renovation applications need to discover the sensors, their endpoints and how to interact with them, based on search criteria such as the physical location, mapping to the building model, or measurement type.
Privacy Considerations
The TD may expose personal information about the building layout and residents.
Gaps
There is no standard vocabulary for embedding application-specific metadata inside the TD. It is possible to extend the TD context and add additional fields, but with too much flexibility every application may end up with a completely different structure, making such information more difficult to discover. In this use case, the application-specific data are listed below; a hypothetical TD fragment illustrating such an extension follows the list:
  • the mapping between each thing and the space in the building model
  • various identifiers for each thing (e.g. sensor serial number, z-wave ID, SenML name)
  • indoor coordinates
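A hypothetical TD fragment showing how such application-specific metadata might be embedded via a context extension is sketched below. The ex: terms (ex:buildingSpace, ex:serialNumber, ex:zwaveId, ex:indoorCoordinates) and all URLs are illustrative only; no such standard vocabulary exists today, which is exactly the gap described above.

{
    "@context": [
        "https://www.w3.org/2019/wot/td/v1",
        { "ex": "https://example.com/renovation-vocabulary#" }
    ],
    "id": "urn:example:building:multisensor-12",
    "title": "LivingRoomMultiSensor",
    "ex:buildingSpace": "https://example.com/building-model#LivingRoom",
    "ex:serialNumber": "ZW-88421",
    "ex:zwaveId": 12,
    "ex:indoorCoordinates": { "x": 3.2, "y": 1.5, "z": 2.4 },
    "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
    "security": ["nosec_sc"],
    "properties": {
        "temperature": {
            "type": "number",
            "unit": "degreeCelsius",
            "readOnly": true,
            "forms": [{ "href": "https://gateway.example.com/multisensor-12/temperature", "op": "readproperty" }]
        }
    }
}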
There is no standard API specification for the WoT Thing Directory to maintain and query TDs.
Existing Standards
  • OGC SensorThings model includes a properties property for each Thing, which is a non-normative JSON Object for application-specific information (not to be confused with TD's properties, which is a Map of instances of PropertyAffordance).

2.3.3 Automated Smart Building Management

Submitter(s)
Edison Chung, Hervé Pruvost, Georg Ferdinand Schneider
Category
Smart Building
Target Users
  • device owners
  • device user
  • service provider
  • device manufacturer
  • gateway manufacturer
  • identity provider
  • directory service operator
Motivation

When operating smart buildings, aggregating and managing all data provided by heterogeneous devices in these buildings still requires a lot of manual effort. Besides the hurdles of data acquisition, which relies on multiple protocols, the acquired data generally lacks contextual information and metadata about its location and purpose. Usually, each service or application that consumes data from building things requires information about its content and its context, e.g.:

  • which thing produces the data (sensor, meter, actuator, other technical component...) in a building;
  • which physical quantity or process is represented (temperature, energy supply, monitoring, actuation);
  • which other building things are involved (e.g. sensor hosted by a duct or a space).

Through the increased use of model-based data exchange over the whole life cycle of a building, often referred to as Building Information Modeling (BIM) (Sacks et al., 2018), a curated source of data describing the building itself is available, including, amongst others, the topology of the building structured into e.g. sites, storeys and spaces.

Automatically tracking down data and their related things in a building would especially ease the configuration and operation of Building Automation and Control Systems (BACS) and Heating, Ventilation and Air-Conditioning (HVAC) services during commissioning, operation, maintenance and retrofitting. To tackle these challenges, building experts today still make use of metadata and naming conventions which are manually implemented in Building Management System (BMS) databases to annotate data and things. An important property of a thing is its location within the topology of a building, as well as where its related data are produced or used. For example, this applies to the temperature sensor of a space, the temperature setpoint of a zone, a mixing damper flap actuator of an HVAC component, etc. In addition, other attributes of things are of interest, such as cost or specific manufacturer data. One difficulty is especially the lack of a standardized way of creating, linking and sharing this information in an automated manner. On the contrary, manufacturers, service providers and users introduce their own metadata for their own purposes. As a solution, the Web of Things (WoT) Thing Description (TD) aims at providing normalized and syntactic interoperability between things.

To support this effort, this use case is motivated by the need to enhance semantic interoperability between things in smart buildings and to provide them with contextual links to building information. This building information is usually obtained from a BIM model. The use case builds on Web of Data technologies and reuses schemas available from the Linked Building Data domain. It should serve as a use case template for many applications in an Internet of Building Things (IoBT).

Expected Devices
  • Actuators
  • Sensors
  • Devices from the building context
  • Devices from the HVAC system
  • Smart devices
Expected Data
  • Sensor ID
  • Thing Descriptions
  • Protocol integrations
  • Sensor readings
  • Building topology
  • Semantic location
  • Geolocation
Affected WoT deliverables and/or work items
Description

The goal of this use case is to show the potential to automate workflows and address the heterogeneity of data as observed in the smart building domain. The examples show the potential benefits of combining WoT TD with contextual data obtained from BIM.

The use case is based on the Open Smart Home Dataset, which introduces a BIM model for a residential flat combined with observations made by typical smart home sensors. We extend the dataset with Thing Descriptions of some of the items. The Thing Description of a temperature sensor in the kitchen of the considered flat is as follows:

{
    "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#TemperatureSensor",
    "@context": [
        "https://www.w3.org/2019/wot/td/v1",
        {
            "osh": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#",
            "bot": "https://w3id.org/bot#",
            "sosa": "http://www.w3.org/ns/sosa/",
            "om": "http://www.ontology-of-units-of-measure.org/resource/om-2/",
            "ssns": "http://www.w3.org/ns/ssn/systems/",
            "brick": "https://brickschema.org/schema/Brick#",
            "schema": "http://schema.org"
        }
    ],
    "title": "TemperatureSensor",
    "description": "Kitchen Temperature Sensor",
    "@type": ["sosa:Sensor", "brick:Zone_Air_Temperature_Sensor", "bot:element"],
    "@reverse": {
        "bot:containsElement": {
            "@id": "osh:Kitchen"
        }
    },
    "securityDefinitions": {
        "basic_sc": {
            "scheme": "basic",
            "in": "header"
        }
    },
    "security": [
        "basic_sc"
    ],
    "properties": {
        "Temperature": {
            "type": "number",
            "unit": "om:degreeCelsius",
            "forms": [
                {
                    "href": "https://kitchen.example.com/temp",
                    "contentType": "application/json",
                    "op": "readproperty"
                }
            ],
            "readOnly": true,
            "writeOnly": false
        }
    },
    "sosa:observes": {
        "@id": "osh:Temperature",
        "@type": "sosa:ObservableProperty"
    },
    "ssns:hasSystemCapability": {
        "@id": "osh:SensorCapability",
        "@type": "ssns:SystemCapability",
        "ssns:hasSystemProperty": {
            "@type": ["ssns:MeasurementRange"],
            "schema:minValue": 0.0,
            "schema:maxValue": 40.0,
            "schema:unitCode": "om:degreeCelsius"
        }
    }
}

Here, the contextual information on the measurement range of the sensor is specified using the SSN Systems (ssns) vocabulary. The location information of the thing TemperatureSensor is provided based on the Building Topology Ontology (BOT), a minimal ontology developed by the W3C Linked Building Data Community Group (W3C LBD CG) to describe the topology of buildings in the semantic web. Additionally, the Thing Description of the corresponding actuator is given below.

    {
    "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#TemperatureActuator",
    "@context": [
        "https://www.w3.org/2019/wot/td/v1",
        {
            "osh": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#",
            "bot": "https://w3id.org/bot#",
            "sosa": "http://www.w3.org/ns/sosa/",
            "ssn": "http://www.w3.org/ns/ssn/",
            "brick": "https://brickschema.org/schema/Brick#"
        }
    ],
    "title": "TemperatureActuator",
    "description": "Kitchen Temperature Actuator",
    "@type": ["sosa:Actuator", "brick:Zone_Air_Temperature_Setpoint", "bot:element"],
    "@reverse": {
        "bot:containsElement": {
            "@id": "osh:Kitchen"
        }
    },
    "securityDefinitions": {
        "basic_sc": {
            "scheme": "basic",
            "in": "header"
        }
    },
    "security": [
        "basic_sc"
    ],
    "actions": {
        "TemperatureSetpoint": {
            "forms": [
                {
                    "href": "https://kitchen.example.com/tempS"
                }
            ]
        }
    },
    "ssn:forProperty": {
        "@id": "osh:Temperature",
        "@type": "sosa:ActuatableProperty"
    }
}
Combining Topological Context and Thing Descriptions

The scenario considered is the replacement of a temperature sensor in a building automation and control system (BACS). The topological information localizing the Things, e.g. the temperature sensor, can be used to automatically commission the newly replaced sensor and link it to existing control algorithms. For this purpose, the identifiers of suitable sensors and actuators are needed; they can, for example, be queried via SPARQL. Here the query uses the additional classification of sensors from the Brick schema.

PREFIX bot: <https://w3id.org/bot#>
PREFIX brick: <https://brickschema.org/schema/Brick#>
PREFIX osh: <https://w3id.org/ibp/osh/OpenSmartHomeDataSet#>
SELECT ?sensor ?actuator
WHERE{
  ?space a bot:Space .
  ?space bot:containsElement ?sensor .
  ?space bot:containsElement ?actuator .
  ?sensor a brick:Zone_Air_Temperature_Sensor .
  ?actuator a brick:Zone_Air_Temperature_Setpoint .
}

Similarly, this data can be obtained via a REST API built on HTTP. Below is an example REST-style endpoint that returns the same information for a specific space name:

GET "https://server.example.com/api/locations?space=osh:Kitchen&sensorType=brick:Zone_Air_Temperature_Sensor&actuatorType=brick:Zone_Air_Temperature_Setpoint"
API response:
{
  "location": {
    "site": {
      "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#Site1",
      "name": "Site1"
    },
    "building": {
      "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#Building1",
      "name": "Building1"
    },
    "zone": null,
    "storey": {
      "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#Level2",
      "name": "Level2"
    },
    "space": {
      "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#Kitchen",
      "name": "Kitchen"
    }
  },
  "sensors": [
    "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#TemperatureSensor"
  ],
  "actuators": [
    "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#TemperatureActuator"
  ]
}

In this example, the REST endpoint has been defined using the OpenAPI specification and is provided by a RESTful server. A data binding is needed between the server and the underlying backend storage, here the triple store that contains the involved ontologies (osh, bot, ssn, brick, ...). The data binding relies on SPARQL queries similar to the one shown above. As a result, the endpoint can deliver information to a target application that consumes custom JSON rather than triples. A similar implementation could be achieved using GraphQL.
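The following minimal OpenAPI 3.0 fragment sketches how such an endpoint could be declared. It is illustrative only; the path, parameter names, and response structure are assumptions made for this example and are not part of the dataset or of any WoT deliverable.

{
    "openapi": "3.0.0",
    "info": { "title": "Building location API (illustrative)", "version": "1.0.0" },
    "paths": {
        "/api/locations": {
            "get": {
                "summary": "Find sensors and actuators contained in a given space",
                "parameters": [
                    { "name": "space", "in": "query", "required": true, "schema": { "type": "string" } },
                    { "name": "sensorType", "in": "query", "schema": { "type": "string" } },
                    { "name": "actuatorType", "in": "query", "schema": { "type": "string" } }
                ],
                "responses": {
                    "200": {
                        "description": "Location context plus matching sensor and actuator identifiers",
                        "content": { "application/json": {} }
                    }
                }
            }
        }
    }
}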

Automated Update of Fault Detection Rule based on Thing Description

Another related use case in smart buildings, which would greatly benefit from harmonised Thing Descriptions and attached location information, is the detection of unexpected behavior, errors, and faults. An example of such fault detection is the rule-based monitoring of sensor values. A generic rule applicable to sensors is that the observed values must stay within the measurement range of the sensor. Consider again the maintenance case described above, in which a sensor is replaced.

An agent configuring fault detection rules can obtain the measurement range from the sensor's TD (see above) and use it as the parameters of the rule. Again, a query or API call retrieving this information (schema:minValue / schema:maxValue) can be used to update the upper and lower bounds of the values expected from the sensor.
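For illustration, a query analogous to the one above can retrieve the declared measurement range from the sensor's Thing Description, assuming the TD has been ingested into the triple store as JSON-LD:

PREFIX ssns: <http://www.w3.org/ns/ssn/systems/>
PREFIX schema: <http://schema.org/>
SELECT ?sensor ?min ?max
WHERE{
  ?sensor ssns:hasSystemCapability ?capability .
  ?capability ssns:hasSystemProperty ?range .
  ?range a ssns:MeasurementRange ;
         schema:minValue ?min ;
         schema:maxValue ?max .
}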

Security Considerations

Security in smart buildings is important. In particular, access control needs to be properly enforced. This also applies to data access, which can be secured using existing security schemes (API keys, OAuth2, ...). Moreover, certain observations, e.g. electricity consumption, can indirectly reveal information such as presence in a home. Hence, security needs must be defined and properly addressed.

Privacy Considerations

Privacy can be a concern if sensor observations can be matched to individuals. It is the responsibility of building owners, managers, and users to define privacy policies for their data and to obtain the necessary consents.

Accessibility Considerations

Accessibility is a major concern in the buildings domain. Efforts exist to also provide accessibility data in an electronic format. The W3C LBD CG is in contact with the W3C Linked Data for Accessibility Community Group.

Internationalization (i18n) Considerations

Internationalization is a concern, as the buildings industry is a global one. This is reflected in existing efforts; for example, BOT, which is used in the examples above, provides multilingual labels in up to 16 different languages, including English, French, and Chinese.

Existing Standards

References:

2.4 Manufacturing

2.4.1 Manufacturing

Submitter(s)
Michael Lagally
Target Users
Device owners, cloud provider.
Motivation

Production lines for industrial manufacturing consist of multiple machines, where each machine incorporates sensors for various values. A failure of a single machine can cause defective products or a stoppage of the entire production line.

Big data analysis makes it possible to identify behavioral patterns across multiple production lines of an entire plant and across multiple plants.

The results of this analysis can be used for optimizing the consumption of raw materials, checking the status of production lines and plants, and predicting and preventing fault conditions.

Expected Devices
Various sensors, e.g. temperature, light, humidity, vibration, noise, air quality.
Expected Data
Discrete sensor values, such as temperature, light, humidity, vibration, noise, air quality readings. The data can be delivered periodically or on demand.
Dependencies
  • Thing Description: groups of devices, aggregation / composition mechanism, Thing Models
  • Discovery / Onboarding: onboarding of groups of devices
Description

A company owns multiple factories, each containing multiple production lines. Examples of connected devices are production line machines and environment sensors. These devices collect data from multiple sensors and transmit it to the cloud, where the sensor data is stored and can be visualized and analyzed using machine learning / AI.

The cloud service allows single devices and groups of devices to be managed. Combining the data streams from multiple devices provides an easy overview of the state of all connected devices in the user's realm.

In many cases there are groups of devices of the same kind, so the aggregation of data across devices can serve to identify anomalies or to predict impending outages.

The cloud service also helps to identify abnormal conditions. For this purpose the user can define a set of rules that raise alerts towards the user or trigger actions on devices.

This enables the early detection of pending problems and reduces the risk of machine outages, quality problems, or threats to the environment or to human life. It increases production efficiency and improves production logistics (such as raw material delivery and production output).
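As a purely illustrative sketch (all device names and URLs below are hypothetical), a machine-mounted sensor could expose its readings as observable properties, allowing a cloud service to read them on demand or subscribe to them for periodic delivery:

{
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "id": "urn:example:plant1:line3:machine47",
    "title": "Machine47-ConditionSensors",
    "securityDefinitions": { "basic_sc": { "scheme": "basic", "in": "header" } },
    "security": ["basic_sc"],
    "properties": {
        "vibration": {
            "type": "number",
            "unit": "mm/s",
            "readOnly": true,
            "observable": true,
            "forms": [{
                "href": "https://plant1.example.com/line3/machine47/vibration",
                "op": ["readproperty", "observeproperty"]
            }]
        },
        "temperature": {
            "type": "number",
            "unit": "degreeCelsius",
            "readOnly": true,
            "observable": true,
            "forms": [{
                "href": "https://plant1.example.com/line3/machine47/temperature",
                "op": ["readproperty", "observeproperty"]
            }]
        }
    }
}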

Comments
See also Digital Twin use case.

2.5 Retail

Submitter(s)
David Ezell, Michael Lagally, Michael McCool
Target Users
Retailers, customers, suppliers.
Motivation

Integrating and interconnecting multiple devices into the common retail workflow (i.e., the transaction log) drastically improves retail business operations at multiple levels. It brings operational visibility, including consumer behavior and environmental information, that was not previously possible or viable in a meaningful way.

It drastically speeds up the process of root cause analysis of operational issues and simplifies the work of retailers.

Expected Devices
Connected sensors, such as people counters, presence sensors, air quality, room occupancy, door sensors. Cloud services. Video analytics edge services.
Expected Data
Inventory data, supply chain status information, discrete sensor data or data streams.
Description
Falling costs of sensors, communications, and handling of very large volumes of data combined with cloud computing enable retail business operations with increased operational efficiency, better customer service, and even increased revenue growth and return on investment. Accurate forecasts allow retailers to coordinate demand-driven outcomes that deliver connected customer interactions. They drive optimal strategies in planning, increasing inventory productivity in retail supply chains, decreasing operational costs and driving customer satisfaction from engagement, to sale, to fulfilment. Understanding of store activity juxtaposed with traditional information streams can boost worker and consumer safety, comply better with work safety regulations, enhance system and site security, and improve worker efficiency by providing real-time visibility into worker status, location, and work environment.
Variants
  • Use edge computing, in particular video analytics, in combination with IoT devices to deliver an enhanced customer experience, better manage inventory, or otherwise improve the store workflow.
Security Considerations
  • In retail, replay attacks can cause monetary loss, customers may be incorrectly charged or over-charged.
  • To avoid replay attacks, "Things" should attach a sequence number and a digital signature to each message.
  • Servers ("Things" or "Cloud") should verify the signature and reject duplicated messages.
  • For "Things" relying on electronic payments, "Things" must comply with PCI-DSS requirements.
  • "Things" must never store credit card information.
  • Customer satisfaction and trust depends on availability, so attacks such as Denial-of-Service (DoS) need to be prevented or mitigated.
  • To mitigate DoS, implement "Things" with early DoS detection.
  • Have an automated DoS detection system that notifies the controlling unit of the problem.
  • Implement an IP whitelist, which could be part of the early DoS detection system.
  • Make sure the network perimeter is defended with up-to-date firewall software.
Privacy Considerations
As a general rule, personal consumer information should not be stored. That is especially true in the retail industry, where a security breach could cause financial, reputational, and brand damage. If personal information, or information that can identify a consumer, is to be stored, it should be stored only to conduct business and with the explicit acknowledgment of the consumer. WoT vendors and integrators should always have a privacy policy and make it easily available. By default, devices should not capture or store such data unless the consumer has explicitly allowed it.

2.6 Health

2.6.1 Public Health

2.6.1.1 Public Health Monitoring
Submitter(s)
Jennifer Lin
Target Users
Agencies, companies and other organizations in a Smart City with significant pedestrian traffic in a pandemic situation.
Motivation
A system to monitor the health of people in public places is useful to control the spread of infectious diseases. In particular, we would like to identify individuals with temperatures outside the norm (i.e. running a fever) and then take appropriate action. Actions can include sending a notification or actuating a security device, such as a gate. This mechanism should be non-invasive and non-contact since the solution should not itself contribute to the spread of infectious diseases. Data may also be aggregated for statistics purposes, for example, to identify the number of people in an area with elevated temperatures. This has additional requirements to avoid double-counting individuals.
Expected Devices
One of the following:
  • A thermal camera.
  • Face detection (AI) service
    • May be on device or be an edge or cloud service.
Optional:
  • RGB and/or depth camera registered with the thermal camera
  • Cloud service for data aggregation and analytics.
  • Some way to identify location (optional). Note that the location might be static and configured during installation, but it might also be based on a localization technology if the device needs to be portable (for example, if it needs to be set up quickly for an event).
Expected Data
  • Sensor ID
  • Timestamp
  • Number of people identified with a fever in image
  • Estimated temperature for each person
    • May be coarse, low/normal/high
  • Location
    • Latitude, Longitude, Altitude, Accuracy
    • Semantic (e.g. a particular building entrance)
  • Thermal image
Optional:
  • RGB image
  • Depth image
  • Localization technology (see localization use case)
  • Integration with local IoT devices: gates, lights, or people (guards)
  • Bounding boxes around faces of identified people in image(s)
  • Data that can be used to uniquely identify a face (distinguish it from others)
    • Aggregation system may output the total number of unique faces with fever

Note 1: The system should be capable of notifying consumers (such as security personnel) of fever detections. Notifications may be delivered via email, SMS, or some other mechanism, such as MQTT publication.

Note 2: In all cases where images are captured, privacy considerations apply.

It would also be useful to count unique individuals for statistics purposes, but not necessarily based on identifying particular people. This is to avoid counting the same person multiple times.

Dependencies
node-wot
Description
A thermal camera image is taken of a group of people and an AI service is used to identify faces in the image. The temperature of each person is then estimated from the registered face; for greater accuracy, a consistent location for sampling should be used, such as the forehead. The estimated temperature is compared to high (and optionally, low) thresholds and a notification (or other action) is taken if the temperature is outside the norm. Additional features may be extracted to identify unique individuals.
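A hypothetical Thing Description fragment for such a fever screening camera could expose the detection result as an event; all names, identifiers, and URLs below are illustrative assumptions rather than part of any existing deployment:

{
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "id": "urn:example:entrance1:fever-camera",
    "title": "FeverScreeningCamera",
    "securityDefinitions": { "bearer_sc": { "scheme": "bearer" } },
    "security": ["bearer_sc"],
    "events": {
        "feverDetected": {
            "data": {
                "type": "object",
                "properties": {
                    "sensorId": { "type": "string" },
                    "timestamp": { "type": "string" },
                    "personCount": { "type": "integer" },
                    "estimatedTemperatures": { "type": "array", "items": { "type": "number" } },
                    "location": { "type": "string" }
                }
            },
            "forms": [{
                "href": "https://entrance1.example.com/events/feverDetected",
                "contentType": "application/json",
                "subprotocol": "longpoll",
                "op": "subscribeevent"
            }]
        }
    }
}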
Variants
  • Enough information is included in the notification that the specific person that raised the alarm can be identified. For example, if an RGB camera is also registered with the thermal camera, then a bounding box may be indicated via JSON and the RGB image included; or the bounding box could be actually drawn into the sent image, or the face could be cropped out. This is useful if, for example, a notification needs to be sent to health or security workers who need to identify the person in a crowd.
  • Instead of simply a notification, an action may be taken, such as closing or refusing to open a gate at the entrance to a building, to prevent sick employees from entering the building.
  • To generate statistics, for example to count the number of people with fevers, unique individuals need to be identified to avoid counting the same person more than once.
  • The same sensors might be used to determine the number of people in an area and send a notification if crowded conditions are detected, in order to support social distancing behavior (for instance, supporting an app that notifies users when a destination is crowded) in a pandemic situation.
  • Cameras that provide video streams rather than still images.
Security Considerations
  • Because PII is involved (see below) access should be controlled (only provided to authorized users) and communications protected (encrypted).
Privacy Considerations
  • Images of people and their health status are involved.
    • If later these are made public then the health information of a particular person would be released publicly.
    • There is also the possibility that the camera data could be in error, and should be confirmed with a more accurate sensor.
    • This information needs to be treated as PII and protected: only distributed to authorized users, and deleted when no longer needed.
    • However, derived aggregate information can be kept and published.
Gaps
  • Onboarding mechanism for rapidly deploying a large number of devices
  • Standard vocabulary for geolocation information
  • Implementations able to handle image payload formats, possibly in combination with non-image data (e.g. images and JSON in a single response)
  • Video streaming support (if we wish to serve video stream from the camera instead of still images)
  • Standard ways to specify notification mechanisms and data payloads for things like SMS and email (in addition to the expected MQTT, CoAP, and HTTP event mechanisms)
Comments
  • There may be additional privacy requirements since images of people and their health status are involved.
  • Different sub-use cases: immediate alerts or actions vs. aggregate data gathering
2.6.1.2 Interconnected medical devices in a hospital ICU
Submitter(s)
Taki Kamiya
Target Users
  • device owners
  • device user
  • cloud provider
  • service provider
  • device manufacturer
  • gateway manufacturer
  • identity provider
Motivation
Preventable medical errors may account for more than 100,000 deaths per year in the U.S. alone. These errors are mainly caused by failures of communication, such as a misread chart or the wrong data passed along to machines or staff. Part of the problem could be solved if the machines could speak to one another. Manufacturers have little incentive to make their proprietary code and data easily accessible and processable by their competitors' machines, so the task of middleman falls to the hospital staff. In addition to saving lives, a common framework could result in collecting and recording more clinical data on patients, making it easier to deliver precision medicine.
Description

Physiological Closed-Loop Control (PCLC) devices are a group of emerging technologies, which use feedback from physiological sensor(s) to autonomously manipulate physiological variable(s) through delivery of therapies conventionally delivered by clinician(s).

Clinical scenario without PCLC: An elderly female patient with end-stage renal failure was given a standard insulin infusion protocol to manage her blood glucose, but no glucose was provided. Her blood glucose dropped to 33 mg/dL, then rebounded to over 200 mg/dL after glucose was given. This scenario has not changed for decades.

The desired state with PCLC implemented in an ICU: A patient is receiving an IV insulin infusion and has their blood glucose continuously monitored. The infusion pump rate is automatically adjusted according to the real-time blood glucose levels being measured, to maintain blood glucose values in a target range. If the patient's glucose level does not respond appropriately to the changes in insulin administration, the clinical staff is alerted.

Today, medical devices (monitors, ventilators, IV pumps, etc.) do not interact with each other autonomously. Contextually rich data is difficult to acquire. Technologies and standards to reduce medical errors and improve efficiency have not been implemented in theater or at home.

In recent years, researchers have made progress developing PCLC devices for mechanical ventilation, anesthetic delivery applications, and so on. Despite these promises and potential benefits, there has been limited success in translating PCLC devices from bench to bedside. A key challenge in bringing PCLC devices to the level required for clinical trials in humans is risk management to ensure device reliability and safety.

The United States Food and Drug Administration (FDA) classifies new hazards that might be introduced by PCLC devices into three categories. Besides clinical factors (e.g. sensor validity and reliability, inter- and intra-patient physiological variability) and usability/human factors (e.g. loss of situational awareness, errors, and lapses in operation), there are also engineering challenges including robustness, availability, and integration issues.

Variants
The US military developed an Automated Critical Care System prototype (under an ONR SBIR program) and found the following issues:
  • No plug and play, i.e. an O2 saturation sensor cannot be swapped with one from another manufacturer.
  • No standardization of data outputs for devices to interoperate.
  • Must have the exact make/model to replace a faulty device or system will not work.
Security Considerations

Security considerations for interconnected and dynamically composable medical systems are critical not only because laws such as HIPAA mandate it, but also because security attacks can have serious safety consequences for patients. The systems need to support automatic verification that the system components are being used as intended in the clinical context, that the components are authentic and authorized for use in that environment, that they have been approved by the hospital’s biomedical engineering staff and that they meet regulatory safety and effectiveness requirements.

For security and safety reasons, ICE F2761-09(2013) compliant medical devices never interact directly with each other. All interaction is coordinated and controlled via applications.

While transport-level security such as TLS provides reasonable protection against external attackers, it does not provide mechanisms for granular access control of data streams within the same protected link. Transport-level security is also not sufficiently flexible to balance security against performance. Another issue with widely used transport-level security solutions is the lack of support for multicast.

Gaps
Multicast support. It has proven useful for efficient and scalable discovery and information exchange in industrial systems.
Existing Standards

F2761-09 (2013)

Medical Devices and Medical Systems - Essential safety requirements for equipment comprising the patient-centric integrated clinical environment (ICE) - Part 1: General requirements and conceptual model. The idea behind ICE is to allow medical devices that conform to the ICE standard, either natively or using an adapter, to interoperate with other ICE-compliant devices regardless of manufacturer.

OpenICE

OpenICE is an initiative to create a community implementation of F2761-09 (ICE - Integrated Clinical Environment) based on DDS.

MDIRA Specification Document Version 1.0.

MDIRA Version 1.0 provides requirements and implementation guidance for MDIRA-compliant systems focused on trauma and critical care in austere environments. The Johns Hopkins University Applied Physics Laboratory (JHU-APL) led a research project in collaboration with the US military to develop a framework of autonomous / closed-loop prototypes for military health care, which are dual use for the civilian healthcare system.

2.6.2 Private Health

2.6.2.1 Health Notifiers
Submitter(s)
Michael McCool
Target Users
End user with a health problem they wish to monitor. Health services provider (doctor, nurse, paramedic, etc.).
Motivation
In critical situations regarding health, like a medical emergency, media multimodality may be the most effective way to communicate alerts. When the goal is to monitor the health evolution of a person in both emergency and non-emergency contexts, access via networked devices may be the most effective way to collect data and monitor a patient's status.
Expected Devices
Medical facilities supporting device and service access.
Expected Data
Command and status information transferred between the personal mobile device application and the medical facility's services and devices. Profile data for user preferences.
Dependencies
  • WoT Thing Description
  • WoT Discovery
  • Optional: WoT Scripting API in application on mobile personal device and possibly in IoT orchestration services.
Description
In medical facilities, a system may provide multiple options to control sensor operations by voice or gesture ("start reading my blood pressure now"). These interactions may be mediated by an application installed into a smartphone. The system integrates information from multiple sensors (for example, blood pressure and heart rate); reports medical sensor readings periodically (for example, to a remote medical facility) and sends alerts when unusual readings/events are detected.
Variants
The user may have additional mobile devices they want to incorporate into an interaction, for example a headset acting as an auditory aid or personal speech output device.
Gaps
Data format describing user interface preferences. Ability to install applications based on links that can access IoT services.
Existing Standards
This use case is based on MMI UC 3.2.
Comments
Does not include Requirements section from original MMI use case.

2.7 Energy

2.7.1 Smart Grids

Submitter(s)
Christian Glomb
Target Users
  • Grid operators on all voltage levels, like Distribution System Operators (DSO) and Transmission System Operators (TSO)
  • Plant operators (centralized as well as de-centralized producers)
  • Virtual Power Plant (VPP) operators
  • Energy grid markets
  • Cloud providers where grid backend services are hosted and where Operation Technology bridges to Information Technology
  • Device manufacturers, owners, and users; devices include communication gateways, monitoring and control units
Expected Devices
A smart grid integrates all players in the electricity market into one overall system through the interaction of generation, storage, grid management and consumption. Power and storage plants are already controlled today in such a way that only as much electricity is produced as is needed. Smart grids include consumers as well as small, decentralized energy suppliers and storage locations in this control system, so that on the one hand, consumption is more homogeneous in terms of time and space (see also intelligent electricity consumption) and on the other hand, in principle inhomogeneous producers (e.g. wind power) and consumers (e.g. lighting) can be better integrated.
Expected Data
  • Weather and climate data
  • Metering data (production, consumption, and storage, e.g. at 15-minute intervals)
  • Real time data from PMUs (Phasor Measurement Units)
  • Machine and equipment monitoring data (enabling health checks)
  • ...
Affected WoT deliverables and/or work items
WoT Architecture, WoT Binding Templates (covering protocol specifics)
Description
The term Smart Grid refers to the communicative networking and control of power generators, storage facilities, electrical consumers, and grid equipment in power transmission and distribution networks for electricity supply. This enables the optimization and monitoring of the interconnected components. The aim is to secure the energy supply on the basis of efficient and reliable system operation.
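A minimal, purely illustrative Thing Description for a distributed energy resource could expose metering data as an observable property and a power setpoint as an action. The identifiers and URLs below are assumptions; real deployments would additionally map to domain models such as IEC 61850:

{
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "id": "urn:example:der:battery-storage-12",
    "title": "BatteryStorageUnit",
    "securityDefinitions": { "basic_sc": { "scheme": "basic", "in": "header" } },
    "security": ["basic_sc"],
    "properties": {
        "activePower": {
            "type": "number",
            "unit": "kW",
            "readOnly": true,
            "observable": true,
            "forms": [{ "href": "https://der12.example.com/activePower" }]
        }
    },
    "actions": {
        "setPowerSetpoint": {
            "input": { "type": "number", "unit": "kW" },
            "forms": [{ "href": "https://der12.example.com/setpoint" }]
        }
    }
}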
Variants
Decentralized Power Generation
While electricity grids with centralized power generation have dominated up to now, the trend is moving towards decentralized generation plants, both for generation from fossil primary energy through small CHP plants and for generation from renewable sources such as photovoltaic systems, solar thermal power plants, wind turbines, and biogas plants. This leads to a much more complex structure, primarily in the areas of load control, voltage maintenance in the distribution grid, and maintenance of grid stability. In contrast to medium and large power plants, smaller, decentralized generation plants also feed directly into the lower voltage levels such as the low-voltage grid or the medium-voltage grid. This use case variant also includes the operation and control of energy storage such as batteries.
Virtual Power Plants
A Virtual Power Plant (VPP) is an aggregation of Distributed Energy Resources (DERs) that can act as an entity on energy markets or as an ancillary service to grid operation. The individual DERs often have a primary use of their own, with electricity generation/consumption being a side effect or secondary use. This results in negotiations and collaborations between many different parties, such as the DER owner, the VPP operator, the grid operator, and others.
Smart Metering
For consumers, a major change is the installation of smart meters. Their core tasks are remote reading and the ability to realize fluctuating prices within a day at short notice. All electricity meters must therefore be replaced by meters with remote data transmission.
Other variants
Emergency response, grid synchronization, grid black start
Building Blocks
  • Multi-Stakeholder Operation: Multiple involved parties have to find a common mode of operation
  • Device Lifecycle Management: Since the VPP is a dynamic system of loosely coupled DERs, the appearance and disappearance of DERs, as well as the software management on the devices themselves, require a means to orchestrate the lifecycle of individual devices and their components.
  • Embedded Runtime: Especially for DERs in remote locations, maintaining a closely coupled control loop can be expensive, if feasible at all. Therefore, it is desirable to be able to offload control logic to the DER itself.
  • Ensemble Discovery: In order to dynamically find matching DERs needed for the operational goal of a VPP, a registry with different options of DER discovery is needed.
  • Content-Negotiation: The different stakeholders have to interact and therefore need a common data format.
  • Resource Description: The DER has to describe itself to enable discovery of single DERs and ensembles, also the operational data needs to be understood by the different stakeholders without engineering effort.
  • Push Services: As there is a fan-out with many devices, each probably having a rate-limited connection to one single command center, a bidirectional communication mechanism is needed for the reverse direction rather than polling
  • Object Memory: As multiple and interchangeable stakeholders are involved in the application, a backlog of the object's history is beneficial for record keeping
Non-Functionals
  • Privacy: As fine-grained metering information provides sensitive data about a household, the system should show a high degree of privacy
  • Trust: Since the data exchange between the virtual power plant and the distributed energy resource leads to a physical action that invokes high currents and monetary flows, the integrity of both parties and the exchange's data is crucial
  • Layered L7 Communication: Since multiple different links are used for monitoring and control, integration requires a clear and consistent separation of information from the serialization and application protocols used, to enable the exchange of homogeneous information over heterogeneous application layer protocols
Existing Standards
IEC 61850 - International standard for data models and communication protocols
IEEE 1547 - US standard for interconnecting distributed resources with electric power systems

2.8 Transportation

Submitter(s)
Zoltan Kis
Sub-categories
Transportation - Infrastructure Transportation - Cargo Transportation - People
Target Users

Smart Cities: managing roads, public transport and commuting, autonomous and human driven vehicles, transportation tracking and control systems, route information systems, commuting and public transport, vehicles, on-demand transportation, self driving fleets, vehicle information and control systems, infrastructure sharing and payment system, smart parking, smart vehicle servicing, emergency monitoring, etc.

Transport companies: managing shipping, air cargo, train cargo and last mile delivery transportation systems including automated systems.

Commuters: Mobility as a service, booking systems, route planning, ride sharing, self-driving, self-servicing infrastructure, etc.

Motivation

Provide common vocabulary for describing transport related services and solutions that can be reused across sub-categories, for easier interoperability between various systems owned by different stakeholders.

Thing models could be defined in many subdomains to help integration or interworking between multiple systems.

Transportation of goods can be optimized at global level by enhancing interoperability between vertical systems.

Expected Devices
Road information system (routes, conditions, navigation). Road control system (e.g. virtual rails). Traffic management services, e.g. intelligent traffic light system with localization and identification (by satellite, radio frequency identification, cameras etc.). Emergency monitoring and data/location sharing. Airport management. Shipping docks and ports management. Train networks management. Public transport vehicles (train, metro, tram, bus, minibus), mobility as a service (ride sharing, bicycle sharing, scooters etc.). Transportation network planning and management (hubs, backbones, sub-networks, last mile network). Electronic timetable management system. Vehicles (human driven, self-driving, isolated or part of fleet). Connected vehicles (cars, ships, airplanes, trains, buses etc). Devices needed for cargo.
Expected Data
Vehicle data (identification, location, speed, route, selected vehicle data). Weather and climate data. Contextual data (representing various risk factors, delays, etc.).
Dependencies
Localization technologies. Automotive data. Contextual data. Cloud integration.
Description
Transportation system implementers will be able to use a unified data description model across various systems.
Variants
There will be different verticals, such as:
  • Smart City public transport
  • Smart City traffic management
  • Smart city vehicle management
  • Cargo traffic management
  • Cargo vehicle management

2.9 Automotive

2.9.1 Smart Car Configuration Management

Submitter(s)
Michael McCool
Category
Accessibility
Motivation
User interface personalization is a task that most often needs to be repeated for every device a user wishes to interact with regularly. With complex devices, this task can also be very time-consuming, which is problematic if the user regularly accesses similar, but not identical, devices, as in the case of several cars rented over a month. A standardized set of personal information and preferences that could be used to configure personalizable devices automatically would be very helpful in all cases where the interaction becomes a customary practice.
Expected Devices
Personal mobile device running an application providing command mediation capabilities. IoT-enabled smart car supporting remote sensing, actuation, and configuration functionality.
Expected Data
Command and status information transferred between the personal mobile device application and the car's services and devices. Profile data for user preferences.
Dependencies
  • WoT Thing Description
  • WoT Discovery
  • Optional: WoT Scripting API in application on mobile personal device and possibly in IoT orchestration services in the car.
Description
Basic in-car functionality is standardized to be managed by other devices. A user can control seat, radio or AC settings through a personalized multimodal interface shared by the car and their personal mobile device. User preferences are stored on the mobile Device (or in the cloud), and can be transferred across different car models handling a specific functionality (e.g. all cars with touchscreens should be able to adapt to a "high contrast" preference). The car can make itself available as a complex modality component that wraps around all functionality and supported modalities, or as a collection of modality components such as touchscreen, speech recognition system, or audio player. In the latter case, certain user preferences may be shared with other environments. For example, a user may opt to select the "high contrast" scheme at night on all of their displays, in the car or at home. A car that provides a set of modalities can be also adapted by the mobile device to compose an interface for its functionality, for example to manage playback of music tracks through the car's voice control system. Sensor data provided by the phone can be mixed with data recorded by the car's own sensors to profile user behavior which can be used as context in multimodal interaction.
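Since no standard data format for user interface preferences exists yet (see Gaps), the following JSON document is only a hypothetical sketch of what such a transferable preference profile could contain:

{
    "profileVersion": "0.1",
    "user": "urn:example:user:alice",
    "display": {
        "colorScheme": "high-contrast",
        "fontScale": 1.5
    },
    "audio": {
        "voiceFeedback": true,
        "volume": 0.7
    },
    "seat": {
        "heightMillimeters": 320,
        "lumbarSupport": "medium"
    },
    "climate": {
        "targetTemperatureCelsius": 21
    }
}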
Variants
Additional portable devices may be brought into the car and also be incorporated into an application, for example, a GPS navigation system.
Gaps
Data format describing user interface preferences.
Existing Standards
This use case is based on MMI UC 2.1.
Comments
Does not include Requirements section from original MMI use case.

2.10 Smart Home

2.10.1 Home WoT devices synchronize to TV programs

Submitter(s)
Hiroki Endo, Masaya Ikeo, Shinya Abe, Hiroshi Fujisawa
Target Users
Person watching TV, Broadcasters
Motivation
Many home devices, such as TVs, cleaning robots, and home lighting, connect to an IP network. When you watch a TV program, these devices should cooperate to enhance your experience. If the cleaning robot makes a loud noise while you are watching a TV program, it will hinder viewing. Also, even if you set up a theater environment with smart lights, it is troublesome to operate it yourself each time the TV program switches. Therefore, having WoT devices operate in accordance with the TV program being viewed improves the user experience. WoT devices work according to TV programs:
  • The cleaning robot stops at an important scene,
  • The color of the smart lights changes according to the TV program,
  • The smart mirror is notified that a favorite TV show will start.
Expected Devices
  • Hybridcast TV
  • Hybridcast Connect application (in a smartdevice such as smartphone)
  • Cleaning Robot
  • Smart Light (such as Philips Hue)
  • Smart Mirror
Expected Data
The trigger value for the scene of the TV program. The Hybridcast Connect application knows the Thing Descriptions of the devices in the home (discovery?).
Description

Home smart devices behave according to TV programs.

Hybridcast applications in TV emit information about TV programs for smart home devices. (Hybridcast is a Japanese Integrated Broadcast-Broadband system. Hybridcast applications are HTML5 applications that work on Hybridcast TV.)

The Hybridcast Connect application receives the information and controls smart home devices.
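As an illustrative sketch, the Hybridcast TV could expose the scene trigger as a WoT event that the Hybridcast Connect application subscribes to before invoking, for example, a pause action on the cleaning robot. All names and URLs below are assumptions made for this example:

{
    "@context": "https://www.w3.org/2019/wot/td/v1",
    "id": "urn:example:livingroom:hybridcast-tv",
    "title": "HybridcastTV",
    "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
    "security": ["nosec_sc"],
    "events": {
        "sceneTrigger": {
            "data": {
                "type": "object",
                "properties": {
                    "programId": { "type": "string" },
                    "trigger": { "type": "string", "enum": ["programStart", "importantScene", "programEnd"] }
                }
            },
            "forms": [{
                "href": "http://hybridcast-tv.example/events/sceneTrigger",
                "contentType": "application/json",
                "subprotocol": "longpoll",
                "op": "subscribeevent"
            }]
        }
    }
}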

Hybridcast Connect Application

2.11 Education

2.11.1 Shared Devices

Submitter(s)
Ege Korkan
Target Users
For the education category:
  • device owners : The university -> Research Group -> Specific Lab
  • device user : Students and potentially anyone who participates in plugfests
  • service provider : The university -> Research Group
  • network operator : The university
Motivation
This use case motivates a standardized way of using shared resources. One example is when a physical resource of a Thing, such as the arm of a robot, should not be used by multiple Consumers at the same time, while its position can be read by multiple Consumers.
Expected Devices
Concrete devices are irrelevant for this use case, but devices with a physical state are required. Currently, we have the following devices connected to Raspberry Pis on which the WoT stack (node-wot or similar) is running. Concrete device models can be given upon request.
  • Robotic arms
  • Conveyor belts
  • Motorized sliders where the robots or devices can be mounted on
  • Philips Hue devices: Light bulbs, LED Strips, Motion sensors, Switch. We do not have the source code of these devices (brownfield)
  • Various sensors (brightness, humidity, temperature, gyroscopic sensors)
  • LED Screen to display messages
There are also IP Cameras but they are not WoT compatible and are not planned to be made compatible.
Expected Data
Atmospheric data of a room, machine sensors
Affected WoT deliverables and/or work items
Thing Description, Scripting API, possibly security
Description
We are offering a practical course in which students can interact fully remotely with WoT devices and verify their physical actions via video streams. We have sensors and actuators such as robots. Students then build mashup applications to deepen their knowledge of WoT technologies. The official page of the course is here.
Security Considerations
The devices are connected to the Internet and are secured behind a router and proxy.
Privacy Considerations
None from the WoT point of view since we want the devices to be used by anyone and the devices do not share any information that is related to the students or us as the provider of the devices. However, there are cameras which can show humans entering the room as a side effect (they are meant to monitor the devices). The streams are accessible only to authorized users, the room has signs on the door and there is a cage around the area that is filmed.
Gaps

Thing Description

  • How to give hints that a particular action should not be used by others at the same time. A new keyword (like "shared":true) would be needed for devices that do not implement a describable mechanism (see the sketch after this list).
  • How to describe the mechanism that the Thing implements to manage the shared resources. Does this happen at the security level?
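For illustration only, such a hint could appear as an additional member of an action affordance. The "shared" keyword below is not part of any WoT specification; defining such a keyword (or an equivalent mechanism) is exactly the gap described above, and the URL is hypothetical:

"actions": {
    "moveArm": {
        "shared": true,
        "input": {
            "type": "object",
            "properties": {
                "x": { "type": "number" },
                "y": { "type": "number" },
                "z": { "type": "number" }
            }
        },
        "forms": [{ "href": "https://robotarm.example.com/actions/moveArm" }]
    }
}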

Scripting API

  • How does the Consumer code change when this mechanism is used? Is this settled at the implementation level or at the scripting level?

3. Use Cases for multiple domains

3.1 Discovery

Submitter(s)
Michael McCool
Target Users
All stakeholders:
  • device owners
  • device user
  • cloud provider
  • service provider
  • device manufacturer
  • gateway manufacturer
  • network operator (potentially transparent for WoT use cases)
  • identity provider
  • directory service operator
Motivation
Discovery defines a distribution mechanism for the metadata contained in WoT Thing Descriptions; it allows Things to advertise their capabilities and potential Consumers to find Things that match their needs. A standardized discovery mechanism is an enabler for convenient and ad-hoc orchestration of combinations of Things from different vendors while supporting appropriate security and privacy controls.
Expected Devices
  • Thing - any device or service that wishes to distribute (advertise) its metadata.
  • Consumer - any device or service that wishes to find Things whose location and metadata satisfies specified constraints.
  • Discovery Service - Mechanism by which metadata is distributed, which can involve a variety of services to handle spatial and semantic queries, register Thing Descriptions, provide access controls, etc.
Expected Data
  • Thing Descriptions - metadata describing a Thing
Affected WoT deliverables and/or work items
  • WoT Discovery
Note: this is a "horizontal" use case, and is driven by requirements in multiple verticals.
Description
A user wishing to build or instantiate an IoT service needs access to Thing Descriptions of installed and running devices satisfying specific requirements. These requirements can include being in or near a certain location, accessible using particular protocols or on a certain network, satisfying certain semantic categories, having certain capabilities, or having specific sub-APIs (interfaces). Discovery is the general process whereby WoT Thing Descriptions satisfying a specific set of such constraints are retrieved by a running system.
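For example, a Consumer could retrieve registered Thing Descriptions from a Thing Description Directory. The endpoint below follows the style of the WoT Discovery draft and is illustrative only; the exact API is defined by the WoT Discovery specification, which was still under development at the time of writing:

GET "https://directory.example.com/things"

The response is a list of Thing Descriptions, which the Consumer can then filter by location, semantic type, or supported protocols according to its needs.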
Variants
  • Run-time discovery allows late binding of orchestration services to particular devices and requires that consumers be able to adapt to Thing Descriptions discovered when a service is deployed.
  • Development-time discovery may be useful during system development to build services that can interface to a particular class of Thing Descriptions. In this case, what actually needs to be discovered is Thing Models, not specific Thing Descriptions.
Security Considerations
  • The distribution mechanism needs to be able to clearly authenticate potential users.
  • The distribution mechanism for metadata should only provide metadata to authorized users.
  • The distribution mechanism should be able to resist denial-of-service attacks seeking to overwhelm it with spurious requests.
  • The distribution mechanism should be able to preserve the integrity of metadata.
Privacy Considerations
  • Metadata should only be distributed to appropriate sets of requesters, with the definition of "appropriate" configurable by the source of the metadata.
  • Unauthorized users should not be able to access or infer information that they do not have access rights to.
  • Providers of metadata should be able to withdraw metadata from distribution at any time.
  • Metadata should not be retained indefinitely.
Gaps
  • The current WoT standards define a metadata format (the Thing Description) but not a means of distributing it.
Existing Standards
  • WoT Thing Description
  • CoreRD
  • DID
Comments
  • Many discovery mechanisms already exist but many do not satisfy all the requirements above, e.g. they may have insufficient privacy controls. A standards solution that builds upon prior work in this area is desirable.

3.2 Multi-Vendor System Integration - Out of the box interoperability

Submitter(s)
Michael Lagally
Target Users
  • device owner
  • service provider
  • cloud provider
  • device manufacturer
  • gateway manufacturer
Motivation
  • As a device owner, I want to know whether a device will work with my system before I purchase it to avoid wasting money.
    • Installers of IoT devices want to be able to determine if a given device will be compatible with the rest of their installed systems and whether they will have access to its data and affordances.
  • As a developer, I want TDs to be as simple as possible so that I can efficiently develop them.
    • Here "simple" should relate to the end goal, "efficiently develop"; that is, TDs should be straightforward for the average developer to complete and validate.
  • As a developer, I want to be able to validate that a Thing will be compatible with a Consumer without having to test against every possible consumer.
  • As a cloud provider I want to onboard, manage and communicate with as many devices as possible out of the box. This should be possible without device specific customization.
Expected Devices
sensors, actuators, gateways, cloud, directory service.
Expected Data
discrete or streaming data.
Affected WoT deliverables and/or work items
WoT Profile, WoT Thing Description
Description
As a consumer of devices I want to be able to process data from any device that conforms to a class of devices. I want to have a guarantee that I'm able to correctly interact with all affordances of the Thing that complies with this class of devices. Behavioral ambiguities between different implementations of the same description should not be possible. I want to integrate it into my existing scenarios out of the box, i.e. with close to zero configuration tasks.
Comments
The profile specification is currently in development by the architecture task force. The current draft of the specification is available at: https://github.com/w3c/wot-profile
Recommendations for commonalities and interoperability profiles of IoT platforms: https://european-iot-pilots.eu/wp-content/uploads/2018/11/D06_02_WP06_H2020_CREATE-IoT_Final.pdf

3.3 Digital Twin

Submitter(s)
Michael Lagally
Target Users
Device owners, cloud provider.
Motivation

A digital twin is the virtual representation of a physical asset such as a machine, a vehicle, a robot, or a sensor. Using digital twins allows businesses to analyze their physical assets to troubleshoot in real time, predict future problems, minimize downtime, and perform simulations to create new business opportunities.

A digital twin may also be called a twin or a shadow. Digital twin technology may be referred to as device virtualization.

Digital twins can be located in the edge or in the cloud.

Expected Devices

Various devices such as sensors, machines, vehicles, production lines, industry robots.

Digital twin platforms at the edge or in the cloud.

Expected Data
Machine status information, discrete sensor data or data streams.
Dependencies
  • WoT Architecture
  • WoT Thing Description
  • WoT Profile
  • WoT Scripting?
Description
The user benefits from using digital twins with the following scenarios:
  • Better visibility: Continually view the operations of the machines or devices, and the status of their interconnected systems.
  • Accurate prediction: Retrieve the future state of the machines from the digital twin model by using modeling.
  • What-if analysis: Easily interact with the model to simulate unique machine conditions and perform what-if analysis using well-designed interfaces.
  • Documentation and communication: Use of the digital twin model helps to understand, document, and explain the behavior of a specific machine or a collection of machines.
  • Integration of disparate systems: Connect with back-end applications related to supply chain operations such as manufacturing, procurement, warehousing, transportation, or logistics.
Variants
Virtual Twin

The virtual twin is a representation of a physical device or an asset. A virtual twin uses a model that contains observed and desired attribute values and also uses a semantic model of the behavior of the device.

Intermittent connectivity: An application may not be able to connect to the physical asset. In such a scenario, the application must be able to retrieve the last known status and to control the operation states of other assets.

Protocol abstraction: Typically, devices use a variety of protocols and methods to connect to the IoT network. From a user's perspective this complexity should not affect other business applications such as an enterprise resource planning (ERP) application.

Business rules: The user can specify the normal operating range of a property in a semantic model. Business rules can be declaratively defined and actions can be automatically invoked in the edge or on the device.

Example: In a fleet of connected vehicles, the user monitors a collection of operating parameters, such as fuel level, location, and speed. The semantics-based virtual twin model enables the user to decide whether the operating parameters are in the normal range. In out-of-range conditions, the user can take appropriate actions.
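As an illustration (values and URL are hypothetical), value constraints for such a property can already be expressed in a Thing Description using the minimum and maximum data schema terms; a twin platform could use these, or an extension vocabulary, to derive a default operating-range rule:

"properties": {
    "fuelLevel": {
        "type": "number",
        "unit": "percent",
        "minimum": 0,
        "maximum": 100,
        "readOnly": true,
        "forms": [{ "href": "https://fleet.example.com/vehicle42/fuelLevel" }]
    }
}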

Predictive Twin

In a predictive twin, the digital twin implementation builds an analytical or statistical model for prediction by using a machine-learning technique. It need not involve the original designers of the machine. It is different from the physics-based models that are static, complex, do not adapt to a constantly changing environment, and can be created only by the original designers of the machine.

A data analyst can easily create a model based on external observation of a machine and can develop multiple models based on the user’s needs. The model considers the entire business scenario and generates contextual data for analysis and prediction.

When the model detects a future problem or a future state of a machine, the user can prevent or prepare for them. The user can use the predictive twin model to determine trends and patterns from the contextual machine data. The model helps to address business problems.

Twin Projections

In twin projections, the predictions and the insights integrate with back-end business applications, making IoT an integral part of business processes. When projections are integrated with a business process, they can trigger a remedial business workflow.

Prediction data offers insights into the operations of machines. Projecting these insights into the back-end applications infrastructure enables business applications to interact with the IoT system and transform into intelligent systems.

Gaps
WoT does not define a way to describe the behavior of a thing to use for a simulation.

3.4 Cross Protocol Interworking

Submitter(s)
Michael Lagally
Target Users
Device owners, cloud providers.
Motivation
In smart city, home and industrial scenarios various devices are connected to a common network. These devices implement different protocols. To enable interoperability, an "agent" needs to communicate across different protocols. Platforms for this agent can be edge devices, gateways or cloud services. Interoperability across protocols is a must for all user scenarios that integrate devices from more than one protocol.
Expected Devices
Various sensors, e.g. temperature, light, humidity, vibration, noise, air quality, edge devices, gateways, cloud servers and services.
Expected Data
Discrete sensor values, such as temperature, light, humidity, vibration, noise, air quality readings. A/V streams. The data can be delivered periodically or on demand.
Dependencies
WoT Profiles.
Description

There are multiple user scenarios that are addressed by this use case.

An example in the smart home environment is the automatic control of lamps, air conditioners, heating, and window blinds in a household based on sensor data, e.g. sunlight, human presence, calendar and clock, etc.

In an industrial environment individual actuators and production devices use different protocols. Examples include MQTT, OPC-UA, Modbus, Fieldbus, and others. Gathering data from these devices, e.g. to support digital twins or big data use cases requires an "Agent" to bridge across these protocols. To provide interoperability and to reduce implementation complexity of this agent a common set of (minimum and maximum) requirements need to be supported by all interoperating devices.

A smart city environment is similar to the industrial scenario in terms of device interoperability. The devices differ, however; they include smart traffic lights, traffic monitoring systems, people counters, and cameras.
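As an illustrative sketch (hypothetical URLs), a single property can be exposed over several protocols by listing multiple forms in its Thing Description, letting the consuming agent select the protocol it supports; protocol-specific details would be covered by the WoT Binding Templates:

"properties": {
    "temperature": {
        "type": "number",
        "readOnly": true,
        "observable": true,
        "forms": [
            {
                "href": "https://gateway.example.com/machines/7/temperature",
                "contentType": "application/json",
                "op": ["readproperty", "observeproperty"]
            },
            {
                "href": "mqtt://broker.example.com/machines/7/temperature",
                "contentType": "application/json",
                "op": "observeproperty"
            }
        ]
    }
}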

Gaps
A common profile across protocols is required to address this use case.
Existing Standards
MQTT, OPC-UA, BACNet, CoAP, various other home and industrial protocols.

3.5 Multimodal System Integration

3.5.1 Multimodal Recognition Support

Submitter(s)
Michael McCool
Category
Accessibility
Motivation
Recognizer system development has arrived at a point of maturity where if we want to dramatically enhance recognition performance, sensor fusion from multiple modalities is needed. In order to achieve this, an image recognizer should incorporate results coming from other kinds of recognizers (e.g. audio recognizer) within the network engaged in the same interaction cycle.
Expected Devices
Audio sensing device (microphone). Video sensing device (camera). Audio recognition service. Video recognition service. Devices capable of presenting alerts in various modalities.
Expected Data
Command and status information transferred between the sensing devices, the recognition services, and the alert devices. Profile data for user preferences.
Dependencies
  • WoT Thing Description
  • WoT Discovery
  • Optional: WoT Scripting API in application on mobile personal device and possibly in IoT orchestration services.
Description
An audio recognizer has been trained with the more common sounds in the house, in order to provide alerts in case of an emergency. In the same house a security system uses a video recognizer to identify people at the front door. These two systems need to cooperate with a remote home management system to provide integrated services.
Gaps
Support for video and audio recognition services.
Existing Standards
This use case is based on MMI UC 5.1.
Comments
Does not include Requirements section from original MMI use case.

3.5.2 Enhancement of Synergistic Interactions

Submitter(s)
Michael McCool
Category
Accessibility
Motivation
One of the main indicators concerning the usability of a system is the corresponding level of accessibility provided by it. The opportunity for all the users to receive and to deliver all kinds of information, regardless of the information format or the type of user profile, state or impairment is a recurrent need in web applications. One of the means to achieve accessibility is the design of a more synergic interaction based on the discovery of multimodal Modality Components. Synergy is two or more entities functioning together to produce a result that is not obtainable independently. It means "working together". For example, how to avoid disruptive interactions in nomadic systems (always affected by the changing context) is an important issue. In these applications, user interaction is difficult, distracted and less precise. Discovery and use of alternative input and output devices can increase synergic interaction offering new possibilities more adapted to the current context. Such a system can also enhance the fusion process for target groups of users experiencing permanent or temporary learning difficulties or with sensorial, emotional or social impairments.
Expected Devices
A normal client computer with I/O devices that need to be emulated. Alternative I/O devices that need to be interfaced to the client system.
Expected Data
Command and status information transferred between the client computer and the alternative I/O devices. Profile data for user preferences.
Dependencies
  • WoT Thing Description
  • WoT Discovery
  • Optional: WoT Scripting API in application on mobile personal device and possibly in IoT orchestration services.
Description
A person working mostly with a PC is having a problem with their right arm and hands. They are unable to use a mouse or a keyboard for a few months. They can point at things, sketch, clap, make gestures, but they cannot make any precise movements. A generic interface allows this person to perform their most important tasks on their personal devices: to call someone, open a mailbox, access their agenda or navigate over some Web pages. The generic interface can propose child-oriented intuitive interfaces like a clapping-based interface, a very articulated TTS component, or reduced gesture input widgets. Other specialized devices might include phones with very big numbers, very simple remote controls, screens displaying text at high resolution, or voice command devices.
Existing Standards
This use case is based on MMI UC 5.2.
Comments
Does not include Requirements section from original MMI use case.

3.6 Accessibility

3.6.1 Audiovisual Devices Acting as Smartphone Extensions

Submitter(s)
Michael McCool
Category
Accessibility
Motivation

Many of today's home IoT-enabled devices can provide similar functionality (e.g. audio/video playback), differing only in certain aspects of the user interface. This use case would allow continuous interaction with a specific application as the user moves from room to room, with the user interface switched automatically to the set of devices available in the user's present location.

On the other hand, some devices can have specific capabilities and user interfaces that can be used to add information to a larger context that can be reused by other applications and devices. This drives the need to spread an application across different devices to achieve a more user-adapted and meaningful interaction according to the context of use. Both aspects provide arguments for exploring use cases where applications use distributed multimodal interfaces.

Expected Devices
Mobile phone or other client running an application requiring an extended and more accessible user interface. IoT-enabled audio-visual devices providing audio and visual information display capabilities that can be used to augment the user interface of the application. Possible edge computation services providing speech-to-text or described video (e.g. object detection) capabilities.
Expected Data
Visual display information mapping audio to visual modalities, for example text generated from voice recognition. Text from an application that needs to be displayed at a larger size. Visual alerts corresponding to audio stimuli, e.g. sound effects in a game mapped to visual icons. Visual information mapped to audio, for example described video based on an AI service providing object recognition.
Dependencies
  • WoT Thing Description
  • WoT Discovery
  • Optional: WoT Scripting API accessible from application for interacting with devices.
Description
A home entertainment system is used by a mobile device as a set of user interface components. In addition to media rendering and playback, these devices also act as input or output modalities for an application, for example one running on a smartphone. The application's native user interface does not have to be manipulated directly at all. A wall-mounted touch-sensitive TV could be used to navigate applications, and a wide-range microphone can handle speech input. Spatial (Kinect-style) gestures may also be used to control application behavior. Accessibility support software on the smartphone discovers available modalities and arranges them to best serve the user's purpose. One display can be used to show photos and movies, another for navigation. As the user walks into another room, this configuration adapts dynamically to the new location. User intervention may sometimes be required to decide on the most convenient modality configuration. The state of the interaction is maintained while switching between modality sets. For example, if the user was navigating a GUI menu in the living room, it is carried over to another screen when they switch rooms, or replaced with a different modality such as voice if there are no displays in the new location.
Variants
Modalities may be translated from one form to another to accommodate accessibility issues, for example, visual cues into audio cues and vice-versa, as appropriate.
Gaps
An AI service may be required to perform modality mapping, for example object recognition.
Existing Standards
This use case is based on MMI UC 1.1.
Comments
Does not include Requirements section from original MMI use case. Variant supporting modality conversion is not included in the original MMI use case.

3.6.2 Unified Smart Home Control and Status Interface

Submitter(s)
Michael McCool
Category
Accessibility
Target Users
Motivation
The increase in the number of controllable devices in an intelligent home creates a problem with controlling all available services in a coherent and useful manner. Having a shared context, built from information collected through sensors and direct user input, would improve recognition of user intent, and thus simplify interactions. In addition, multiple input mechanisms could be selected by the user based on device type, level of trust and the type of interaction required for a particular task.
Expected Devices
Mobile phone or other client running an application providing command mediation capabilities. IoT-enabled smart home devices supporting remote sensing and actuation functionality.
Expected Data
Command and status information transferred between the command mediation application and one or more devices.
Dependencies
  • WoT Thing Description
  • WoT Discovery
  • Optional: WoT Scripting API accessible from application for interacting with devices.
Description

Smart home functionality (window blinds, lights, air conditioning etc.) is controlled through a multimodal interface, composed from modalities built into the house itself (e.g. speech and gesture recognition) and those available on the user's personal devices (e.g. smartphone touchscreen). The system may automatically adapt to the preferences of a specific user, or enter a more complex interaction if multiple people are present.

Sensors built into various devices around the house can act as input modalities that feed information to the home and affect its behavior. For example, lights and temperature in the gym room can be adapted dynamically as workout intensity recorded by the fitness equipment increases. The same data can also increase or decrease volume and tempo of music tracks played by the user's mobile device or the home's media system.

Variants
The intelligent home in tandem with the user's personal devices can additionally monitor user behavior for emotional patterns such as 'tired' or 'busy' and adapt further.
Gaps
A service may be needed to recognize gestures and emotional states.
Existing Standards
This use case is based on MMI UC 1.2; original title was Intelligent Home Apparatus.
Comments
Does not include Requirements section from original MMI use case.

3.7 Security

3.7.1 OAuth2 Flows

Submitter(s)
Michael McCool, Cristiano Aguzzi
Target Users
  • device owner
  • device user
  • device application
  • service provider
  • identity provider
  • directory service
Motivation

OAuth 2.0 is an authorization protocol widely known for its usage across several web services. It enables third-party applications to obtain limited access to HTTP services, either on behalf of the resource owner or on their own behalf. The protocol defines the following actors:

  • Client: an application that wants to use a resource owned by the resource owner.
  • Authorization Server: An intermediary that authorizes the client for a particular scope.
  • Resource: a web resource
  • Resource Server: the server where the resource is stored
  • Resource Owner: the owner of a particular web resource. If it is a human, it is usually referred to as an end-user. More specifically, from the RFC:
    • An entity capable of granting access to a protected resource.

These actors can be mapped to WoT entities:

  • Client is a WoT Consumer
  • Authorization Server is a third-party service
  • Resource is an interaction affordance
  • Resource Server is a Thing described by a Thing Description acting as a server. May be a device or a service.
  • Resource Owner might be different in each use case. A Thing Description may also combine resources from different owners or web servers.

TO DO: Check the OAuth 2.0 spec to determine exactly how Resource Owner is defined. Is it the actual owner of the resource (e.g. running the web server) or simply someone with the rights to access that resource?

The OAuth 2.0 protocol specifies an authorization layer that separates the client from the resource owner. The basic steps of this protocol are summarized in the following diagram:

+--------+                               +---------------+
|        |--(A)- Authorization Request ->|   Resource    |
|        |                               |     Owner     |
|        |<-(B)-- Authorization Grant ---|               |
|        |                               +---------------+
|        |
|        |                               +---------------+
|        |--(C)-- Authorization Grant -->| Authorization |
| Client |                               |     Server    |
|        |<-(D)----- Access Token -------|               |
|        |                               +---------------+
|        |
|        |                               +---------------+
|        |--(E)----- Access Token ------>|    Resource   |
|        |                               |     Server    |
|        |<-(F)--- Protected Resource ---|               |
+--------+                               +---------------+

Steps A and B define what is known as the authorization grant type or flow. What is important to realize here is that not all of these interactions are meant to take place over a network protocol; in some cases, interaction with a human through a user interface may be intended. OAuth 2.0 defines four basic flows plus an extension mechanism. The most common are:

  • code
  • implicit
  • password (of resource owner)
  • client (credentials of the client)

In addition, a particular extension which is of interest to IoT is the device flow. Further information about the OAuth 2.0 protocol can be found in IETF RFC6749. In addition to the flows, OAuth 2.0 also supports scopes. Scopes are identifiers which can be attached to tokens. These can be used to limit authorizations to particular roles or actions in an API. Each token carries a set of scopes and these can be checked when an interaction is attempted and access can be denied if the token does not include a scope required by the interaction. This document describes relevant use cases for each of the OAuth 2.0 authorization flows.

Expected Devices
To support OAuth 2.0, devices must meet the following requirements:
  • Both the producer and consumer must be able to create and participate in a TLS connection.
  • The producer must be able to verify an access (bearer) token (i.e. have sufficient computational power/connectivity).
Comment:
  • Investigate whether DTLS can be used. Certainly the connection needs to be encrypted; this is required in the OAuth 2.0 specification.
  • Investigate whether protocols other than HTTP can be used, e.g. CoAP.
    • An IETF draft about OAuth support over CoAP (encrypted using various mechanisms such as DTLS or CBOR Object Signing and Encryption) exists: draft-ietf-ace-oauth
Expected Data
Depending on the OAuth 2.0 flow specified, various URLs and elements need to be provided, for example the location of the authorization and token servers. OAuth 2.0 is also based on bearer tokens and therefore needs to include the same data as a bearer token scheme, for example the expected encryption suite. Finally, OAuth 2.0 supports scopes, so these need to be defined in the security scheme and specified in the form.
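
For illustration, the following non-normative sketch (a TypeScript object literal) shows how such data might appear in a Thing Description security scheme and form. The endpoint URLs and scope names are hypothetical, and the set of permitted flow values is governed by the WoT Thing Description specification.

// Non-normative sketch of an OAuth 2.0 security scheme in a TD.
// Endpoint URLs and scope names are hypothetical.
const lampTd = {
  "@context": "https://www.w3.org/2019/wot/td/v1",
  title: "Example Lamp",
  securityDefinitions: {
    oauth2_sc: {
      scheme: "oauth2",
      flow: "code",                                         // authorization grant type (illustrative)
      authorization: "https://auth.example.com/authorize",  // authorization endpoint
      token: "https://auth.example.com/token",              // token endpoint
      scopes: ["lamp:read", "lamp:write"]                   // scopes defined by the scheme
    }
  },
  security: ["oauth2_sc"],
  properties: {
    status: {
      type: "string",
      forms: [{
        href: "https://lamp.example.com/status",
        scopes: ["lamp:read"]                               // scope required by this form
      }]
    }
  }
};
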
Affected WoT deliverables and/or work items
Thing Description, Scripting API, Discovery, and Security.
Description
A general use case for OAuth 2.0 is when a WoT Consumer wants to access restricted interaction affordances, in particular when those affordances have a specific resource owner who may grant temporary permissions to the consumer. The WoT Consumer can either be hosted on a remote device or interact directly with the end-user inside an application.
Variants

For each OAuth 2.0 flow, there is a corresponding use case variant. We also include the experimental "device" flow for consideration.

code

A natural application of this protocol is when the end-user wants to interact directly with the consumed thing or to grant their authorization to a remote device. In fact, from RFC6749:

  • Since this is a redirection-based flow, the client must be capable of interacting with the resource owner's user-agent (typically a web browser) and capable of receiving incoming requests (via redirection) from the authorization server.

This implies that the code flow can only be used when the resource owner interacts directly with the WoT Consumer at least once. Typical scenarios are:

  • In a home automation context, a device owner uses third-party software to interact with or orchestrate one or more devices.
  • Similarly, in a smart farm, the device owner might delegate their authorization to third-party services.
  • In a smart home scenario, Thing Description Directories might be deployed using this authorization mechanism. In particular, the list of the registered TDs might require an explicit read authorization request to the device owner (i.e. a human who has bought and installed the device).
  • ...

The following diagram shows the steps of the protocol adapted to WoT idioms and entities. In this scenario, the WoT Consumer has read the Thing Description of a Remote Device and wants to access one of its WoT Affordances protected with the OAuth 2.0 code flow.

                                                 +-----------+
  +----------+                                   |           |
  | Resource |                                   |  Remote   |
  |   Owner  |                                   |  Device   +<-------+
  |          |                                   |           |        |
  +----+-----+                                   +-----------+        |
       ^                                                              |
       |                                                              |
      (B)                                                             |
+------------+          Client Identifier      +---------------+      |
|           ------(A)-- & Redirection URI ---->+               |      |
|   User-    |                                 | Authorization |      |
|   Agent   ------(B)-- User authenticates --->+     Server    |      |
|            |                                 |               |      |
|           ------(C)-- Authorization Code ---<+               |      |
+---+----+---+                                 +---+------+----+      |
    |    |                                         ^      v           |
   (A)  (C)                                        |      |           |
    |    |                                         |      |           |
    ^    v                                         |      |           |
+---+----+---+                                     |      |           |
|            |>-+(D)-- Authorization Code ---------'      |           |
|    WoT     |         & Redirection URI                  |           |
|  Consumer  |                                            |           |
|            |<-+(E)----- Access Token -------------------'           |
+-----+------+      (w/ Optional Refresh Token)                       |
      v                                                               |
      |                                                               |
      +-----------(F)----- Access WoT --------------------------------+
                           Affordance

Notice that steps (A), (B) and (C) are broken into two parts as they pass through the User-Agent.
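
Below is a minimal TypeScript sketch of steps (D) through (F) from the WoT Consumer's point of view, assuming standard OAuth 2.0 token endpoint semantics. The endpoint URLs, client identifier, redirection URI, and affordance URL are hypothetical.

// Minimal sketch of steps (D)-(F): exchange an authorization code for an
// access token and use it to access a protected WoT affordance.
async function accessAffordanceWithCodeFlow(authorizationCode: string): Promise<unknown> {
  // (D) Exchange the authorization code (and redirection URI) for a token.
  const tokenResponse = await fetch("https://auth.example.com/token", {
    method: "POST",
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code: authorizationCode,
      redirect_uri: "https://consumer.example.com/callback",
      client_id: "example-wot-consumer"
    })
  });
  // (E) The response carries the access token (and optionally a refresh token).
  const { access_token } = await tokenResponse.json();
  // (F) Access the protected WoT affordance with the bearer token.
  const affordance = await fetch("https://lamp.example.com/status", {
    headers: { Authorization: `Bearer ${access_token}` }
  });
  return affordance.json();
}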

device

The device flow (IETF RFC 8628) is a variant of the code flow for browserless and input-constrained devices. Similarly to its parent flow, it requires a close interaction between the resource owner and the WoT Consumer. Therefore, the use cases for this flow are the same as for the code authorization grant, but restricted to devices that do not have a rich means to interact with the resource owner. However, differently from the code flow, RFC 8628 states explicitly that one of the actors of the protocol is an end-user interacting with a browser (even though Section 6.2 briefly describes authentication using a companion app and BLE), as shown in the following (slightly adapted) diagram:

+----------+
|          |
|  Remote  |
|  Device  |
|          |
+----^-----+
     |
     | (G) Access WoT Affordance
     |
+----+-----+                                +----------------+
|          +>---(A)-- Client Identifier ---v+                |
|          |                                |                |
|          +<---(B)-- Device Code,      ---<+                |
|          |          User Code,            |                |
|   WoT    |          & Verification URI    |                |
| Consumer |                                |                |
|          |  [polling]                     |                |
|          +>---(E)-- Device Code       --->+                |
|          |          & Client Identifier   |                |
|          |                                |  Authorization |
|          +<---(F)-- Access Token      ---<+     Server     |
+-----+----+   (& Optional Refresh Token)   |                |
      v                                     |                |
      :                                     |                |
     (C) User Code & Verification URI       |                |
      :                                     |                |
      ^                                     |                |
+-----+----+                                |                |
| End User |                                |                |
|    at    +<---(D)-- End user reviews  --->+                |
|  Browser |          authorization request |                |
+----------+                                +----------------+

Notable mentions:

  • the protocol is heavily end-user oriented. In fact, the RFC states the following
    • Due to the polling nature of this protocol (as specified in Section 3.4), care is needed to avoid overloading the capacity of the token endpoint. To avoid unneeded requests on the token endpoint, the client SHOULD only commence a device authorization request when prompted by the user and not automatically, such as when the app starts or when the previous authorization session expires or fails.
  • TLS is required both between the WoT Consumer and the Authorization Server and between the Browser and the Authorization Server.
  • Other user interaction methods may be used but are left out of scope.
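
The following TypeScript sketch shows the WoT Consumer's side of this device flow (steps (A), (B), (E) and (F) in the diagram), following RFC 8628. The endpoint URLs and client identifier are hypothetical.

// Minimal sketch of the device flow from the WoT Consumer's point of view.
async function deviceFlowToken(): Promise<string> {
  // (A)-(B) Request a device code and user code from the authorization server.
  const authResponse = await fetch("https://auth.example.com/device_authorization", {
    method: "POST",
    body: new URLSearchParams({ client_id: "example-wot-consumer" })
  });
  const { device_code, user_code, verification_uri, interval } = await authResponse.json();
  // (C) Present the user code and verification URI to the end user,
  // e.g. on a small display or via a companion app.
  console.log(`Visit ${verification_uri} and enter code ${user_code}`);
  // (E)-(F) Poll the token endpoint until the end user approves the request.
  for (;;) {
    await new Promise((resolve) => setTimeout(resolve, (interval ?? 5) * 1000));
    const tokenResponse = await fetch("https://auth.example.com/token", {
      method: "POST",
      body: new URLSearchParams({
        grant_type: "urn:ietf:params:oauth:grant-type:device_code",
        device_code,
        client_id: "example-wot-consumer"
      })
    });
    const result = await tokenResponse.json();
    if (result.access_token) return result.access_token;              // token issued
    if (result.error !== "authorization_pending") throw new Error(result.error);
  }
}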

client credential

The Client Credentials grant type is used by clients to obtain an access token outside of the context of an end-user. From RFC6749:

  • The client can request an access token using only its client credentials (or other supported means of authentication) when the client is requesting access to the protected resources under its control, or those of another resource owner that has been previously arranged with the authorization server (the method of which is beyond the scope of this specification).

Therefore the client credential grant can be used:

  • When the resource owner is a public authority. For example, in a smart city context, the authority provides a web service where an application id can be registered.
  • Companion application
  • Industrial IoT. Consider a smart factory where the devices or services are provisioned with client credentials.
  • ...

The Client Credentials flow is illustrated in the following diagram. Notice how the Resource Owner is not present.

+----------+
|          |
|  Remote  |
|  Device  |
|          |
+----^-----+
     |
     |  (C) Access WoT Affordance
     ^
+----+-----+                                  +---------------+
|          |                                  |               |
|          +>--(A)- Client Authentication --->+ Authorization |
|   WoT    |                                  |     Server    |
| Consumer +<--(B)---- Access Token ---------<+               |
|          |                                  |               |
|          |                                  +---------------+
+----------+

Comment: Usually client credentials are distributed using an external service which is used by humans to register a particular application. For example, the npm CLI has a companion dashboard where a developer requests the generation of a token that is then passed to the CLI. The token is used to verify the publishing process of npm packages in the registry. Further examples are the Docker CLI and OpenID Connect Client Credentials.
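
A minimal TypeScript sketch of this flow (steps (A) to (C) in the diagram) is shown below. The endpoint URL, client credentials, and affordance URL are hypothetical; note that RFC6749 also allows (and recommends) passing the client credentials via HTTP Basic authentication instead of the request body.

// Minimal sketch of the client credentials flow and subsequent affordance access.
async function accessAffordanceWithClientCredentials(): Promise<unknown> {
  // (A)-(B) Authenticate with the client's own credentials and obtain a token.
  const tokenResponse = await fetch("https://auth.example.com/token", {
    method: "POST",
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "example-wot-consumer",
      client_secret: "provisioned-secret"
    })
  });
  const { access_token } = await tokenResponse.json();
  // (C) Access the protected WoT affordance with the bearer token.
  const affordance = await fetch("https://device.example.com/properties/status", {
    headers: { Authorization: `Bearer ${access_token}` }
  });
  return affordance.json();
}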

implicit

Deprecated. From the OAuth 2.0 Security Best Current Practice:

  • In order to avoid these issues, clients SHOULD NOT use the implicit grant (response type "token") or other response types issuing access tokens in the authorization response, unless access token injection in the authorization response is prevented and the aforementioned token leakage vectors are mitigated.

The RFC above suggests using code flow with Proof Key for Code Exchange (PKCE) instead.

The implicit flow was designed for public clients, typically implemented inside a browser (i.e. JavaScript clients). Like the code flow, it is redirection-based and requires direct interaction with the resource owner's user-agent. However, it requires one less step to obtain a token, as the token is returned directly in the authorization response (see the diagram below).
In the WoT context this flow is not particularly different from the code grant and it can be used in the same scenarios.
Comment: even though the implicit flow is deprecated, existing services may still be using it.

+----------+
| Resource |
|  Owner   |
|          |
+----+-----+
     ^
     |
    (B)
+----------+          Client Identifier     +---------------+
|         ------(A)-- & Redirection URI --->+               |
|  User-   |                                | Authorization |
|  Agent  ------(B)-- User authenticates -->+     Server    |
|          |                                |               |
|          +<---(C)--- Redirection URI ----<+               |
|          |          with Access Token     +---------------+
|          |            in Fragment
|          |                                +---------------+
|          +----(D)--- Redirection URI ---->+   Web-Hosted  |
|          |          without Fragment      |     Client    |
|          |                                |    Resource   |
|     (F)  +<---(E)------- Script ---------<+               |
|          |                                +---------------+
+-+----+---+
  |    |
 (A)  (G) Access Token
  |    |
  ^    v
+-+----+---+                                   +----------+
|          |                                   |  Remote  |
|   WoT    +>---------(H)--Access WoT--------->+  Device  |
| Consumer |               Affordance          |          |
|          |                                   +----------+
+----------+

resource owner password

Deprecated. From the OAuth 2.0 Security Best Current Practice:

  • The resource owner password credentials grant MUST NOT be used. This grant type insecurely exposes the credentials of the resource owner to the client. Even if the client is benign, this results in an increased attack surface (credentials can leak in more places than just the AS) and users are trained to enter their credentials in places other than the AS.

For completeness, the flow diagram is shown below.

 +----------+
 | Resource |
 |  Owner   |
 |          |
 +----+-----+
      v
      |    Resource Owner
     (A) Password Credentials
      |
      v
+-----+----+                                  +---------------+
|          +>--(B)---- Resource Owner ------->+               |
|          |         Password Credentials     | Authorization |
|   WoT    |                                  |     Server    |
| Consumer +<--(C)---- Access Token ---------<+               |
|          |    (w/ Optional Refresh Token)   |               |
+-----+----+                                  +---------------+
      |
      | (D) Access WoT Affordance
      |
 +----v-----+
 |  Remote  |
 |  Device  |
 |          |
 +----------+
Security Considerations
See OAuth 2.0 security considerations in RFC6749. See also RFC 8628 section 5 for device flow.
Comments
Notice that the OAuth 2.0 protocol is not an authentication protocol; however, OpenID Connect defines an authentication layer on top of OAuth 2.0.

3.8 Lifecycle

3.8.1 Device Lifecycle

Submitter(s)
Michael Lagally
Target Users
device manufacturer, gateway manufacturer, cloud provider
Motivation
The architecture specification currently does not address lifecycle.
Description
Handle the entire device lifecycle: Define terminology for lifecycle states and transitions.

Actors (each represents a physical person or a group of persons, e.g. a company):

  • Manufacturer
  • Service Provider
  • Network Provider (potentially transparent for WoT use cases)
  • Device Owner (User)
  • Others?

Roles:

Depending on the use case, an actor can have multiple roles, e.g. security maintainer. Roles can be delegated.
Variants
There are (at least) two different entities to consider:
  • Things / Devices
  • Consumers, e.g. cloud services or gateways
In more complex use cases there are additional entities:
  • Intermediates
  • Directories
Gaps
The current architecture spec does not describe device lifecycle in detail. A common lifecycle model helps to clarify terminology and structures the discussion in different groups. Interaction of a device with other entities such as directories may introduce additional states and transitions.
Existing Standards
  • WoT Security
  • ETSI OneM2M
  • OMA LwM2M
  • OCF
  • IEEE
  • SIM cards / GSMA
  • IETF
  • Application Lifecycle (W3C Multimodal Interaction WG)
Comments
All lifecycle contributions and discussion documents are available at: https://github.com/w3c/wot-architecture/blob/main/proposals/lifecycle

Documents that were created / discussed in the architecture TF.

3.9 VR/AR

3.9.1 AR Virtual Guide

Submitter(s)
  • Rob Smith
  • Kaz Ashimura
Target Users
  • device owners
  • device user
  • cloud provider
  • service provider
  • device manufacturer
  • network operator (potentially transparent for WoT use cases)
  • identity provider
  • directory service operator
Motivation
Using a wearable semi-transparent display, users can be guided by a virtual assistant through a physical area of interest, with a rendered overlay to visualize events, annotate structures and other physical features, or visualize live and historical data associated with features of interest (which may or may not be at the same physical location as the sensor generating the data). An annotated map may provide additional geospatial guidance, including identification of landmarks and locations of devices. The system may also guide the user along a specific trajectory.
Expected Devices
  • Wearable, semi-transparent head-mounted display
  • Headphones or speakers for audio output
  • Geopose and motion estimator (various technologies can be used)
  • Data processor to integrate all data (including live and historical data and geopose), generate annotations for the display, and record/play scenes
Expected Data
  • 3D Position, orientation, velocity, and acceleration of the user
  • Corresponding geolocation information (latitude, longitude, altitude) for all features of interest, including but not limited to physical landmarks, roads and paths, and the locations of sensors' measurement points.
  • Timestamps to allow synchronization between the annotations and data streams and the user's movement
Affected WoT deliverables and/or work items
  • WoT Thing Description
  • WoT Binding Templates
  • WoT Discovery
  • Optional: WoT Scripting API accessible from application for interacting with devices.
Description
  • The user can travel around a real space with guidance from virtually defined geospatial data projected on a head-mounted wearable display synchronized with the view of the physical environment.
  • The wearable display can generate position and orientation (geopose) data so that the user's movement can be traced through the physical environment and synchronized with virtual features.
  • The user can control the video images provided by the system, based on sensors attached to the display system or other means of control (gestures, voice input, etc.).
  • The technology should include synchronization of playback of stored video media and related sensors, displays, and devices as well as the display of geolocation information from the virtual map.
  • Discovery of sensors should take into account the position and field of view of the user so that data can be retrieved only for the relevant features of interest.
  • Discovery may additionally want to consider the motion (e.g. velocity) of the user so that data soon to come into view can be prefetched.
  • Metadata for sensors needs to distinguish between the location of the device itself and the feature of interest it is measuring. For example, a camera might monitor traffic on a highway. The feature of interest is the location on the highway being monitored, while the location of the camera might be quite far away (e.g. mounted on top of a building).
See also the Use Case description from the WebVMT Editor's draft
Variants
  • Two synchronized displays (for example, a phone and a headset) can offer greater insight and provide clearer guidance to the user by showing different views of the same location, e.g. a top-view map on the phone.
  • A VR (virtual-only) implementation may also be used, with a rendered scene replacing the real scene. This may be applicable to contexts such as a Smart City dashboard where sensor data needs to be viewed in context without having to actually visit the site.
  • The head-mounted semi-transparent display might be replaced in some contexts with a handheld display e.g. a phone or tablet. To be useful for AR however, such a device needs a back camera to simulate transparency and capture images of the real environment (optional for VR), and a way to determine its geolocation and orientation (geopose) relative to the environment.
  • The head-mounted display may use a camera rather than being physically transparent.
  • A microphone may be added for voice input, including voice commands. This avoids cluttering the view with controls.
  • A 3D camera (e.g. LIDAR) may be used to capture a view of the environment, which can be helpful to establish geopose and align annotations with real features of the environment.
  • A virtual guide for a particular geographic location, e.g. a historical site, which visualises past events and buildings in AR, or allows remote users to explore in VR.
  • A medical tool which allows a patient to describe their symptoms using AR, e.g. identify a painful area on their own body, which is also modelled as a 'map' to show internal features and display a treatment guide, including any WoT medical devices.
  • A virtual controller for a city engineer to visualize utilities, e.g. electrical cables or water pipes, and control them. For example, a maintenance engineer could switch off an individual street lamp in order to replace the bulb using an AR menu displayed on that WoT-enabled lamppost.
  • These mechanisms can also be used for video overlay in general. The technologies are related to the recording, playing, and distribution of video content when the data is stored. Playback of stored data and movements would be useful for simulation and debugging.
Security Considerations
  • If an AR system is compromised it could be used to guide a user into a dangerous situation while hiding that fact from them, e.g. encouraging them to step over a drop.
  • For the above reason the system should "fail gracefully" if there is any sign its integrity is compromised, and should implement mechanisms (e.g. signing) to detect tampering. Standards should be similar to those for other systems that can cause physical harm, e.g. automobiles.
  • For a "simulated" transparent head-mounted display using a camera, the system should have a fail-safe supporting an unfiltered view, which should be automatic even if the processor crashes.
  • For all systems the user should have a simple way (e.g. a single button push) of viewing "baseline reality".
Privacy Considerations
  • Systems that handle or display private data, e.g. medical applications, should respect the relevant regulations.
  • Private data should not be retained by the device or used for purposes other than those for which it was provided. This includes the location of personal devices. To display information from another person's personal device, permission needs to be explicitly granted by that person, and this permission should be limited in time and possibly in space.
Requirements
  • Geospatially aware discovery mechanisms that can discover features of interest close to the user.
  • Geospatial filters for discovery that include a pyramid-shaped region representing the field of view of the user (a geometric sketch of such a field-of-view test is given after this list). Note: a basic cylindrical, spherical, or rectangular filter region can be used instead and the irrelevant results then filtered out, but this is less efficient than the filter itself supporting field-of-view queries.
  • Geospatial data associated with the metadata for devices. Note that mobile devices may update their position more rapidly than a discovery service may be able to support. In this case the discovery service needs to take the velocity and last known position of the data source into account, compute a zone of uncertainty, and return the metadata for sources that might possibly be in the field of view. For sources such as these with dynamic positions, the AR system may also communicate with data sources directly to determine their most recent geolocation.
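
As a purely geometric illustration of the field-of-view filter mentioned above, the following TypeScript sketch tests whether a feature of interest falls inside a simple viewing cone defined by the user's position, view direction, half-angle, and range. It works in a local Cartesian frame and is not part of any WoT specification; a real discovery service would additionally need to map geographic coordinates into such a frame and account for the zone of uncertainty of moving sources.

interface Vec3 { x: number; y: number; z: number; }

// Returns true if `feature` lies within a viewing cone anchored at `eye`,
// pointing along the unit vector `dir`, with the given half-angle (radians)
// and maximum range (same length unit as the coordinates).
function inFieldOfView(eye: Vec3, dir: Vec3, halfAngle: number, range: number, feature: Vec3): boolean {
  const toFeature = { x: feature.x - eye.x, y: feature.y - eye.y, z: feature.z - eye.z };
  const dist = Math.hypot(toFeature.x, toFeature.y, toFeature.z);
  if (dist === 0) return true;      // the feature is at the user's own position
  if (dist > range) return false;   // beyond the range of interest
  // Angle between the view direction and the direction to the feature.
  const cos = (toFeature.x * dir.x + toFeature.y * dir.y + toFeature.z * dir.z) / dist;
  return Math.acos(Math.min(1, Math.max(-1, cos))) <= halfAngle;
}

// Example: a feature 10 m ahead and slightly to the right of a user looking along +x.
const visible = inFieldOfView(
  { x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 },
  Math.PI / 6, 50,
  { x: 10, y: 2, z: 0 });
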
Gaps
  • Geospatial queries for discovery.
  • Standardized encodings of geospatial metadata in TDs.
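
To illustrate the second gap, the following sketch shows one hypothetical way geospatial metadata could be attached to a TD through semantic annotation, here using the W3C Basic Geo vocabulary. No such encoding is standardized for WoT at the time of writing; the "ex:featureOfInterest" term is invented purely to show the distinction between the device location and the feature of interest described above.

// Hypothetical, non-normative sketch: geospatial metadata embedded in a TD
// via semantic annotation. "geo" maps to the W3C Basic Geo vocabulary;
// "ex:featureOfInterest" is an invented term used only for illustration.
const trafficCameraTd = {
  "@context": [
    "https://www.w3.org/2019/wot/td/v1",
    { geo: "http://www.w3.org/2003/01/geo/wgs84_pos#", ex: "http://example.com/wot-geo#" }
  ],
  title: "Traffic Camera 42",
  // Location of the device itself (e.g. mounted on a building).
  "geo:lat": 51.5081, "geo:long": -0.0761, "geo:alt": 35.0,
  // Location of the feature of interest being monitored (a point on the highway).
  "ex:featureOfInterest": { "geo:lat": 51.5074, "geo:long": -0.0754, "geo:alt": 4.0 },
  properties: {
    vehicleCount: { type: "integer", forms: [{ href: "https://camera.example.com/vehicleCount" }] }
  }
};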

3.10 Edge Computing

Submitter(s)
Michael McCool
Target Users
Note: User should be "Stakeholder"
  • device owners - may benefit from using edge computing for IoT orchestration and compute offload
  • device user - may benefit from reduced cost of devices that can use compute offload
  • cloud provider - may provide fallback for local edge compute services
  • service provider - may provide edge computing service
  • device manufacturer - may lower cost of device by depending on compute offload
  • gateway manufacturer - may provide edge computing host hardware
  • network operator - may provide edge computing nodes
  • directory service operator - provides means to discover edge computing nodes
Motivation
  • IoT devices are often designed to be inexpensive (so they can be used at scale), small (for ease of installation) and are often power-limited, for example needing to run off a battery. For all these reasons, they usually have severely limited on-board computational capabilities.
  • For applications that require significant computation and/or memory, for example computer vision, machine learning, or autonomous navigation, offloading work to another computer on the network may be advantageous.
  • Offloading to the cloud typically involves relatively long latencies and may also have privacy implications. Edge computing implies offloading to a more "local" compute node with lower latency and optionally under more direct control of the user (improving privacy). This can be important for control applications (e.g. in robotics), computer graphics (e.g. gaming) and for applications processing imagery (e.g. facial recognition).
  • An edge computer is also a convenient place to run persistent computations such as IoT orchestration rules that need to be "always on". Such an IoT orchestration system, in addition to needing to read from sensors and send commands to actuators over the network, may also invoke computationally-intensive services (e.g. image recognition). An example would be a security system that when a motion sensor is tripped, runs a person detection computation, and if a person is detected when and where they should not be, sounds an alarm. The motion sensor and alarm can be IoT devices while the person detection is a computationally-intensive service.
Expected Devices
  • IoT devices with Thing Descriptions for use in IoT orchestrations.
  • An edge computer providing one or more fixed or generic compute services.
  • A directory or other discovery mechanism that allows IoT devices and edge computers to advertise their availability.
Expected Data
  • Thing descriptions for IoT devices
  • Thing descriptions for compute services
  • Compute service configurations, e.g. container images, WASM code, scripts, ONNX files, etc.
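
Since (as noted in the Gaps below) there is not yet a standardized TD template for compute services, the following sketch only illustrates what a generic compute node's Thing Description might look like; the affordance name, input schema, and URLs are entirely hypothetical.

// Hypothetical, non-normative sketch of a TD for a generic edge compute node.
// The "deploy" action, its input schema, and all URLs are invented for illustration.
const edgeNodeTd = {
  "@context": "https://www.w3.org/2019/wot/td/v1",
  title: "Edge Compute Node",
  securityDefinitions: { bearer_sc: { scheme: "bearer" } },
  security: ["bearer_sc"],
  actions: {
    deploy: {
      description: "Load a computation (container image, WASM module, or script) onto the node",
      input: {
        type: "object",
        properties: {
          artifact: { type: "string", format: "uri" },                  // e.g. image or module URL
          kind: { type: "string", enum: ["container", "wasm", "script"] }
        }
      },
      output: { type: "string" },                                       // e.g. an identifier of the workload
      forms: [{ href: "https://edge.example.com/actions/deploy" }]
    }
  }
};
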
Affected WoT deliverables and/or work items
  • WoT Discovery - needs to be designed to support services, not just physical devices.
  • WoT Architecture - concept of Thing needs to be expanded to include computational services.
  • WoT Scripting API - essential for programming IoT orchestrations.
Description
The WoT architecture can provide an interesting approach to edge computing:
  • An IoT orchestration running in an edge computer can consume WoT Thing Descriptions in order to determine how to connect to IoT devices.
  • Fixed services (e.g. person detection) and generic compute nodes (a service that would allow an arbitrary computation to be loaded onto it) can also advertise themselves using Thing Descriptions, allowing an IoT orchestrator to interface to devices and services in a uniform way. This also facilitates support for "virtual devices", e.g. using computer vision, audio recognition, or other forms of analytics in place of a physical sensor.
  • WoT discovery can be used to find appropriate compute services for IoT devices to offload computationally demanding tasks to, assuming those services describe themselves with TDs and advertise their availability via WoT discovery mechanisms.
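
As an illustration of such an orchestration, the following TypeScript sketch uses the WoT Scripting API roughly as drafted at the time of writing (method names are approximate); the Thing Description URLs and affordance names are hypothetical. A motion sensor event triggers a person-detection compute service, and an alarm is sounded only if a person is detected.

// Sketch of an edge IoT orchestration using the WoT Scripting API.
// Assumes a `WoT` runtime object is provided by the environment.
async function fetchTD(url: string): Promise<any> {
  return (await fetch(url)).json();            // retrieve a Thing Description
}

async function runSecurityOrchestration(WoT: any): Promise<void> {
  const motionSensor = await WoT.consume(await fetchTD("http://sensor.local/td"));
  const personDetector = await WoT.consume(await fetchTD("http://edge.local/person-detection/td"));
  const alarm = await WoT.consume(await fetchTD("http://alarm.local/td"));

  // When the motion sensor fires, offload person detection to the edge
  // compute service and sound the alarm only if a person is detected.
  await motionSensor.subscribeEvent("motionDetected", async () => {
    const result = await personDetector.invokeAction("detectPerson", {
      camera: "http://camera.local/snapshot"   // hypothetical image source
    });
    const detection = await result.value();    // parse the action output
    if (detection.personPresent) {
      await alarm.invokeAction("sound");
    }
  });
}
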
Variants
  • An edge computer can provide facilities either for general-purpose computation (e.g. loading and running a container image, script, etc.) or special-purpose fixed computations (e.g. object detection and tracking, person detection, etc.). General-purpose computation is more powerful but also is more difficult to make fully secure.
  • An edge computation can be stateless (function as a service, FaaS) or stateful. It is easier to migrate stateless computations transparently to new compute hardware but state then needs to be provided by a separate service, e.g. a database, and it is harder to program.
  • Edge computers may provide just IoT orchestration without significant computational ability, just compute offload, or both. Many more use cases can be unlocked by providing both.
  • Persistent computation can be provided in various ways. Rather than actually running continuously, an edge computation might be event-driven, for example.
  • Under discussion are various ways to integrate edge computation with the web execution environment, for example by extending web and service workers.
Security Considerations
Edge compute services supporting the specification of generic computation have many security challenges. In addition to the challenges common to cloud computing, e.g. protecting "tenants" from seeing each other's activity, additional challenges arise if the edge computer is offering computation as an ad-hoc service. For example, there needs to be a way to protect the edge computer from denial-of-service attacks. An edge computer may also need to be protected from physical attacks. There is also the possibility that an edge computer might be physically compromised, so approaches such as isolated containers (protecting the contents from the edge computer's hypervisor) and/or validated boot might be necessary in some circumstances.
Privacy Considerations
Edge computers can theoretically improve privacy since sensitive data can be processed "locally" without having to be transmitted to a remote site. This is, however, tempered by the edge computer's greater vulnerability to physical attacks. To avoid offloading work to a malicious edge computer, some means of evaluating the trustworthiness of edge computers is needed.
Gaps
  • Explicit support for WoT Things that are services.
  • Sufficient abstraction capability (e.g. "interfaces") to support virtual devices.
  • A mechanism to package and install edge computations that can use the WoT scripting API for orchestration.
  • A general means to manage compute nodes to provide offload targets (e.g. a standardized TD template for compute services).
Existing Standards

4. Requirements

4.1 Functional Requirements

This section defines the properties required in an abstract Web of Things (WoT) architecture.

4.1.1 Common Principles

  • WoT architecture should enable mutual interworking of different eco-systems using web technology.
  • WoT architecture should be based on the web architecture using RESTful APIs.
  • WoT architecture should allow the use of multiple payload formats that are commonly used on the web.
  • WoT architecture must enable different device architectures and must not force a client or server implementation of system components.
  • Flexibility

    There are a wide variety of physical device configurations for WoT implementations. The WoT abstract architecture should be able to be mapped to and cover all of the variations.

  • Compatibility

    There are already many existing IoT solutions and ongoing IoT standardization activities in many business fields. The WoT should provide a bridge between these existing and developing IoT solutions and Web technology based on WoT concepts. The WoT should be upwards compatible with existing IoT solutions and current standards.

  • Scalability

    WoT must be able to scale for IoT solutions that incorporate thousands to millions of devices. These devices may offer the same capabilities even though they are created by different manufacturers.

  • Interoperability

    WoT must provide interoperability across device and cloud manufacturers. It must be possible to take a WoT-enabled device and connect it with a cloud service from a different manufacturer out of the box.

4.1.2 Thing Functionalities

  • WoT architecture should allow things to have functionalities such as
    • reading a thing's status information.
    • updating a thing's status information, which might cause actuation.
    • subscribing to, receiving and unsubscribing from notifications of changes of the thing's status information.
    • invoking functions with input and output parameters which would cause certain actuation or calculation.
    • subscribing to, receiving and unsubscribing from event notifications that are more general than just reports of state transitions.
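
These functionalities correspond to the Property, Action, and Event interaction affordances of a Thing Description. The following non-normative sketch (a TypeScript object literal with hypothetical URLs) shows one affordance of each kind:

// Non-normative sketch: a TD with one property, one action, and one event.
const thermostatTd = {
  "@context": "https://www.w3.org/2019/wot/td/v1",
  title: "Example Thermostat",
  securityDefinitions: { nosec_sc: { scheme: "nosec" } },
  security: ["nosec_sc"],
  properties: {
    // read, update and observe status information
    temperature: {
      type: "number",
      observable: true,
      forms: [{ href: "https://thermostat.example.com/properties/temperature" }]
    }
  },
  actions: {
    // invoke a function with input parameters that causes actuation
    setTarget: {
      input: { type: "number" },
      forms: [{ href: "https://thermostat.example.com/actions/setTarget" }]
    }
  },
  events: {
    // subscribe to and receive event notifications
    overheated: {
      data: { type: "string" },
      forms: [{ href: "https://thermostat.example.com/events/overheated" }]
    }
  }
};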

4.1.3 Search and Discovery

  • WoT architecture should allow clients to learn a thing's attributes, functionalities and their access points prior to accessing the thing itself.
  • WoT architecture should allow clients to search for things by their attributes and functionalities.
  • WoT architecture should allow semantic search for things providing required functionalities based on a unified vocabulary, regardless of the naming of the functionalities.

4.1.4 Description Mechanism

  • WoT architecture should support a common description mechanism which enables describing things and their functions.
  • Such descriptions should be not only human-readable, but also machine-readable.
  • Such descriptions should allow semantic annotation of their structure and described contents.
  • Such descriptions should be exchangeable using multiple formats that are commonly used on the web.

4.1.5 Description of Attributes

  • WoT architecture should allow describing a thing's attributes such as
    • name
    • explanation
    • version of spec, format and description itself
    • links to other related things and metadata information
  • Such descriptions should support internationalization.

4.1.6 Description of Functionalities

4.1.7 Network

  • WoT architecture should support multiple web protocols which are commonly used.
  • Such protocols include
    1. protocols commonly used in the internet and
    2. protocols commonly used in the local area network
  • WoT architecture should allow using multiple web protocols to access the same functionality.
  • WoT architecture should allow using a combination of multiple protocols to access the functionalities of the same thing (e.g. HTTP and WebSocket).

4.1.8 Deployment

  • WoT architecture should support a wide variety of thing capabilities such as edge devices with resource restrictions and virtual things on the cloud, based on the same model.
  • WoT architecture should support multiple levels of thing hierarchy with intermediate entities such as gateways and proxies.
  • WoT architecture should support accessing things in a local network from outside that network (from the internet or another local network), taking network address translation into account.

4.1.9 Application

  • WoT architecture should allow describing applications for a wide variety of things such as edge device, gateway, cloud and UI/UX device, using web standard technology based on the same model.

4.1.10 Legacy Adoption

  • WoT architecture should allow mapping of legacy IP and non-IP protocols to web protocols, supporting various topologies, where such legacy protocols are terminated and translated.
  • WoT architecture should allow transparent use of existing IP protocols without translation, which follow RESTful architecture.
  • WoT architecture must not enforce client or server roles on devices and services. An IoT device can be either a client or a server, or both, depending on the system architecture; the same is true of edge and cloud services.

4.2 Technical Requirements

The W3C WoT Thing Architecture [wot-architecture] defines the abstract architecture of Web of Things and illustrates it with various system topologies. This section describes technical requirements derived from the abstract architecture.

4.2.1 Components in the Web of Things and the Web of Things Architecture

The use cases help to identify basic components such as devices, applications that access and control those devices, and proxies (i.e. gateways and edge devices) that are located between them. An additional component useful in some use cases is the directory, which assists with discovery.

Those components are connected to the internet or to field networks in offices, factories or other facilities. Note that all components involved may be connected to a single network in some cases; in general, however, components can be deployed across multiple networks.

4.2.2 Devices

Access to devices is made using a description of their functions and interfaces. This description is called a Thing Description (TD). A Thing Description includes general metadata about the device, information models representing its functions, transport protocol descriptions for operating on those information models, and security information.

General metadata contains device identifiers (URI), device information such as serial number, production date, location and other human readable information.

Information models define device attributes and represent a device's internal settings, control functionality and notification functionality. Devices that have the same functionality have the same information model regardless of the transport protocols used.

Because many systems based on the Web of Things architecture cross system domains, the vocabularies and metadata (e.g. ontologies) used in information models should be commonly understood by the involved parties. In addition to REST transports, PubSub transports are also supported.

Security information includes descriptions about authentication, authorization and secure communications. Devices are required to put TDs either inside them or at locations external to the devices, and to make TDs accessible so that other components can find and access them.

4.2.3 Applications

Applications need to be able to generate and use network and program interfaces based on metadata (descriptions).

Applications have to be able to obtain these descriptions through the network; they therefore need to be able to conduct search operations and acquire the necessary descriptions over the network.

4.2.4 Digital Twins

Digital Twins need to generate program interfaces internally based on metadata (descriptions), and to represent virtual devices by using those program interfaces. A twin has to produce a description for the virtual device and make it externally available.

Identifiers of virtual devices need to be newly assigned and are therefore different from those of the original devices. This ensures that virtual devices and the original devices are clearly recognized as separate entities. Transport and security mechanisms and settings of the virtual devices can differ from those of the original devices if necessary. Virtual devices are required to have descriptions provided either directly by the twin or made available at external locations. In either case the descriptions must be made available so that other components can find and use the devices associated with them.

4.2.5 Discovery

For TDs of devices and virtual devices to be accessible from devices, applications and twins, there needs to be a common way to share TDs. Directories can serve this requirement by providing functionality that allows devices and twins to register their descriptions automatically, or users to register them manually.

Descriptions of the devices and virtual devices need to be searchable by external entities. Directories have to be able to process search operations with search keys such as keywords from the general description in the device description or information models.
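
For illustration only, the following TypeScript sketch shows how a component might register a TD with a directory and later search it by keyword. The directory base URL and query interface are hypothetical; the concrete API is defined by the WoT Discovery specification, which was still under development at the time of writing.

// Hypothetical directory interaction: register a TD, then search by keyword.
const DIRECTORY = "https://directory.example.com";

async function registerTD(td: object): Promise<void> {
  await fetch(`${DIRECTORY}/things`, {
    method: "POST",
    headers: { "Content-Type": "application/td+json" },   // registered TD media type
    body: JSON.stringify(td)
  });
}

async function searchByKeyword(keyword: string): Promise<object[]> {
  const response = await fetch(`${DIRECTORY}/search?keyword=${encodeURIComponent(keyword)}`);
  return response.json();                                  // list of matching Thing Descriptions
}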

4.2.6 Security

Security information related to devices and virtual devices needs to be described in device descriptions. This includes information for authentication/authorization and payload encryption.

WoT architecture should support multiple security mechanisms commonly used on the web, such as Basic, Digest, Bearer and OAuth 2.0.

4.2.7 Accessibility

The Web of Things primarily targets machine-to-machine communication. The humans involved are usually developers that integrate Things into applications. End-users will be faced with the front-ends of the applications or the physical user interfaces provided by devices themselves. Both are out of scope of the W3C WoT specifications. Given the focus on IoT instead of users, accessibility is not a direct requirement, and hence is not addressed within this specification.

There is, however, an interesting aspect on accessibility: Fulfilling the requirements above enables machines to understand the network-facing API of devices. This can be utilized by accessibility tools to provide user interfaces of different modality, thereby removing barriers to using physical devices and IoT-related applications.

4.3 Acknowledgments

Special thanks to all authors of use case descriptions (in alphabetical order) for their contributions to this document:

Many thanks to the W3C staff and all other active Participants of the W3C Web of Things Interest Group (WoT IG) and Working Group (WoT WG) for their support, technical input and suggestions that led to improvements to this document.

Special thanks to Kazuyuki Ashimura from the W3C for the continuous help and support of the work of the WoT Use Cases Task Force.


A. References

A.1 Informative references

[BACnet]
BACnet. ASHRAE. URL: http://www.bacnet.org
[Basic Geo Vocabulary]
W3C Semantic Web Interest Group. W3C. URL: https://www.w3.org/2003/01/geo/
[CoAP]
The Constrained Application Protocol (CoAP). Z. Shelby; K. Hartke; C. Bormann. IETF. June 2014. Published. URL: https://tools.ietf.org/html/rfc7252
[ISO-6709]
ISO-6709:2008 : Standard representation of geographic point location by coordinates. ISO. 2008-07. Published. URL: https://www.iso.org/standard/39242.html
[ISO19111]
ISO19111. ISO. Jan 2019. Published. URL: https://www.iso.org/standard/74039.html
[JSON-SCHEMA]
JSON Schema Validation: A Vocabulary for Structural Validation of JSON. Austin Wright; Henry Andrews; Geraint Luff. IETF. 19 March 2018. Internet-Draft. URL: https://tools.ietf.org/html/draft-handrews-json-schema-validation-01
[KNX]
KNX. KNX. URL: https://www.knx.org/knx-en/for-professionals/index.php
[LWM2M]
Lightweight Machine to Machine Technical Specification: Core. OMA SpecWorks. Aug 2018. URL: http://openmobilealliance.org/release/LightweightM2M/V1_1-20180710-A/OMA-TS-LightweightM2M_Core-V1_1-20180710-A.pdf
[MDIRA]
MDIRA. URL: https://secwww.jhuapl.edu/mdira/documents
[MMI UC1.1]
MMI UC1.1. Emily Candell, Dave Raggett. W3C. 2002. published. URL: https://www.w3.org/TR/mmi-use-cases/
[MMI UC1.2]
MMI UC1.2. Emily Candell, Dave Raggett. W3C. 2002. published. URL: https://www.w3.org/TR/mmi-use-cases/
[MMI UC2.1]
MMI UC2.1. Emily Candell, Dave Raggett. W3C. 2002. published. URL: https://www.w3.org/TR/mmi-use-cases/
[MMI UC3.1]
MMI UC3.1. Emily Candell, Dave Raggett. W3C. 2002. published. URL: https://www.w3.org/TR/mmi-use-cases/
[MMI UC3.2]
MMI UC3.2. Emily Candell, Dave Raggett. W3C. 2002. published. URL: https://www.w3.org/TR/mmi-use-cases/
[MMI UC5.1]
MMI UC5.1. Emily Candell, Dave Raggett. W3C. 2002. published. URL: https://www.w3.org/TR/mmi-use-cases/
[MMI UC5.2]
MMI UC5.2. Emily Candell, Dave Raggett. W3C. 2002. published. URL: https://www.w3.org/TR/mmi-use-cases/
[Modbus]
Modbus. Modbus Organization. URL: https://modbus.org
[MQTT]
MQTT Version 3.1.1 Plus Errata 01. Andrew Banks; Rahul Gupta. OASIS Standard. December 2015. Published. URL: http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/mqtt-v3.1.1.html
[NMEA]
National Marine Electronics Association. URL: https://www.nmea.org
[OCF]
OCF Core Specification. Open Connectivity Foundation. April 2019. URL: https://openconnectivity.org/developer/specifications
[OGC Sensor Things]
OGC Sensor Things API. Open Geospatial Consortium. URL: https://www.ogc.org/standards/sensorthings
[OneM2M]
OneM2M. ETSI. URL: https://www.onem2m.org
[OPC UA]
OPC Unified Architecture. OPC. URL: https://opcfoundation.org/about/opc-technologies/opc-ua/
[Open Geospatial Consortium]
Open Geospatial Consortium. URL: http://docs.opengeospatial.org/as/18-005r4/18-005r4.html
[OpenICE]
OpenICE. URL: https://www.openice.info
[SSN]
Semantic Sensor Network Ontology. Armin Haller; Krzysztof Janowicz; Simon Cox; Danh Le Phuoc; Kerry Taylor; Maxime Lefrançois. W3C. 19 Oct 2017. Published. URL: https://www.w3.org/TR/vocab-ssn/
[Timestamps]
Timestamps. Ilya Grigorik. W3C. 06 Oct 2020. Draft. URL: https://w3c.github.io/hr-time/#dom-domhighrestimestamp
[W3C Geolocalization API]
Geolocation API Specification 2nd Edition. Andrei Popescu. W3C. 8 Nov 2016. Published. URL: https://www.w3.org/TR/geolocation-API/
[WGS84]
WGS84. URL: https://en.wikipedia.org/wiki/World_Geodetic_System
[wot-architecture]
Web of Things (WoT) Architecture. Matthias Kovatsch; Ryuichi Matsukura; Michael Lagally; Toru Kawaguchi; Kunihiko Toumura; Kazuo Kajimoto. W3C. 9 April 2020. W3C Recommendation. URL: https://www.w3.org/TR/wot-architecture/
[wot-thing-description]
Web of Things (WoT) Thing Description. Sebastian Käbisch; Takuki Kamiya; Michael McCool; Victor Charpenay; Matthias Kovatsch. W3C. 9 April 2020. W3C Recommendation. URL: https://www.w3.org/TR/wot-thing-description/