The Web of Things is applicable to multiple IoT domains, including Smart Home, Industrial, Smart City, Retail, and Health applications, where usage of the W3C WoT standards can simplify the development of IoT systems that combine devices from multiple vendors and ecosystems. During the last charter period of the WoT Working Group several specifications were developed to address requirements for these domains.
This Use Cases and Requirements Document collects new IoT use cases from multiple domains, contributed by various stakeholders. These serve as a baseline for identifying requirements for the standardization work in the W3C WoT groups.
The World Wide Web Consortium (W3C) published the Web of Things (WoT) Architecture and the Web of Things (WoT) Thing Description (TD) as official W3C Recommendations in May 2020. These specifications enable easy integration across Internet of Things platforms and applications.
The W3C WoT Architecture [[wot-architecture]] defines an abstract architecture, while the WoT Thing Description [[wot-thing-description]] defines a format to describe a broad spectrum of very different devices, which may be connected over various protocols.
During the inception phase of the WoT 1.0 specifications in 2017-2018, the WoT IG collected use cases and requirements to enable interoperability of Internet of Things (IoT) services on a worldwide basis. The released specifications address the use cases and requirements for the first version of the WoT specifications, which are documented at https://w3c.github.io/wot/ucr-doc/
The present document gathers and describes new use cases and requirements for future standardization work in the WoT standard.
This document contains chapters describing the use cases contributed by multiple authors, as well as functional and technical requirements on the Web of Things standards. Additionally, it contains a summary of the liaisons where active collaboration is taking place at the time of writing. Since this document is a WG Note, additional use cases will be added in future revisions of this document.
The collection of use cases can be separated into two categories:
The following stakeholders and actors were identified when the use cases were collected and the requirements were derived. Note that these stakeholders and roles may overlap in some use cases.
Sensors:
Actuators:
Additional devices:
Dairy farming requires significant labor for feeding, milking, breeding, and manure disposal, as well as for controlling and managing the environmental conditions inside and outside the livestock barn. In particular, milking accounts for more than 40% of the total working time spent handling a cow.
Recently, countries with advanced dairy industries have introduced IoT-based automatic milking systems using various IoT devices and equipment to reduce the labor required for milking. The automatic milking system (AMS), with IoT devices and equipment such as sensors, high-performance cameras, laser equipment, and robot arms, can perform the entire milking process, which includes identification of cows entering the milking box, washing udders, milking, collection, sterilization, storage, and milk composition analysis. The AMS has the advantage of solving the labor shortage problem on the dairy farm by enabling labor to be allocated to tasks other than milking, unlike conventional methods such as pipeline, herringbone, and tandem machines. In addition, the AMS can improve the productivity and quality of milk while reducing the incidence rate of disease in cows.
The AMS generates the following data. To establish a comprehensive production and operation management strategy for dairy farms, this data needs to be managed in close relationship with data collected for other purposes, such as feeding, parturition, disease control, and growth control.
When a cow enters a milking box, an object identifier installed in the milking room identifies the cow's ID from the RFID tag, QR code, or bar code attached to the cow. Through this, milking can be performed more systematically based on historical data managed by the AMS, such as the number of milkings or the milk yield.
Then, a 3D camera, laser equipment, and sensors accurately identify the position of the udders, and the robot arm quickly attaches the milking cups to the udders to perform milking. Before and after milking, cleaning and disinfection should be performed to remove contaminants and bacteria. A sensor installed in the milking cup measures the elapsed time and milk yield during milking.
In addition, the components of the milk, such as fat, protein, and lactose content, are analyzed, and the analysis results are transmitted to the AMS in order to manage the quality of the milk and the disease and health status of the cow. After milking, the milk is delivered to a milk tank with cooling capability to maintain its freshness. The AMS collects the data generated during the entire milking process and analyzes it to establish a milk production strategy for the dairy farm. The farmer or manager of the dairy farm can monitor the milking process through a web page or a mobile app.
An automated system should be designed to minimize human intervention; however, it is desirable to retain the capability to directly control the AMS in order to respond to emergency situations.
The devices and equipment, such as RFID readers/tags, milking cups, robot arms, milk component analyzers, and sensors, are connected through wired or wireless networks to a gateway, which acts as a controller installed on the dairy farm. The gateway, which controls various actuators and transfers the data, is connected to the cloud system through the Internet. Thus, all devices and equipment on the dairy farm can be accessed and controlled through the cloud. The cloud uses technologies such as AI and big data to analyze the data transferred from the AMS. The analysis results can be shared and distributed to all stakeholders and can serve as basic information for the creation of various new services enhancing the productivity and convenience of the dairy farm.
This use case does not impose any specific security requirements; any well-defined security management mechanism can be applied.
In addition to farmers, various stakeholders such as farm workers, service providers, manufacturers, consumers of agricultural products, third-party companies and government departments are also involved with the operation of a dairy farm. The data with various types and characteristics generated during the operation of the AMS can be shared and distributed to one or more stakeholders.
Consequently, the right to access the data must be systematically managed according to the type, characteristics, and purpose of data utilization. Through this, it is possible to protect the experience, know-how and unique agricultural knowledge or techniques of a farmer, and to secure the dairy farm’s competitiveness.
A wired or wireless communication link is required to exchange the data generated by the operation of the AMS and the commands for controlling IoT devices and equipment. To avoid interfering with the free movement of cows and with other agricultural work, using a wireless communication link is recommended.
The data for the AMS should be delivered and stored in a common format regardless of device type and manufacturer, and should be expressed in a standardized way for AI-based analysis and processing.
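As a purely illustrative sketch of such a common format, a single milking event could be recorded as follows; all field names and values are hypothetical and would need to be agreed upon in a standardized vocabulary:

```json
{
  "deviceId": "milking-robot-07",
  "cowId": "KR-2024-0153",
  "observedAt": "2024-05-01T06:30:00Z",
  "milkYieldLitres": 12.4,
  "milkingDurationSeconds": 420,
  "composition": {
    "fatPercent": 3.9,
    "proteinPercent": 3.3,
    "lactosePercent": 4.8
  }
}
```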
In general, a gateway acting as a controller installed on a dairy farm collects data and transmits it to an external cloud connected through a network. The gateway also delivers the control commands received from the cloud to the various actuators through the network. However, if data loss or delay occurs due to a bottleneck, a disconnection between the gateway and the cloud, or excessive internal processing time in the cloud, it may be difficult to perform efficient and reliable milking operations. In order to solve these problems, it is recommended to apply edge computing technologies to maintain the essential functionalities for performing agricultural work, including milking.
IoT and AI-based animal health management technologies are being introduced to overcome the difficulty of separating sick livestock from other livestock in narrow spaces where many livestock herds live. This helps to safely maintain livestock by monitoring their behavior and health status and taking quick and appropriate responses if disease is expected or occurs.
IoT and AI-based animal health management technology plays an important role in monitoring livestock health, preventing diseases, and detecting them early. With this technology, the health status of livestock can be monitored in real time by collecting and analyzing biological signals, such as body temperature, heart rate, and respiration, and comparing them against normal ranges. If abnormal data is detected, the system sends an alert to the farmer so that appropriate measures can be taken.
Additionally, AI technology can be used to predict livestock's health status. By analyzing the relationship between livestock's health status and disease occurrence using AI models, predictive results on health status can be provided to take preventive measures.
This IoT and AI-based animal health management technology is very useful for maintaining livestock's health, improving productivity, and minimizing disease occurrence by monitoring their health status, taking preventive measures, and providing early responses.
Overall, livestock health management involves a complex system of data collection, transmission, processing, analysis, and decision-making. By using this, it is possible to improve livestock health, prevent the spread of disease, and increase productivity. Livestock health management can be described from the perspective of data flow as follows.
(Data collection) Data for livestock health management are collected from the livestock and the livestock barn being monitored. This data can include various types of physiological data, such as body temperature, heart rate, respiration rate, and fecal output, as well as environmental data, such as temperature, humidity, and air quality.
(Data Processing) The data is then preprocessed on edge devices before being sent to the cloud server. The preprocessing step can involve cleaning the data, compressing it, and performing basic analysis to reduce the amount of data that needs to be transmitted.
(Data transmission) Collected data is transmitted to a cloud server for analysis. This is typically done using wireless communication technologies such as Bluetooth, Wi-Fi, RFID, or cellular networks.
(Data Analysis) The cloud server uses machine learning algorithms and AI models to analyze the data collected from the sensors and wearable devices. The analysis can include identifying patterns, predicting future health risks, and detecting early signs of illness or disease.
(Decision Making) Based on the data analysis results, the livestock health management service provider or veterinarian can make informed decisions about the care and treatment of the livestock. This can include taking preventative measures to reduce the risk of disease or illness, administering medication, or isolating sick livestock from the rest of the herd. Also, the livestock health management service provider or veterinarian establishes a livestock health management plan that reflects the analysis results.
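The five data-flow steps above can be sketched end to end as a minimal pipeline. This is only an illustration, not a reference implementation: the sensor values, field names, and the simple fever-threshold rule standing in for an AI model are all assumptions.

```python
import json
import statistics

def collect() -> list[dict]:
    # (Data collection) raw physiological readings from a wearable sensor
    return [{"cow_id": "cow-17", "temp_c": t} for t in (38.6, 38.7, 41.2, 38.5)]

def preprocess(readings: list[dict]) -> dict:
    # (Data processing) the edge device compresses readings into summary stats
    temps = [r["temp_c"] for r in readings]
    return {"cow_id": readings[0]["cow_id"],
            "mean_temp": round(statistics.mean(temps), 2),
            "max_temp": max(temps)}

def transmit(summary: dict) -> str:
    # (Data transmission) serialize the summary for the uplink to the cloud
    return json.dumps(summary)

def analyze(payload: str, fever_threshold: float = 39.5) -> dict:
    # (Data analysis) a cloud-side rule standing in for an AI model
    data = json.loads(payload)
    data["alert"] = data["max_temp"] >= fever_threshold
    return data

def decide(result: dict) -> str:
    # (Decision making) advise the farmer or veterinarian
    if result["alert"]:
        return "isolate and examine " + result["cow_id"]
    return "no action needed"

print(decide(analyze(transmit(preprocess(collect())))))
```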
Smart agriculture in open fields, which cover a considerably large area, requires various types of agricultural machinery, such as tractors, combines, weeders, and pesticide sprayers, unlike greenhouses with their limited space. However, since the cost of agricultural machinery is generally high, it is important to operate and manage the machinery efficiently to save the costs and labor required for operating it.
To manage agricultural machinery efficiently, farmers first need to establish an agricultural works plan that reflects the types of agricultural work required for each growth stage of the crops, the estimated time required, and the type and quantity of machinery needed for each task. By implementing the established agricultural works plan, farmers can use machinery to complete the required agricultural work in the shortest time possible. In addition, IoT-connected machinery can share the real-time progress status of the agricultural work and, if necessary, additional idle machinery can be added to shorten the time required.
Various types of sensor devices installed on agricultural machinery collect data related to the environmental conditions of the field and the growth status of the crops. The collected data is used to establish optimal production management strategies and to optimize (update) the agricultural works plan, improving the productivity (convenience, operating cost) of the farm through AI-based analysis. Farmers can monitor the operational status of the agricultural machinery and the progress of the agricultural work anytime, anywhere, through web or mobile devices, and can also transmit appropriate instructions in case of changes in the farming plan or equipment failures.
Efficient management of agricultural machinery is achieved through systematic management of various data, such as optimal agricultural works plans, machinery operation status, agricultural works execution results, and history of malfunctions or damages. This data can be analyzed based on AI and big data technology to establish the optimal production management strategy. Through this series of procedures, farm productivity can be improved.
The farmer or agricultural machinery management service provider establishes an agricultural works plan based on data such as the location of the farmland, the types of agricultural work required, the time required to perform them, the types of agricultural machinery required and their availability, and the history of past executions. The establishment of the agricultural works plan is crucial because it serves as the basis for operating and managing agricultural machinery efficiently at the lowest possible cost. To establish an agricultural works plan, the farmer's requirements must be reflected, and consulting results from an agricultural expert group or expert system should be incorporated as necessary. The system may also take into account factors such as local weather conditions and forecasts, including historical conditions such as accumulated precipitation. Management of machinery can also take into account predicted events, such as the possible need for maintenance or repairs.
Based on the established agricultural works plan, the necessary agricultural machinery is deployed and executes the corresponding agricultural works. During the process of agricultural works, the agricultural machinery collects and reports data on its operating status, occurrence of malfunctions or damages, agricultural works execution status and its results, as well as the status of the farmland and crops-growth to the farm operation system. The farm operation system can monitor the agricultural works progress in real-time, and can modify the agricultural works plan to immediately deploy the machinery that is not yet deployed or that can finish the assigned agricultural work soon, thus improving the operational efficiency of the agricultural machinery for the farm.
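The replanning step described above, deploying the machine that is idle or will finish its assigned work soonest, can be sketched with a simple selection rule. The machine names and the rule itself are illustrative assumptions, not part of any specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Machine:
    name: str
    busy_until: float = 0.0  # hours from now; 0.0 means the machine is idle

def pick_machine(machines: list[Machine], now: float = 0.0) -> Optional[Machine]:
    """Choose the machine that can start a newly required task soonest:
    an idle machine if one exists, otherwise the one finishing first."""
    if not machines:
        return None
    return min(machines, key=lambda m: max(m.busy_until - now, 0.0))

fleet = [Machine("tractor-1", busy_until=2.5),
         Machine("tractor-2", busy_until=0.0),
         Machine("sprayer-1", busy_until=1.0)]
print(pick_machine(fleet).name)  # tractor-2 (idle, so it can start immediately)
```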
The farm operation system analyzes the collected data comprehensively and utilizes it to update existing agricultural works plan and optimize production management strategies for the improvement of farm productivity. The collected and analyzed data can be shared with stakeholders, such as service providers, agricultural machinery manufacturers, and maintenance companies. Based on the shared data, service providers can improve the quality of agricultural machinery management services, while agricultural machinery manufacturers and maintenance companies can utilize the data to produce and maintain agricultural machinery that is optimized for the agricultural works demanded by farmers. Additionally, farmers can monitor the progress status and results of agricultural works through web or mobile devices.
A Smart City managing mobile devices and sensors, including passively mobile sensor packs, packages, vehicles, and autonomous robots, where their location needs to be determined dynamically.
Smart Cities need to track a large number of mobile devices and sensors. Location information may be integrated with a logistics or fleet management system. A reusable geolocation module is needed with a common network interface to include in these various applications. For outdoor applications, GPS could be used, but indoors other geolocation technologies might be used, such as WiFi triangulation or vision-based navigation (SLAM). Therefore the geolocation information should be technology-agnostic.
NOTE: we prefer the term "geolocation", even indoors, over "localization" to avoid confusion with language localization.
One of the following:
Note: the system should be capable of notifying consumers of changes in location. This may be used to implement geofencing by some other system. This may require additional parameters, such as the maximum distance that the device may be moved before a notification is sent, or the maximum amount of time between updates. Notifications may be sent by a variety of means, some of which may not be traditional push mechanisms (for example, email might be used). For geofencing applications, it is not necessary that the device be aware of the fence boundaries; these can be managed by a separate system.
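A minimal sketch of such a notification decision is shown below, assuming WGS84 coordinates and the two hypothetical parameters discussed above (maximum distance moved, maximum time between updates); the thresholds are arbitrary examples:

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def should_notify(last: dict, current: dict,
                  max_distance_m: float = 50.0,
                  max_interval_s: float = 600.0) -> bool:
    """Notify consumers when the device moved farther than max_distance_m
    since the last report, or when max_interval_s has elapsed."""
    moved = haversine_m(last["lat"], last["lon"], current["lat"], current["lon"])
    elapsed = current["t"] - last["t"]
    return moved > max_distance_m or elapsed > max_interval_s
```

Note that the device itself need not know any fence boundaries; a separate geofencing system can subscribe to these notifications.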
Smart Cities need to observe the physical locations of a large number of mobile devices in use in the context of a Fleet or Logistics Management System, or to place sensor data on a map in a Dashboard application. These systems may also include geofencing notifications and mapping (visual tracking) capabilities.
High-resolution timestamps can be used in conjunction with cache manipulation to access protected regions of memory, as with the SPECTRE exploit. Certain geolocation APIs and technologies can return high-resolution timestamps which can be a potential problem. Eventually these issues will be addressed in cache architecture but in the meantime a workaround is to artificially limit the resolution of timestamps.
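The workaround of artificially limiting timestamp resolution can be sketched in a few lines; the 100 ms grid used here is an arbitrary example value, and the appropriate resolution depends on the threat model:

```python
def coarsen_timestamp(t_ms: float, resolution_ms: float = 100.0) -> float:
    """Clamp a high-resolution timestamp (milliseconds) onto a coarser
    grid, a common mitigation against timing side channels such as Spectre."""
    return (t_ms // resolution_ms) * resolution_ms

print(coarsen_timestamp(1234.567))  # 1200.0
```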
Location is generally considered private information when it is used with a device that may be associated with a specific person, such as a phone or vehicle, as it can be used to track that person and infer their activities or who they associate with (if multiple people are being tracked at once). Therefore APIs to access geographic location in sensitive contexts are often restricted, and access is allowed only after confirming permission from the user.
There is no single standardized semantic vocabulary for representing location data. Location data can be point data, a path, an area, or a volumetric object. Location information can be expressed using multiple standards, but the reader of location data in a TD, or in data returned by an IoT device, must be able to interpret the location information unambiguously.
There are both dynamic (data returned by a mobile sensor) and static (fixed installation location) applications for geolocation data. For dynamic location data, some recommended vocabulary to annotate data schemas would be useful. For static location data, a standard format for metadata to be included in a TD itself would be useful.
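As a purely illustrative sketch (not a normative proposal), a dynamic location property in a TD could be annotated with an existing vocabulary such as the W3C Basic Geo (WGS84) vocabulary. The device URL and property names below are assumptions, and security metadata is omitted for brevity:

```json
{
  "@context": [
    "https://www.w3.org/2019/wot/td/v1",
    { "geo": "http://www.w3.org/2003/01/geo/wgs84_pos#" }
  ],
  "title": "AssetTracker",
  "properties": {
    "location": {
      "type": "object",
      "properties": {
        "latitude": { "type": "number", "@type": "geo:lat" },
        "longitude": { "type": "number", "@type": "geo:long" }
      },
      "forms": [
        { "href": "https://tracker.example.com/location", "op": "readproperty" }
      ]
    }
  }
}
```

Which vocabulary to recommend, and how exactly it attaches to data schemas, is precisely the open question raised by this use case.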
Note that accuracy and time are issues that apply to all kinds of sensors, not just geolocation. However, the specific geolocation technology of GPS is special since it is also a source of accurate time.
A Smart City managing a large number of devices whose data needs to be visualized and understood in context.
Stakeholders include:
In order to facilitate Smart City planning and decision-making, a Smart City dashboard interface makes it possible for city management to view and visualize all sensor data through the entire city in real time, with data identified as to geographic source location.
Actuators can include robots; for these, commands might be given to robots to move to new locations, drop off or pick up sensor packages, etc. However, it could also include other kinds of actuators, such as flood gates, traffic signals, lights, signs, etc. For example, posting a public message on an electronic billboard might be one task possible through the dashboard.
Sensors can include those for the environment and for people and traffic management (density counts, thermal cameras, car speeds, etc.). The dashboard presents the status of robots, other actuators, and sensors; data visualization; and (optionally) historical comparisons.
The dashboard would include mapping functionality. Mapping implies a need for location data for every actuator and sensor, which could be acquired through geolocation sensors (e.g. GPS) or assigned statically during installation.
This use case also includes images from cameras and real-time image and data streaming.
Data from a large number and wide variety of sensors needs to be integrated into a single database and normalized, then placed in time and space, and finally visualized.
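The normalization step, mapping heterogeneous readings into one common record placed in time and space, can be sketched as follows. The input and output field names are hypothetical assumptions, not a proposed schema:

```python
import datetime

def normalize(reading: dict, source: str) -> dict:
    """Map a vendor-specific reading into a hypothetical common record
    with value, unit, timestamp, and geolocation, ready for map display."""
    return {
        "source": source,
        "value": reading["v"],
        "unit": reading.get("u", "unknown"),
        "observedAt": datetime.datetime.fromtimestamp(
            reading["ts"], tz=datetime.timezone.utc).isoformat(),
        "location": {"lat": reading["lat"], "lon": reading["lon"]},
    }

rec = normalize({"v": 21.5, "u": "Cel", "ts": 1700000000,
                 "lat": 35.68, "lon": 139.77}, "sensor-42")
print(rec["observedAt"])  # 2023-11-14T22:13:20+00:00
```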
The user, a member of city management responsible for making planning decisions, sees data visualized on a map suitable for planning decisions.
Variants:
Sample flow:
A service, or a user, sends a (SPARQL) query to the discovery endpoint of a known Middle-Node (which can be wrapped by a GUI). The Middle-Node first tries to answer the query by checking the Thing Descriptions of the IoT devices registered with it. Then, if the query requires further discovery, or if it was not successfully answered, the Middle-Node forwards the query to its *known* Middle-Nodes. Recursively, those Middle-Nodes try to answer the query and/or forward it to their own known Middle-Nodes. When a Middle-Node is able to answer the query, it returns the partial query answer to the requesting Middle-Node. Finally, when the discovery task finishes, the original Middle-Node joins all the partial query answers, producing a unified view (which could be delivered synchronously or asynchronously).

This use case is related to the semantic modeling of trustworthy IoT entities in energy-efficient cultural spaces such as museums.
Nowadays, energy-saving issues have attracted the research community's interest due to the ever-increasing global electricity demand. A large share of this demand derives from public and industrial buildings covering their daily load requirements in the provision of their services. Thus, developing energy-efficient buildings can prove beneficial. Notably, improving buildings' energy efficiency leads to Building Energy Management Systems (BEMS).
BEMS objectives include, but are not limited to:

The application of BEMS in the context of energy saving in cultural spaces, and especially in museums, is an evolving recent research interest. The protection and preservation of artworks and ancient objects kept in museums leads to the necessity of continuously monitoring environmental factors and indoor conditions such as temperature, humidity, and CO2. This monitoring involves Internet of Things (IoT) entities, which may be considered an integral part of a BEMS, to reduce energy consumption without: a) sacrificing visitors' experience and indoor comfort levels, and b) sacrificing the protection and preservation of the artworks.
The aim of the presented use case is to sketch and highlight the following requirements for knowledge representation:

Reasoning with this knowledge, the system identifies interesting exhibits and energy-related observations (based on sensing visitors' proximity to exhibits and observing the brightness level of the exhibits' lamps).
For instance, if the brightness level of an exhibit's lamp is "medium" and there are more than two visitors near the exhibit, then this observation is classified as a) an interesting-exhibit observation and b) an observation requiring a high energy level, meaning that the level of energy (light) for the lamp of this exhibit must be raised to high. In another example, if the brightness level of the exhibit's lamp is "medium" and fewer than two visitors are nearby, then this is classified as an observation requiring a low energy level, meaning that the level of energy (light) for the lamp of this exhibit must be lowered to low. These examples indicate that a change (decrease or increase) in the level of light (energy) of the observed exhibit must be applied.
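The two example rules can be encoded directly as a small classifier. This is only a sketch of the rules as stated; the case of exactly two visitors and other brightness levels are left undefined here, as the text does not cover them:

```python
from typing import Optional

def classify(brightness: str, visitors_nearby: int) -> Optional[dict]:
    """Hypothetical encoding of the two example rules from the text."""
    if brightness == "medium" and visitors_nearby > 2:
        # interesting-exhibit observation; raise lamp energy to high
        return {"interesting": True, "target_energy": "high"}
    if brightness == "medium" and visitors_nearby < 2:
        # few visitors; lower lamp energy to low
        return {"interesting": False, "target_energy": "low"}
    return None  # combinations not covered by the examples

print(classify("medium", 3))  # {'interesting': True, 'target_energy': 'high'}
```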
Web of Things Thing Description (WoT TD): representation of IoT entities trust (trustworthy things, trustworthy IoT entities in general i.e., devices, people, processes, data). An IoT-trust related knowledge representation (in OWL) is provided by Kotis et al. as an example: https://github.com/KotisK/IoTontos/blob/master/Ontologies/IoT/IoT-trust-onto-v06.owl (or http://i-lab.aegean.gr/kotis/Ontologies/IoT/IoT-trust-onto-v06.owl).
Related paper: Kotis, K., I. Athanasakis, and G. A. Vouros, "Semantically Enabling IoT Trust to Ensure and Secure Deployment of IoT Entities", Int. J. of Internet of Things and Cyber-Assurance, vol. 1, issue 1: Inderscience, pp. 3-21, 2018. (http://dx.doi.org/10.1504/IJITCA.2018.10011243)
When operating smart buildings, aggregating and managing all data provided by heterogeneous devices in these buildings still require a lot of manual effort. Besides the hurdles of data acquisition that relies on multiple protocols, the acquired data generally lacks contextual information and metadata about its location and purpose. Usually, each service or application that consumes data from building things requires information about its content and its context like, e.g.:
Through the increased use of model-based data exchange over the whole life cycle of a building, often referred to as Building Information Modeling (BIM) (Sacks et al., 2018), a curated source for data describing the building itself is available including, amongst others, the topology of the building structured into e.g. sites, stores and spaces.
Automatically tracking down data and their related things in a building would especially ease the configuration and operation of Building Automation and Control Systems (BACS) and Heating, Ventilation and Air-Conditioning (HVAC) services during commissioning, operation, maintenance and retrofitting. To tackle these challenges, building experts still make use of metadata and naming conventions which are manually implemented in Building Management Systems (BMS) databases to annotate data and things. An important property of a thing is its location within the topology of a building, as well as where its related data are produced or used. For example, this applies to the temperature sensor of a space, the temperature setpoint of a zone, a mixing damper flap actuator of an HVAC component, etc. In addition, other attributes of things are of interest, such as cost or specific manufacturer data. One particular difficulty is the lack of a standardized way of creating, linking and sharing this information in an automated manner. Instead, manufacturers, service providers and users introduce their own metadata for their own purposes. As a solution, the Web of Things (WoT) Thing Description (TD) aims at providing normalized, syntactic interoperability between things.
To support this effort, this use case is motivated by the need to enhance semantic interoperability between things in smart buildings and to provide them with contextual links to building information. This building information is usually obtained from a BIM model. The use case builds on Web of Data technologies and reuses schemas available from the Linked Building Data domain. It should serve as a use case template for many applications in an Internet of Building Things (IoBT).
The goal of this use case is to show the potential to automate workflows and address the heterogeneity of data as observed in the smart building domain. The examples show the potential benefits of combining WoT TD with contextual data obtained from BIM.
The use case is based on the Open Smart Home Dataset, which introduces a BIM model for a residential flat combined with observations made by typical smart home sensors. We extend the dataset with Thing Descriptions for some of the items. The Thing Description of a temperature sensor in the kitchen of the considered flat is as follows:
{
  "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#TemperatureSensor",
  "@context": [
    "https://www.w3.org/2019/wot/td/v1",
    {
      "osh": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#",
      "bot": "https://w3id.org/bot#",
      "sosa": "http://www.w3.org/ns/sosa/",
      "om": "http://www.ontology-of-units-of-measure.org/resource/om-2/",
      "ssns": "http://www.w3.org/ns/ssn/systems/",
      "brick": "https://brickschema.org/schema/Brick#",
      "schema": "http://schema.org/"
    }
  ],
  "title": "TemperatureSensor",
  "description": "Kitchen Temperature Sensor",
  "@type": ["sosa:Sensor", "brick:Zone_Air_Temperature_Sensor", "bot:element"],
  "@reverse": {
    "bot:containsElement": {
      "@id": "osh:Kitchen"
    }
  },
  "securityDefinitions": {
    "basic_sc": {
      "scheme": "basic",
      "in": "header"
    }
  },
  "security": ["basic_sc"],
  "properties": {
    "Temperature": {
      "type": "number",
      "unit": "om:degreeCelsius",
      "forms": [
        {
          "href": "https://kitchen.example.com/temp",
          "contentType": "application/json",
          "op": "readproperty"
        }
      ],
      "readOnly": true,
      "writeOnly": false
    }
  },
  "sosa:observes": {
    "@id": "osh:Temperature",
    "@type": "sosa:ObservableProperty"
  },
  "ssns:hasSystemCapability": {
    "@id": "osh:SensorCapability",
    "@type": "ssns:SystemCapability",
    "ssns:hasSystemProperty": {
      "@type": ["ssns:MeasurementRange"],
      "schema:minValue": 0.0,
      "schema:maxValue": 40.0,
      "schema:unitCode": "om:degreeCelsius"
    }
  }
}
Here, the contextual information on the measurement range of the sensor is specified using the SSN-Systems (ssns) vocabulary. The location of the thing TemperatureSensor is provided based on the Building Topology Ontology (BOT), a minimal ontology developed by the W3C Linked Building Data Community Group (W3C LBD CG) to describe the topology of buildings in the Semantic Web. Additionally, the Thing Description of the corresponding actuator is given below.
{
  "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#TemperatureActuator",
  "@context": [
    "https://www.w3.org/2019/wot/td/v1",
    {
      "osh": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#",
      "bot": "https://w3id.org/bot#",
      "sosa": "http://www.w3.org/ns/sosa/",
      "ssn": "http://www.w3.org/ns/ssn/",
      "brick": "https://brickschema.org/schema/Brick#"
    }
  ],
  "title": "TemperatureActuator",
  "description": "Kitchen Temperature Actuator",
  "@type": ["sosa:Actuator", "brick:Zone_Air_Temperature_Setpoint", "bot:element"],
  "@reverse": {
    "bot:containsElement": {
      "@id": "osh:Kitchen"
    }
  },
  "securityDefinitions": {
    "basic_sc": {
      "scheme": "basic",
      "in": "header"
    }
  },
  "security": ["basic_sc"],
  "actions": {
    "TemperatureSetpoint": {
      "forms": [
        {
          "href": "https://kitchen.example.com/tempS"
        }
      ]
    }
  },
  "ssn:forProperty": {
    "@id": "osh:Temperature",
    "@type": "sosa:ActuatableProperty"
  }
}
The scenario considered relates to the replacement of a temperature sensor in a BACS. The topological information localizing the things (e.g. the temperature sensor) can be used to automatically commission the newly replaced sensor and link it to existing control algorithms. For this purpose, the identifiers of suitable sensors and actuators are needed and can, for example, be queried via SPARQL. Here, the query uses some additional classification of sensors from the Brick schema, v1.1 [[Brick]].
PREFIX bot: <https://w3id.org/bot#>
PREFIX brick: <https://brickschema.org/schema/Brick#>
PREFIX osh: <https://w3id.org/ibp/osh/OpenSmartHomeDataSet#>
SELECT ?sensor ?actuator
WHERE {
  ?space a bot:Space .
  ?space bot:containsElement ?sensor .
  ?space bot:containsElement ?actuator .
  ?sensor a brick:Zone_Air_Temperature_Sensor .
  ?actuator a brick:Zone_Air_Temperature_Setpoint .
}
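The matching performed by the SPARQL query above can be sketched in plain Python over an in-memory set of triples. This is only an illustration: the triple set is a stand-in for a real triple store, and the instance names mirror the example dataset.

```python
# Minimal in-memory stand-in for the triple store of the example dataset.
TRIPLES = {
    ("osh:Kitchen", "rdf:type", "bot:Space"),
    ("osh:Kitchen", "bot:containsElement", "osh:TemperatureSensor"),
    ("osh:Kitchen", "bot:containsElement", "osh:TemperatureActuator"),
    ("osh:TemperatureSensor", "rdf:type", "brick:Zone_Air_Temperature_Sensor"),
    ("osh:TemperatureActuator", "rdf:type", "brick:Zone_Air_Temperature_Setpoint"),
}

def typed(cls):
    """All subjects declared with rdf:type cls."""
    return {s for (s, p, o) in TRIPLES if p == "rdf:type" and o == cls}

def commissioning_pairs():
    """Mirror of the SPARQL query: co-located sensor/setpoint pairs per space."""
    sensors = typed("brick:Zone_Air_Temperature_Sensor")
    actuators = typed("brick:Zone_Air_Temperature_Setpoint")
    pairs = []
    for space in typed("bot:Space"):
        contained = {o for (s, p, o) in TRIPLES
                     if s == space and p == "bot:containsElement"}
        for sen in contained & sensors:
            for act in contained & actuators:
                pairs.append((space, sen, act))
    return pairs
```

A commissioning agent would use the returned identifiers to wire the replaced sensor back into the existing control loop.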
Similarly, this data can be obtained via a REST API built on the HTTP protocol. Below is an example endpoint in REST style for retrieving the same information for a specific space name:
GET "https://server.example.com/api/locations?space=osh:Kitchen&sensorType=brick:Zone_Air_Temperature_Sensor&actuatorType=brick:Zone_Air_Temperature_Setpoint"
API response:
{
  "location": {
    "site": {
      "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#Site1",
      "name": "Site1"
    },
    "building": {
      "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#Building1",
      "name": "Building1"
    },
    "zone": null,
    "storey": {
      "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#Level2",
      "name": "Level2"
    },
    "space": {
      "id": "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#Kitchen",
      "name": "Kitchen"
    },
    "sensors": [
      "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#TemperatureSensor"
    ],
    "actuators": [
      "https://w3id.org/ibp/osh/OpenSmartHomeDataSet#TemperatureActuator"
    ]
  }
}
In this example, the REST endpoint has been defined using the OpenAPI specification and is provided by a RESTful server. A data binding is needed between the server and the underlying backend storage, here the triple store that contains the involved ontologies (osh, bot, ssn, brick, ...). The data binding relies on SPARQL queries similar to the one shown above. As a result, the endpoint can deliver information to a target application that consumes custom JSON rather than triples. A similar implementation could be achieved using GraphQL.
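The core of such a data binding is the translation of the endpoint's query parameters into a SPARQL query. A minimal sketch is shown below; the function name and the templating approach are assumptions for illustration, and a real binding would also validate the CURIEs before templating them into the query.

```python
# Sketch of the REST-to-SPARQL data binding behind the example endpoint.
# Parameter names mirror the example URL (space, sensorType, actuatorType).
def location_query(space, sensor_type, actuator_type):
    """Build a SPARQL query from the REST query parameters."""
    return f"""PREFIX bot: <https://w3id.org/bot#>
PREFIX brick: <https://brickschema.org/schema/Brick#>
PREFIX osh: <https://w3id.org/ibp/osh/OpenSmartHomeDataSet#>
SELECT ?sensor ?actuator
WHERE {{
  {space} bot:containsElement ?sensor .
  {space} bot:containsElement ?actuator .
  ?sensor a {sensor_type} .
  ?actuator a {actuator_type} .
}}"""

q = location_query("osh:Kitchen",
                   "brick:Zone_Air_Temperature_Sensor",
                   "brick:Zone_Air_Temperature_Setpoint")
```

The server would run the generated query against the triple store and reshape the bindings into the custom JSON response shown above.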
Another related use case in smart buildings, which would greatly benefit from harmonised Thing Descriptions with attached location information, is the detection of unexpected behavior, errors and faults. An example of such fault detection is the rule-based surveillance of sensor values. A generic rule applicable to sensors is that the observed values must stay within the measurement range of the sensor. Consider again the maintenance case described above, in which a sensor is replaced.
An agent configuring fault-detection rules can read the measurement range from the sensor's TD (see above) to obtain the parameters for the mentioned rule. Again, a query or API call retrieving this information (schema:minValue / schema:maxValue) can be used to update the upper and lower bounds of the values provided by the sensor.
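Such an agent can be sketched in a few lines: it parses the range annotations out of the TD and builds a surveillance rule from them. The TD fragment mirrors the schema:minValue/schema:maxValue annotations shown earlier; the property name "temperature" is an assumption for illustration.

```python
import json

# TD fragment with the measurement-range annotations from the example above.
TD_FRAGMENT = json.loads("""{
  "properties": {
    "temperature": {
      "schema:minValue": 0.0,
      "schema:maxValue": 40.0,
      "schema:unitCode": "om:degreeCelsius"
    }
  }
}""")

def range_rule(td, prop):
    """Build a fault-detection rule from the TD's declared measurement range."""
    ann = td["properties"][prop]
    lo, hi = ann["schema:minValue"], ann["schema:maxValue"]
    return lambda value: lo <= value <= hi

in_range = range_rule(TD_FRAGMENT, "temperature")
```

When the sensor is replaced, re-running range_rule against the new TD updates the rule's bounds without any manual reconfiguration.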
Security in smart buildings is of major importance. In particular, access control needs to be properly enforced. This also applies to data access, which can be secured using existing security schemes (API keys, OAuth 2.0, ...). Moreover, certain observations, e.g. electricity consumption, can indirectly reveal information such as presence in a home. Hence, security needs must be defined and properly addressed.
Privacy can be a concern if observations of sensors can be matched to individuals. It is the responsibility of building owners, managers and users to define their own privacy policies for their data and to give the necessary consents where required.
Accessibility is a major concern in the buildings domain. Efforts exist to also provide accessibility data in an electronic format. The W3C LBD CG is in contact with the W3C Linked Data for Accessibility Community Group.
Internationalization is a concern as the buildings industry is a global one. This is reflected in several efforts; e.g. BOT, used in the examples above, provides multilanguage labels in up to 16 different languages including English, French and Chinese.
In these settings, devices are usually not commercial off-the-shelf IoT devices, but rather "packaged units" and other "lower level" devices that perform physical tasks on behalf of a larger system: pumps, fans, variable frequency drives, variable air volume boxes and chillers are all examples. Such devices are connected to one another using wires, pipes, ducts and other mechanisms. Sensors, actuators and other data sources and sinks are associated with the devices in these subsystems. Through some digital control system, they relay telemetry on the current behavior, status and performance of devices and properties of the substances and media touched by the building subsystem.
It is important for descriptions of these systems to be built on standardized, well-known names for equipment and other devices in building subsystems. Reliance on generic terminology is not sufficient to distinguish the different kinds of systems and different kinds of equipment in a broadly consistent and interpretable manner. Research and practice show that a common terminology must be established in order to reduce the costs associated with developing and deploying data-driven applications that touch the internals of cyber-physical systems.
To support this use case, WoT descriptions should describe the networked devices present in building subsystems and their data capabilities. These capabilities should be related to properties of the substances or media that a device is operating on. For example, a smart thermostat's API may present a "mode" as a read-only property. "Mode" commonly refers to what the thermostat is "calling for", e.g. cooling, heating, fan; this is commonly captured as a numerical value. The mode is read by HVAC equipment, such as a rooftop-unit, which then enacts the desired conditioning. The WoT description of the mode property should permit the determination of what properties of other devices and entities in the building may be affected by the value of the mode property. In this example, the mode property representation should indicate that the mode property indirectly affects the temperature of air in the rooms that are connected to the equipment controlled by the thermostat.
Example: Rogue Zone Detection
"Rogue zones" are regions of the building that drive demand by calling for heating or cooling significantly more than other zones. One simple way to detect rogue zones is to observe zones (which may consist of multiple rooms) which are consistently above or below their setpoint by more than some delta. The following SPARQL query uses Brick to identify the air temperature setpoint and sensors associated with terminal units, and to identify the zones fed by those terminal units.
PREFIX brick: <http://brickschema.org/schema/Brick#>
SELECT ?term ?zone ?sat ?sp
WHERE {
  ?term a brick:Terminal_Unit .
  ?zone a brick:HVAC_Zone .
  ?sat a brick:Supply_Air_Temperature_Sensor .
  ?sp a brick:Supply_Air_Temperature_Setpoint .
  ?term brick:feeds ?zone .
  ?term brick:hasPoint ?sat, ?sp .
}
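Once the query has bound each zone to its setpoint and sensor, the rogue-zone rule itself can be sketched as a simple check over a window of (measured, setpoint) samples. The delta, the ratio, and the sample data are illustrative assumptions, not values from the source.

```python
# Sketch of the rogue-zone rule over a window of (measured, setpoint) samples.
def is_rogue(samples, delta=2.0, ratio=0.9):
    """True if the zone deviates from its setpoint by more than `delta`
    in at least `ratio` of the observed samples."""
    misses = sum(1 for measured, setpoint in samples
                 if abs(measured - setpoint) > delta)
    return misses >= ratio * len(samples)

# Illustrative sample windows for two zones.
hot_zone = [(26.5, 22.0)] * 9 + [(22.3, 22.0)]   # almost always 4.5 K over
calm_zone = [(22.4, 22.0)] * 10                  # stays within the deadband
```

In practice the windows would be filled from the sensors identified by the SPARQL query, one window per (?zone, ?sat, ?sp) binding.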
Example: Measuring Temperature Before and After a Cooling Coil
A common fault detection and diagnosis operation is to detect broken or underperforming cooling coils. These are hollow loops through which chilled water flows; the loops are placed into an air stream in order to cool the air. The flow of chilled water through the coil is controlled by a valve. In order to tell if the coil is broken or underperforming, the temperature of the air before and after the coil is measured. If the temperature after the coil is not appreciably lower than the temperature before the coil while the valve is open, then there may be a fault on the coil.
PREFIX brick: <http://brickschema.org/schema/Brick#>
SELECT ?ahu ?mat ?sat ?pos ?room
WHERE {
  ?ahu a brick:AHU .
  ?sat a brick:Supply_Air_Temperature_Sensor .
  ?mat a brick:Mixed_Air_Temperature_Sensor .
  ?ccv a brick:Cooling_Valve .
  ?pos a brick:Position_Sensor .
  ?room a brick:Room .
  ?ahu brick:hasPoint ?mat, ?sat .
  ?ahu brick:hasPart ?ccv .
  ?ccv brick:hasPoint ?pos .
  ?ahu brick:feeds+/brick:hasPart? ?room .
}
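The coil check described above can be sketched as a small predicate over the points returned by the query: with the valve open, the supply air temperature should be appreciably lower than the mixed air temperature. The thresholds are illustrative assumptions.

```python
# Sketch of the cooling-coil fault check from the scenario above.
def coil_fault(mixed_air_t, supply_air_t, valve_position,
               min_drop=2.0, valve_open=0.2):
    """Flag a possible coil fault: valve commanded open but little cooling."""
    if valve_position <= valve_open:
        return False  # valve closed: no cooling expected, no conclusion
    return (mixed_air_t - supply_air_t) < min_drop
```

A fault-detection service would evaluate this predicate continuously on the ?mat, ?sat and ?pos streams bound by the SPARQL query.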
A very useful feature would be semantic descriptions of standard enumerations of device statuses, alarms and other multi-valued properties. One example is the numerical encoding of the thermostat mode above (e.g. "0 means off", "1 means 1-stage heat", etc.).
Many of the semantics are standard across manufacturers and models because they describe well-known and industry standard properties that must be accessible by users, but are encoded in different ways. The ability to refer to standardized error codes, device status, and so on would be a tremendous advance towards enabling vendor-agnostic treatment of data.
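A semantic description of such a multi-valued property would let consumers map vendor-specific numeric encodings onto shared, well-known terms. The sketch below follows the "0 means off, 1 means 1-stage heat" example from the text; the other codes and the vendor tables are hypothetical.

```python
# Shared vocabulary of mode terms (illustrative names).
STANDARD_MODES = {"off", "heat_stage_1", "cool_stage_1", "fan_only"}

# Hypothetical per-vendor encodings of the same underlying modes.
VENDOR_A = {0: "off", 1: "heat_stage_1", 2: "cool_stage_1", 3: "fan_only"}
VENDOR_B = {0: "off", 4: "fan_only", 8: "heat_stage_1", 16: "cool_stage_1"}

def decode_mode(vendor_table, raw):
    """Translate a raw numeric mode code into the shared vocabulary."""
    term = vendor_table.get(raw)
    if term not in STANDARD_MODES:
        raise ValueError(f"unmapped mode code: {raw}")
    return term
```

With such mappings published alongside the TD, an application can treat data from both vendors identically.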
Production lines for industrial manufacturing consist of multiple machines, where each machine incorporates sensors for various values. A failure of a single machine can cause defective products or stop the entire production.
Big data analysis makes it possible to identify behavioral patterns across multiple production lines of an entire production plant, and across multiple plants.
The results of this analysis can be used to optimize the consumption of raw materials, check the status of production lines and plants, and predict and prevent fault conditions.
A company owns multiple factories which contain multiple production lines. Example devices are production-line machines and environment sensors. These devices collect data from multiple sensors and transmit this information to the cloud. Sensor data is stored in the cloud and can be visualized and analyzed using machine learning / AI.
The cloud service allows managing individual devices as well as groups of devices. Combining the data streams from multiple devices gives an easy overview of the state of all connected devices in the user's realm.
In many cases there are groups of devices of the same kind, so the aggregation of data across devices can serve to identify anomalies or to predict impending outages.
The cloud service allows managing individual devices as well as groups of devices, and can help identify abnormal conditions. For this purpose the user can define a set of rules that raise alerts towards the user or trigger actions on devices.
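Such user-defined rules can be sketched as data: each rule names a metric, a predicate, and the alert to raise when the predicate fires. All metric names, thresholds, and device names below are illustrative assumptions.

```python
# Illustrative user-defined rules: (metric, violation predicate, alert text).
RULES = [
    ("spindle_temp", lambda v: v > 80.0, "spindle overheating"),
    ("vibration",    lambda v: v > 5.0,  "excessive vibration"),
]

def evaluate(readings):
    """Return (device, alert) pairs for every rule violated by the readings."""
    alerts = []
    for metric, violated, message in RULES:
        for device, values in readings.items():
            if metric in values and violated(values[metric]):
                alerts.append((device, message))
    return alerts

# Latest readings from a hypothetical group of devices of the same kind.
snapshot = {
    "press_7": {"spindle_temp": 92.0, "vibration": 1.2},
    "press_8": {"spindle_temp": 61.0, "vibration": 0.9},
}
```

In a real deployment the alerts would be routed to the user or trigger actions on the affected devices, as described above.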
This enables the early detection of pending problems and reduces the risk of machine outages, quality problems, or threats to the environment or to human life. It increases production efficiency and improves production logistics (such as raw material delivery and production output).
Integrating and interconnecting multiple devices into the common retail workflow (i.e., the transaction log) drastically improves retail business operations at multiple levels. It brings operational visibility, including consumer behavior and environmental information, that was not previously possible or viable in a meaningful way.
It drastically speeds up the process of root cause analysis of operational issues and simplifies the work of retailers.
Note 1: The system should be capable of notifying consumers (such as security personnel) of fever detections. This may be via email, SMS, or some other mechanism, such as an MQTT publication.
Note 2: In all cases where images are captured, privacy considerations apply.
It would also be useful to count unique individuals for statistical purposes, though not necessarily by identifying particular people; the goal is to avoid counting the same person multiple times.
Physiological Closed-Loop Control (PCLC) devices are a group of emerging technologies, which use feedback from physiological sensor(s) to autonomously manipulate physiological variable(s) through delivery of therapies conventionally delivered by clinician(s).
Clinical scenario without PCLC. An elderly female with end-stage renal failure was given a standard insulin infusion protocol to manage her blood glucose, but no glucose was provided. Her blood glucose dropped to 33, then rebounded to over 200 after glucose was given. This scenario has not changed for decades.
The desired state with PCLC implemented in an ICU. A patient is receiving an IV insulin infusion and is having the blood glucose continuously monitored. The infusion pump rate is automatically adjusted according to the real-time blood glucose levels being measured, to maintain blood glucose values in a target range. If the patient’s glucose level does not respond appropriately to the changes in insulin administration, the clinical staff is alerted.
Medical devices do not interact with each other autonomously (monitors, ventilator, IV pumps, etc.) Contextually rich data is difficult to acquire. Technologies and standards to reduce medical errors and improve efficiency have not been implemented in theater or at home.
In recent years, researchers have made progress developing PCLC devices for mechanical ventilation, anesthetic delivery applications, and so on. Despite these promises and potential benefits, there has been limited success in the translation of PCLC devices from bench to bedside. A key challenge to bringing PCLC devices to the level required for clinical trials in humans is risk management to ensure device reliability and safety.
The United States Food and Drug Administration (FDA) classifies new hazards that might be introduced by PCLC devices into three categories. Besides clinical factors (e.g. sensor validity and reliability, inter- and intra-patient physiological variability) and usability/human factors (e.g. loss of situational awareness, errors, and lapses in operation), there are also engineering challenges including robustness, availability, and integration issues.
Security considerations for interconnected and dynamically composable medical systems are critical not only because laws such as [[HIPAA]] mandate it, but also because security attacks can have serious safety consequences for patients. The systems need to support automatic verification that the system components are being used as intended in the clinical context, that the components are authentic and authorized for use in that environment, that they have been approved by the hospital’s biomedical engineering staff and that they meet regulatory safety and effectiveness requirements.
For security and safety reasons, ICE F2761-09(2013) compliant medical devices never interact directly with each other. All interaction is coordinated and controlled via the applications.
While transport-level security such as TLS provides reasonable protection against external attackers, it does not provide mechanisms for granular access control of data streams within the same protected link. Transport-level security is also not flexible enough to balance security against performance. Another issue with widely used transport-level security solutions is the lack of support for multicast.
The expected data include 2D and 3D streams produced by digital microscopes and recordings thereof. These streams may contain metadata which describe the instantaneous magnifications and timescales of data. The expected data also include the output streams produced by services. These streams could, for instance, contain annotation data.
With respect to annotating video streams, one could make use of secondary video tracks with uniquely-identified bounding boxes or more intricate silhouettes defining spatial regions on which to attach semantic data, e.g., metadata or annotations, using yet other secondary tracks. Similar approaches could work for point-cloud-based and mesh-based animations.
Mixed-reality collaborative spaces enable users to visualize and interact with data and to work together from multiple locations on shared tasks and projects.
Digital microscopes could be accessed and utilized from mixed-reality collaborative spaces via the WoT architecture and standards, and could thus be utilized throughout biomedicine, the sciences, and education. Data from digital microscopes could be processed by services to produce outputs useful to users. Users could select and configure one or more such services and route streaming data or recordings through them to consume the resultant data in a mixed-reality collaborative space. Graphs, or networks, of such services could be created by users. Services could also communicate back to digital microscopes to control their mechanisms and settings. Services which simultaneously process digital microscope data and communicate back to control such devices could be of use for providing users with automatic focusing, magnification, and tracking.
Multimodal user interfaces could be dynamically generated for digital microscope content by making use of the output data provided by computer-vision-related services. Such dynamic multimodal user interfaces could provide users with the means of pointing and using spoken natural language to indicate precisely which contents they wish to focus on, magnify, or track. For example, a digital microscope could be magnifying and streaming 2D or 3D imagery of a living animal cell. This data could be processed by a service which provides computer-vision-related annotations, labeling parts of the cell: the cell nucleus, Golgi apparatus, ribosomes, the endoplasmic reticulum, mitochondria, and so forth. The resultant visual content with its algorithmically-generated annotations could then be interacted with by users. Users could point and use spoken natural language to indicate precisely which parts of the living animal cell they wished the digital microscope to focus on, magnify, or track.
Requirements that are not addressed in the current WoT standards or building blocks include streaming protocols and formats for 3D digital microscope data and recordings. While digital microscopes could stream video using a variety of existing protocols and formats, the streaming of other forms of 3D data and animations, e.g., point clouds and meshes, could be facilitated by recommendation.
Users could select and configure one or more services and route data streaming from digital microscopes through them to consume the resultant data in a mixed-reality collaborative space. Additionally, services could be designed to communicate back to and control the mechanisms and settings of digital microscopes. Requirements that are not addressed in the current WoT standards or building blocks include a means of interconnecting services. Perhaps services could utilize WoT architecture and could be described as WoT things, or virtual devices, which provide functionality including that with which to establish data connectivity between them.
Smart Cities: managing roads, public transport and commuting, autonomous and human driven vehicles, transportation tracking and control systems, route information systems, commuting and public transport, vehicles, on-demand transportation, self driving fleets, vehicle information and control systems, infrastructure sharing and payment system, smart parking, smart vehicle servicing, emergency monitoring, etc.
Transport companies: managing shipping, air cargo, train cargo and last mile delivery transportation systems including automated systems.
Commuters: Mobility as a service, booking systems, route planning, ride sharing, self-driving, self-servicing infrastructure, etc.
Provide a common vocabulary for describing transport-related services and solutions that can be reused across sub-categories, for easier interoperability between the various systems owned by different stakeholders.
Thing models could be defined in many subdomains to help integration or interworking between multiple systems.
Transportation of goods can be optimized at global level by enhancing interoperability between vertical systems.
Home smart devices behave according to TV programs.
Hybridcast applications in TV emit information about TV programs for smart home devices. (Hybridcast is a Japanese Integrated Broadcast-Broadband system. Hybridcast applications are HTML5 applications that work on Hybridcast TV.)
A Hybridcast Connect application receives the information and controls smart home devices.
As a consumer of devices I want to be able to process data from any device that conforms to a class of devices.
I want to have a guarantee that I'm able to correctly interact with all affordances of the Thing that complies with this class of devices. Behavioral ambiguities between different implementations of the same description should not be possible.
I want to integrate it into my existing scenarios out of the box, i.e. with close to zero configuration tasks.
One of the most powerful features of the Web of Things is the ability of Thing Descriptions (TDs) to provide an abstract interface. This abstraction can remain constant when device capabilities change, when device suppliers are changed, or when new computational capabilities become available.
A "Virtual Thing" refers to a software simulation of a device conforming to a TD. That TD describes affordances generated in software from inputs that may or may not be similar to a physical thing that the same TD defines.
These inputs most often (but not always) will refer to data streams which, when examined with intelligent software (an AI), will allow that software to imitate the properties, actions, and events that an actual physical device would normally provide.
In a simple case, software could interpret data from a new door sensor product (possibly from a new manufacturer) and imitate the actions, properties, and events supported by the older device. This capability allows consuming software to remain unchanged and insulated from the churn caused by introducing new devices into the ecosystem. The consuming software will continue to use the original Thing Description as the interface definition.
In a more complex case, a data stream can be processed in software to imitate a physical device. Such "virtual things" allow the sensing hardware to be upgraded (in this case to video camera devices) without forcing a complete rewrite of software that was built to consume the original Thing Description. It is also possible for the data stream to be used to imitate multiple "virtual things", and also support new Thing Descriptions alongside the older ones.
Being able to use existing Thing Descriptions as an abstraction for "virtual things" will allow those with a device estate to save considerable time and effort in maintaining software and hardware in the estate.
Expected outcomes:
Retailers would like to avoid the expense of rewriting software when new capabilities become available, and would like to maintain existing functionality even while introducing new and more powerful TDs.
A video camera produces a data stream that can be processed to imitate a variety of "virtual things" defined with existing TDs. One such TD is a "door sensor". The video data stream can be processed to recognize when the door is open or closed; the processing software can emit "doorOpen" boolean events when the door opens or closes, and also emit "doorOpenPastLimit" events if the door has been open for too long. Any consuming software designed to understand the original door sensor TD will continue to work with this more advanced camera hardware, eliminating logistical challenges for retail management and reducing costs.
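The virtual door sensor described above can be sketched as a small state machine that folds the camera's per-frame classification (produced by some vision model, not shown) into the events of the original TD. The event names follow the text; the timestamps, frame format, and the open-time limit are illustrative assumptions.

```python
# Sketch of a "virtual door sensor" driven by per-frame camera classifications.
def door_events(frames, open_limit=30.0):
    """frames: iterable of (timestamp, state) with state "open"/"closed";
    returns TD-style events as (event_name, payload) pairs."""
    events, opened_at, limit_sent = [], None, False
    for ts, state in frames:
        if state == "open" and opened_at is None:
            opened_at, limit_sent = ts, False
            events.append(("doorOpen", True))        # door just opened
        elif state == "closed" and opened_at is not None:
            opened_at = None
            events.append(("doorOpen", False))       # door just closed
        if opened_at is not None and not limit_sent and ts - opened_at > open_limit:
            events.append(("doorOpenPastLimit", ts)) # open for too long
            limit_sent = True
    return events

# Illustrative classified frame stream: door opens at t=10, closes at t=60.
stream = [(0, "closed"), (10, "open"), (20, "open"), (50, "open"), (60, "closed")]
```

Consumers written against the original door sensor TD receive these events unchanged, regardless of the camera behind them.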
A digital twin is the virtual representation of a physical asset such as a machine, a vehicle, a robot, or a sensor. Using a digital twin allows businesses to analyze their physical assets to troubleshoot in real time, predict future problems, minimize downtime, and perform simulations to create new business opportunities.
A digital twin may also be called a twin or a shadow. Digital twin technology may be referred to as device virtualization.
Digital twins can be located in the edge or in the cloud.
Various devices such as sensors, machines, vehicles, production lines, industry robots.
Digital twin platforms at the edge or in the cloud.
The virtual twin is a representation of a physical device or an asset. A virtual twin uses a model that contains observed and desired attribute values and also uses a semantic model of the behavior of the device.
Intermittent connectivity: An application may not be able to connect to the physical asset. In such a scenario, the application must be able to retrieve the last known status and to control the operation states of other assets.
Protocol abstraction: Typically, devices use a variety of protocols and methods to connect to the IoT network. From a user's perspective this complexity should not affect other business applications such as an enterprise resource planning (ERP) application.
Business rules: The user can specify the normal operating range of a property in a semantic model. Business rules can be declaratively defined and actions can be automatically invoked in the edge or on the device.
Example: In a fleet of connected vehicles, the user monitors a collection of operating parameters, such as fuel level, location, speed and others. The semantics-based virtual twin model enables the user to decide whether the operating parameters are in normal range. In out of range conditions the user can take appropriate actions.
In a predictive twin, the digital twin implementation builds an analytical or statistical model for prediction by using a machine-learning technique. It need not involve the original designers of the machine. It is different from the physics-based models that are static, complex, do not adapt to a constantly changing environment, and can be created only by the original designers of the machine.
A data analyst can easily create a model based on external observation of a machine and can develop multiple models based on the user’s needs. The model considers the entire business scenario and generates contextual data for analysis and prediction.
When the model detects a future problem or a future state of a machine, the user can prevent or prepare for them. The user can use the predictive twin model to determine trends and patterns from the contextual machine data. The model helps to address business problems.
In twin projections, the predictions and the insights integrate with back-end business applications, making IoT an integral part of business processes. When projections are integrated with a business process, they can trigger a remedial business workflow.
Prediction data offers insights into the operations of machines. Projecting these insights into the back-end applications infrastructure enables business applications to interact with the IoT system and transform into intelligent systems.
There are multiple user scenarios that are addressed by this use case.
An example in the smart home environment is the automatic control of lamps, air conditioners, heating, and window blinds in a household based on sensor data, e.g. sunlight, human presence, calendar and clock.
In an industrial environment, individual actuators and production devices use different protocols. Examples include MQTT [[MQTT]], OPC UA [[OPC UA]], Modbus [[Modbus]], Fieldbus, and others. Gathering data from these devices, e.g. to support digital twins or big data use cases, requires an "Agent" to bridge across these protocols. To provide interoperability and to reduce the implementation complexity of this agent, a common set of (minimum and maximum) requirements needs to be supported by all interoperating devices.
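The core of such a bridging agent can be sketched as a set of per-protocol adapters that normalize incoming readings into one common shape before forwarding them. The payload layouts for each protocol are invented for illustration; real MQTT and OPC UA payloads depend on the deployment.

```python
# Sketch of a protocol-bridging agent with per-protocol adapters.
def from_mqtt(payload):
    """Hypothetical MQTT payload: {"topic": ..., "value": ...}."""
    return {"source": payload["topic"], "value": payload["value"]}

def from_opcua(payload):
    """Hypothetical OPC UA payload: {"nodeId": ..., "val": ...}."""
    return {"source": payload["nodeId"], "value": payload["val"]}

ADAPTERS = {"mqtt": from_mqtt, "opcua": from_opcua}

def normalize(protocol, payload):
    """Route a raw reading through the adapter for its protocol."""
    return ADAPTERS[protocol](payload)
```

Downstream consumers (digital twins, big data pipelines) then see a single uniform reading format regardless of the originating protocol.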
A smart city environment is similar to the industrial scenario in terms of device interoperability. The devices differ, however; they include smart traffic lights, traffic monitoring, people counters, and cameras.
Many of today's home IoT-enabled devices can provide similar functionality (e.g. audio/video playback), differing only in certain aspects of the user interface. This use case would allow continuous interaction with a specific application as the user moves from room to room, with the user interface switched automatically to the set of devices available in the user's present location.
On the other hand, some devices can have specific capabilities and user interfaces that can be used to add information to a larger context that can be reused by other applications and devices. This drives the need to spread an application across different devices to achieve a more user-adapted and meaningful interaction according to the context of use. Both aspects provide arguments for exploring use cases where applications use distributed multimodal interfaces.
The increase in the number of controllable devices in an intelligent home creates a problem with controlling all available services in a coherent and useful manner. Having a shared context, built from information collected through sensors and direct user input, would improve recognition of user intent, and thus simplify interactions.
In addition, multiple input mechanisms could be selected by the user based on device type, level of trust and the type of interaction required for a particular task.
Smart home functionality (window blinds, lights, air conditioning etc.) is controlled through a multimodal interface, composed from modalities built into the house itself (e.g. speech and gesture recognition) and those available on the user's personal devices (e.g. smartphone touchscreen). The system may automatically adapt to the preferences of a specific user, or enter a more complex interaction if multiple people are present.
Sensors built into various devices around the house can act as input modalities that feed information to the home and affect its behavior. For example, lights and temperature in the gym room can be adapted dynamically as workout intensity recorded by the fitness equipment increases. The same data can also increase or decrease volume and tempo of music tracks played by the user's mobile device or the home's media system.
OAuth 2.0 is an authorization protocol widely known for its usage across several web services. It enables third-party applications to obtain limited access to HTTP services on behalf of the resource owner or of itself. The protocol defines the following actors:
These actors can be mapped to WoT entities:
TO DO: Check the OAuth 2.0 spec to determine exactly how Resource Owner is defined. Is it the actual owner of the resource (e.g. running the web server) or simply someone with the rights to access that resource?
The OAuth 2.0 protocol specifies an authorization layer that separates the client from the resource owner. The basic steps of this protocol are summarized in the following diagram:
+--------+                               +---------------+
|        |--(A)- Authorization Request ->|   Resource    |
|        |                               |     Owner     |
|        |<-(B)-- Authorization Grant ---|               |
|        |                               +---------------+
|        |
|        |                               +---------------+
|        |--(C)-- Authorization Grant -->| Authorization |
| Client |                               |     Server    |
|        |<-(D)----- Access Token -------|               |
|        |                               +---------------+
|        |
|        |                               +---------------+
|        |--(E)----- Access Token ------>|    Resource   |
|        |                               |     Server    |
|        |<-(F)--- Protected Resource ---|               |
+--------+                               +---------------+
Steps A and B define what is known as an authorization grant type, or flow. What is important to realize here is that not all of these interactions are meant to take place over a network protocol; in some cases, interaction with a human through a user interface may be intended. OAuth 2.0 defines four basic flows plus an extension mechanism. The most common of them are:
In addition, a particular extension of interest to IoT is the `device` flow. Further information about the OAuth 2.0 protocol can be found in IETF RFC 6749. In addition to the flows, OAuth 2.0 also supports scopes. Scopes are identifiers that can be attached to tokens. They can be used to limit authorizations to particular roles or actions in an API. Each token carries a set of scopes, which can be checked when an interaction is attempted, so access can be denied if the token does not include a scope required by the interaction. This document describes relevant use cases for each of the OAuth 2.0 authorization flows.
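Scope checking at a WoT affordance can be sketched in a few lines: the scopes carried by the token (as granted by the authorization server) are compared against the scope the attempted interaction requires. The scope names and the mapping from affordances to scopes are illustrative assumptions, not part of any specification.

```python
# Illustrative mapping from WoT affordances to the OAuth 2.0 scope they require.
REQUIRED = {
    ("properties", "temperature"): "read:temperature",
    ("actions", "TemperatureSetpoint"): "write:setpoint",
}

def authorized(token_scopes, kind, name):
    """Allow the interaction only if the token carries the required scope."""
    needed = REQUIRED[(kind, name)]
    return needed in token_scopes
```

A Thing would run this check on every request and return an authorization error when the token's scopes are insufficient.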
For each OAuth 2.0 flow, there is a corresponding use case variant. We also include the experimental "device" flow for consideration.
code
A natural application of this protocol is when the end-user wants to interact directly with the consumed Thing or to grant their authorization to a remote device. In fact, from RFC 6749:
This implies that the code flow can only be used when the resource owner interacts directly with the WoT consumer at least once. Typical scenarios are:
The following diagram shows the steps of the protocol adapted to WoT idioms and entities. In this scenario, the WoT Consumer has read the Thing Description of a Remote Device and wants to access one of its WoT Affordances, protected with the OAuth 2.0 code flow.
+-----------+                +----------+
|           |                | Resource |
|  Remote   |                |  Owner   |
|  Device   +<-------+       |          |
|           |        |       +----+-----+
+-----------+        |            ^
                     |            |
                     |           (B)
                     |  +---------+---+         Client Identifier      +---------------+
                     |  |           --+--(A)--  & Redirection URI ---->+               |
                     |  |    User-    |                                | Authorization |
                     |  |    Agent  --+--(B)--  User authenticates --->+     Server    |
                     |  |             |                                |               |
                     |  |           --+--(C)--  Authorization Code ---<+               |
                     |  +---+-----+---+                                +---+------+----+
                     |      |     |                                        ^      v
                     |     (A)   (C)                                       |      |
                     |      |     |                                        |      |
                     |      ^     v                                        |      |
                     |  +---+-----+---+                                    |      |
                     |  |             +>--(D)-- Authorization Code --------'      |
                     |  |     WoT     |         & Redirection URI                 |
                     |  |   Consumer  |                                           |
                     |  |             +<--(E)----- Access Token ------------------'
                     |  +------+------+     (w/ Optional Refresh Token)
                     |         |
                     +---------+--(F)-- Access WoT Affordance
Notice that steps (A), (B), and (C) are split into two parts as they pass through the User-Agent.
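Step (A) above can be sketched as the construction of the authorization request URL to which the WoT Consumer redirects the resource owner's user-agent. The endpoint, client identifier, redirect URI, and scope below are hypothetical:

```python
# Sketch of step (A) of the code flow: building the authorization request URL.
# All endpoint and client values are hypothetical examples.
from urllib.parse import urlencode

def authorization_request_url(auth_endpoint, client_id, redirect_uri, scope, state):
    params = {
        "response_type": "code",   # selects the authorization code grant
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,            # CSRF protection, echoed back in step (C)
    }
    return auth_endpoint + "?" + urlencode(params)

url = authorization_request_url(
    "https://auth.example.com/authorize",
    "wot-consumer-1",
    "https://consumer.example.com/cb",
    "read",
    "xyz")
print(url)
```

In step (C) the authorization server redirects the user-agent back to the redirect URI with a `code` (and the same `state`), which the consumer then exchanges for a token in steps (D)/(E).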
device
The device flow (IETF RFC 8628) is a variant of the code flow for browserless and input-constrained devices. Similarly to its parent flow, it requires close interaction between the resource owner and the WoT consumer. Therefore, the use cases for this flow are the same as for the code authorization grant, but restricted to devices that do not have a rich means of interacting with the resource owner. However, unlike `code`, RFC 8628 explicitly states that one of the actors in the protocol is an end user interacting with a browser (although Section 6.2 briefly describes authentication using a companion app and BLE), as shown in the following (slightly adapted) diagram:
+----------+
|          |
|  Remote  |
|  Device  |
|          |
+----^-----+
     |
     | (G) Access WoT Affordance
     |
+----+-----+                                +----------------+
|          +>---(A)-- Client Identifier --->+                |
|          |                                |                |
|          +<---(B)-- Device Code, ---------+                |
|          |          User Code,            |                |
|   WoT    |          & Verification URI    |                |
| Consumer |                                |                |
|          |  [polling]                     |                |
|          +>---(E)-- Device Code --------->+                |
|          |          & Client Identifier   |  Authorization |
|          |                                |     Server     |
|          +<---(F)-- Access Token ---------+                |
+-----+----+   (& Optional Refresh Token)   |                |
      v                                     |                |
      :                                     |                |
     (C) User Code & Verification URI       |                |
      :                                     |                |
      ^                                     |                |
+-----+----+                                |                |
| End User |                                |                |
|    at    +<---(D)-- End user reviews ---->+                |
|  Browser |          authorization request |                |
+----------+                                +----------------+
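The polling in steps (E)/(F) can be sketched as follows. The token endpoint here is a stub standing in for a real authorization server, and all identifiers are hypothetical:

```python
# Sketch of the device flow polling step: the WoT Consumer polls the token
# endpoint with its device code until the end user approves (or an error occurs).
# fake_token_endpoint is a stub replacing real HTTP requests to a server.

def fake_token_endpoint(responses):
    it = iter(responses)
    def poll(device_code):
        return next(it)
    return poll

def poll_for_token(poll, device_code, max_attempts=10):
    for _ in range(max_attempts):
        reply = poll(device_code)
        if "access_token" in reply:
            return reply["access_token"]
        if reply.get("error") != "authorization_pending":
            raise RuntimeError(reply.get("error", "unknown error"))
        # a real client would sleep for the advertised "interval" here
    raise TimeoutError("user never completed authorization")

poll = fake_token_endpoint([
    {"error": "authorization_pending"},   # user has not approved yet
    {"error": "authorization_pending"},
    {"access_token": "abc123"},           # user approved in the browser
])
print(poll_for_token(poll, "device-code-1"))  # abc123
```

RFC 8628 also defines a `slow_down` error, which instructs the client to increase its polling interval; a production client must honor it.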
Notable mentions:
client credential
The Client Credentials grant type is used by clients to obtain an access token outside of the context of an end user. From RFC 6749:
Therefore the client credential grant can be used:
The Client Credentials flow is illustrated in the following diagram. Notice how the Resource Owner is not present.
+----------+
|          |
|  Remote  |
|  Device  |
|          |
+----^-----+
     |
     | (C) Access WoT Affordance
     |
+----+-----+                                  +---------------+
|          |                                  |               |
|          +>--(A)- Client Authentication --->+ Authorization |
|   WoT    |                                  |     Server    |
| Consumer +<--(B)---- Access Token ---------<+               |
|          |                                  |               |
+----------+                                  +---------------+
Comment: Usually client credentials are distributed using an external service that humans use to register a particular application. For example, the `npm` CLI has a companion dashboard where a developer requests the generation of a token that is then passed to the CLI. The token is used to verify the publishing process of `npm` packages in the registry. Further examples are the Docker CLI and OpenID Connect Client Credentials.
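The token request in steps (A)/(B) can be sketched as the form-encoded body a WoT Consumer would POST to the token endpoint. The client identifier and secret are hypothetical; note that RFC 6749 also allows passing the credentials via HTTP Basic authentication instead of body parameters:

```python
# Sketch of the client credentials grant token request (hypothetical credentials).
from urllib.parse import urlencode

def client_credentials_request(client_id, client_secret, scope=None):
    params = {"grant_type": "client_credentials"}
    if scope:
        params["scope"] = scope
    # Alternatively, RFC 6749 permits HTTP Basic authentication with
    # client_id/client_secret instead of including them in the body.
    params["client_id"] = client_id
    params["client_secret"] = client_secret
    return urlencode(params)

body = client_credentials_request("wot-consumer-1", "s3cret", scope="read")
print(body)
```

The authorization server's response (step B) is a JSON object containing at least `access_token` and `token_type`.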
implicit
Deprecated. From the OAuth 2.0 Security Best Current Practice:
The RFC above suggests using `code` flow with Proof Key for Code Exchange (PKCE) instead.
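As a brief illustration of PKCE: the S256 code challenge is the unpadded base64url encoding of the SHA-256 hash of the code verifier. The values below are the test vector from RFC 7636, Appendix B:

```python
# Sketch of PKCE (RFC 7636): deriving the S256 code challenge from a
# code verifier. Uses the test vector from RFC 7636, Appendix B.
import base64
import hashlib

def s256_challenge(verifier: str) -> str:
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    # base64url without padding, as required by RFC 7636
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
print(s256_challenge(verifier))
# E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

The client sends the challenge in the authorization request and the verifier in the token request, so an intercepted authorization code cannot be redeemed by an attacker who lacks the verifier.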
The implicit flow was designed for public clients, typically implemented inside a browser (i.e. JavaScript clients). Like the `code` flow, it is redirection-based and requires direct interaction with the resource owner's user-agent. However, it requires one less step to obtain a token, as the token is returned directly in the authorization request (see the diagram below).
Considering the WoT context this flow is not particularly different from `code` grant and it can be used in the same scenarios.
Comment: even though the `implicit` flow is deprecated, existing services may still be using it.
+----------+
| Resource |
|  Owner   |
|          |
+----+-----+
     ^
     |
    (B)
+----+-----+          Client Identifier      +---------------+
|        --+----(A)-- & Redirection URI ---->+               |
|  User-   |                                 | Authorization |
|  Agent --+----(B)-- User authenticates --->+     Server    |
|          |                                 |               |
|        --+<---(C)--- Redirection URI -----<+               |
|          |        with Access Token        +---------------+
|          |           in Fragment
|          |                                 +---------------+
|        --+----(D)--- Redirection URI ----->+  Web-Hosted   |
|          |         without Fragment        |    Client     |
|          |                                 |   Resource    |
|   (F)  --+<---(E)------- Script ----------<+               |
|          |                                 +---------------+
+-+----+---+
  |    |
 (A)  (G) Access Token
  |    |
  ^    v
+-+----+---+                           +----------+
|          |                           |  Remote  |
|   WoT    +>----(H)--- Access WoT --->+  Device  |
| Consumer |           Affordance      |          |
|          |                           |          |
+----------+                           +----------+
resource owner password
Deprecated. From the OAuth 2.0 Security Best Current Practice:
For completeness, the flow diagram is reported below.
+----------+
| Resource |
|  Owner   |
|          |
+----+-----+
     v
     |    Resource Owner
    (A)   Password Credentials
     |
     v
+----+-----+                                  +---------------+
|          +>--(B)---- Resource Owner ------->+               |
|          |     Password Credentials         | Authorization |
|   WoT    |                                  |     Server    |
| Consumer +<--(C)---- Access Token ---------<+               |
|          |  (w/ Optional Refresh Token)     |               |
+----+-----+                                  +---------------+
     |
     | (D) Access WoT Affordance
     |
+----v-----+
|  Remote  |
|  Device  |
|          |
+----------+
Actors (representing a physical person or a group of persons, e.g. a company):

- Manufacturer
- Service Provider
- Network Provider (potentially transparent for WoT use cases)
- Device Owner (User)
- Others?

Roles:

Depending on the use case, an actor can have multiple roles, e.g. security maintainer. Roles can be delegated.

The following categories group Use Cases that share a common property. In the definition of a User Story, use case categories can be cited as motivations rather than (or in addition to) specific use cases.
Provides a public service. Misuse can result in lack of support to other users.
Handles personal or confidential information. Misuse could disclose personally identifiable information (PII) or sensitive business information.
Misuse has the potential to cause personal injury.
Misuse has the potential to cause financial injury or damage to business operations or reputation.
Simplifying the process of TD construction is helpful to ease the task of TD writers and generators.
User stories provide a high-level summary of a requirement in the form of a single sentence that describes a stakeholder (Who has the need), a technical requirement (What they need; a capability or condition; features), and a functional requirement (Why they need it; the purpose or motivation; use cases). These are often stated in the form of a sentence: "As a Who I need What so that I can Why." For clarity, the following user stories break out each part in a list. Each user story will in addition identify one or more Use Case (or Use Case Categories) that establishes the motivation for the identified capability. Capabilities correspond to sets of features in other technical specifications.
Define how user stories and features are linked. We can link back from implemented features to user stories, but we want to avoid unstable links, e.g. to documents maintained on GitHub. If we have bidirectional links, we need some mechanism to keep them consistent.
Reusable Connection descriptions in a TD.
Better describe connection oriented protocols such as MQTT and WebSockets.
Simplify TDs in cases where default terms are not used, or to avoid redundancy.
There are at least three sub-problems that motivate this feature:
We will keep this and following subsection for now but should consider reorganizing them into user stories.
This section defines the properties required in an abstract Web of Things (WoT) architecture.
There are a wide variety of physical device configurations for WoT implementations. The WoT abstract architecture should be able to be mapped to and cover all of the variations.
There are already many existing IoT solutions and ongoing IoT standardization activities in many business fields. The WoT should provide a bridge between these existing and developing IoT solutions and Web technology based on WoT concepts. The WoT should be upwards compatible with existing IoT solutions and current standards.
WoT must be able to scale for IoT solutions that incorporate thousands to millions of devices. These devices may offer the same capabilities even though they are created by different manufacturers.
WoT must provide interoperability across device and cloud manufacturers. It must be possible to take a WoT-enabled device and connect it with a cloud service from a different manufacturer out of the box.
The W3C WoT Thing Architecture [[wot-architecture]] defines the abstract architecture of Web of Things and illustrates it with various system topologies. This section describes technical requirements derived from the abstract architecture.
The use cases help to identify basic components such as devices, applications that access and control those devices, and proxies (i.e., gateways and edge devices) that are located between them. An additional component useful in some use cases is the directory, which assists with discovery.
These components are connected to the internet or to field networks in offices, factories, or other facilities. Note that all components involved may be connected to a single network in some cases; in general, however, components can be deployed across multiple networks.
Access to devices is made using a description of their functions and interfaces. This description is called a Thing Description (TD). A Thing Description includes general metadata about the device, information models representing its functions, a transport protocol description for operating on the information models, and security information.
General metadata contains device identifiers (URIs) and device information such as serial number, production date, location, and other human-readable information.
Information models define device attributes and represent a device's internal settings, control functionality, and notification functionality. Devices that have the same functionality have the same information model regardless of the transport protocols used.
Because many systems based on the Web of Things architecture cross system domains, the vocabularies and metadata (e.g. ontologies) used in information models should be commonly understood by the involved parties. In addition to REST transports, PubSub transports are also supported.
Security information includes descriptions of authentication, authorization, and secure communications. Devices are required to hold TDs either on the device itself or at locations external to the device, and to make the TDs accessible so that other components can find and access them.
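As a sketch of how the metadata described above fits together, the following builds a minimal TD-like structure with an OAuth 2.0 security definition, following the TD vocabulary (`securityDefinitions`, `security`, `properties`, `forms`). The device name, endpoints, and scope names are hypothetical:

```python
# Sketch of the security-related portion of a Thing Description.
# Device name, endpoint URLs, and scopes are hypothetical examples.
import json

td = {
    "@context": "https://www.w3.org/2022/wot/td/v1.1",
    "title": "ExampleLamp",
    "securityDefinitions": {
        "oauth2_sc": {
            "scheme": "oauth2",
            "flow": "code",
            "authorization": "https://auth.example.com/authorize",
            "token": "https://auth.example.com/token",
            "scopes": ["read", "control"],
        }
    },
    "security": ["oauth2_sc"],
    "properties": {
        "status": {
            "type": "string",
            "forms": [{
                "href": "https://lamp.example.com/status",
                "scopes": ["read"],  # this affordance only requires "read"
            }],
        }
    },
}

print(json.dumps(td, indent=2))
```

A Consumer reading such a TD learns both how to reach the affordance (the form's `href`) and how to obtain authorization for it (the `oauth2_sc` definition and the required scope).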
Applications need to be able to generate and use network and program interfaces based on metadata (descriptions).
Applications have to be able to obtain these descriptions through the network; therefore, they need to be able to conduct search operations and acquire the necessary descriptions over the network.
Digital Twins need to generate program interfaces internally based on metadata (descriptions), and to represent virtual devices by using those program interfaces. A twin has to produce a description for the virtual device and make it externally available.
Identifiers of virtual devices need to be newly assigned and therefore differ from those of the original devices. This ensures that virtual devices and the original devices are clearly recognized as separate entities. Transport and security mechanisms and settings of the virtual devices can differ from those of the original devices if necessary. Virtual devices are required to have descriptions provided either directly by the twin or available at external locations. In either case, the descriptions must be made available so that other components can find and use the devices associated with them.
For the TDs of devices and virtual devices to be accessible from devices, applications, and twins, there needs to be a common way to share TDs. Directories can serve this requirement by providing functionality that allows devices and twins to register their descriptions automatically, or users to register them manually.
Descriptions of devices and virtual devices need to be searchable by external entities. Directories have to be able to process search operations with search keys such as keywords from the general metadata or the information models in a device description.
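The registration and keyword-search requirements above can be sketched with a minimal in-memory directory. This is only an illustration of the requirement, not the WoT Discovery API; the TD identifiers and titles are hypothetical:

```python
# Sketch of a directory with TD registration and keyword search
# (illustration of the requirement only, not the WoT Discovery API).

class Directory:
    def __init__(self):
        self._tds = {}

    def register(self, td):
        """Devices or twins register their TD, keyed by the TD identifier."""
        self._tds[td["id"]] = td

    def search(self, keyword):
        """Return TDs whose title or description mentions the keyword."""
        keyword = keyword.lower()
        return [td for td in self._tds.values()
                if keyword in td.get("title", "").lower()
                or keyword in td.get("description", "").lower()]

directory = Directory()
directory.register({"id": "urn:dev:lamp-1", "title": "Kitchen Lamp"})
directory.register({"id": "urn:dev:sensor-1", "title": "Temperature Sensor",
                    "description": "Living room sensor"})

print([td["id"] for td in directory.search("lamp")])  # ['urn:dev:lamp-1']
```

A real directory would additionally support semantic search over the information models (e.g. by ontology terms), not just free-text keywords.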
Risks are defined in detail in the Security and Privacy Guidelines document. The following just relates risks to categories. Each use case that is subject to a risk results in requirements to mitigate that risk for that use case. Requirements are named for the associated mitigation. Some risks may require multiple mitigations.
The Web of Things primarily targets machine-to-machine communication. The humans involved are usually developers that integrate Things into applications. End-users will be faced with the front-ends of the applications or the physical user interfaces provided by devices themselves. Both are out of scope of the W3C WoT specifications. Given the focus on IoT instead of users, accessibility is not a direct requirement, and hence is not addressed within this specification.
There is, however, an interesting aspect on accessibility: Fulfilling the requirements above enables machines to understand the network-facing API of devices. This can be utilized by accessibility tools to provide user interfaces of different modality, thereby removing barriers to using physical devices and IoT-related applications.
The following is under review and development. In particular, the association of use cases with requirements ideally should be reviewed by use case submitters and the status of completion of each requirement is mostly still marked as TBD while we review the alignment of these requirements with current publications.
The WoT discovery process should have the following capabilities:
The Web of Things standardization initiative has liaisons with several other SDOs and collaborates on common use cases and alignment of terminology.
The following section is not exhaustive, it describes the current status, and additional liaisons are under consideration.
The ECHONET Consortium is an organization that promotes the communication protocol "ECHONET Lite", which enables home appliances and housing facilities, essential elements of smart homes, to cooperate with each other. We are standardizing ECHONET Lite and promoting the spread of smart homes by supporting the commercialization of devices that implement the ECHONET Lite standards and by cooperating with related industries. We also develop guidelines for the ECHONET Lite Web API, which can be used to access ECHONET Lite devices via a Web server with a RESTful Web API.
At the PlugFest in October 2021, WoT consumers connected to ECHONET Lite Web API devices via an intermediary that translates the HTTP message format. We think it is desirable for the WoT specifications to support transparent interconnection between a WoT consumer and non-WoT devices that use HTTP as a transport protocol, including ECHONET Lite Web API devices. We hope that the WoT WG will investigate a solution for this.
ECLASS has established itself as a worldwide reference-data standard for the classification and unambiguous description of products and services.
The [[ECLASS]] e.V. association is currently working on an RDF transformation of the ECLASS Standard with a focus on the W3C WoT Standard.
OPC UA [[OPC UA]] is one of the important automation standards for device communication in the factory domain as well as for Industry 4.0 scenarios such as flexible manufacturing.
WoT should support a standardized binding to OPC UA endpoints to enable simple application development such as for cross-domain applications.
Such a binding needs its own set of OPC UA-specific vocabulary definitions, which should be developed together with experts from the OPC Foundation.
This guarantees that the binding is accepted within the OPC UA community as well as in the WoT community, and avoids heterogeneous (project-specific) definitions and incompatible OPC UA handling in Thing Descriptions.
The EdgeX Foundry [[EDGEX]] is a community-driven project, organized under the Linux Foundation, to define a reference software architecture for IoT hubs. Its goal is to enable interoperability by combining a set of key IoT services with a set of interfaces to a variety of IoT device protocols and ecosystems. There is a reference implementation of the EdgeX architecture.
The EdgeX Foundry reference architecture provides a set of protocol translation services and exposes interfaces to a variety of ecosystems and devices. However, it currently lacks a standard, IoT-appropriate metadata format to describe the device interfaces (and the data models for those interfaces) that it exposes on the network. The WoT Thing Description could fulfill this role; otherwise, the EdgeX Foundry architecture fits within the general framework of a WoT system.
Many thanks to the W3C staff and all other active Participants of the W3C Web of Things Interest Group (WoT IG) and Working Group (WoT WG) for their support, technical input and suggestions that led to improvements to this document.
Special thanks to all authors of use case descriptions (in alphabetical order) for their contributions to this document:
Special thanks to Dr. Kazuyuki Ashimura from the W3C for the continuous help and support of the work of the WoT Use Cases Task Force.