MMI as a possible user interface layer for the Web of Things (WoT) framework.
MMI integrates various user interface modalities (and the devices which provide those modalities) for WoT purposes.
Status of This Document
TBD
1 Template for Use Cases
This is a generic template for proposing use cases.
Why were you not able to use only existing standards to accomplish this?
Dependencies:
other use cases, proposals or other ongoing standardization activities which this use case is dependent on or related to
Viewpoint:
for example, specific industry (health care, hospitality industry, retail, automotive), accessibility or daily life, from both the user's viewpoint and the provider's viewpoint.
What needs to be standardized:
What are the new requirements, i.e. functions that don't exist, generated by this use case?
Comments:
any relevant comments that are worth tracking for this use case
2 Use Cases
Note on Accessibility/Security/Privacy/Safety
We need to think about horizontal issues, e.g., accessibility, security, privacy and safety, for each use case to make the MMI-based systems even more useful.
User authentication, device authentication, physical danger and safety are typical viewpoints.
Authentication using SCXML should be feasible, but it can become complex if, for example, you want to hide part of a screen from an onlooker.
Safety would be especially important for people with disabilities.
We need to think about the relationship between accessibility and security if everything is connected.
Making the system easier to access for people with disabilities should not make it easier for malicious people to access.
We'd like to update the UC template for that purpose.
2.1 High priority use cases
2.1.1 UC-2: Interaction with Robots
Submitter(s):Kaz & Debbie
Reviewer(s): Masahiro, Kosuke, Shinya
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Status:
The Japanese participants would be the best people to elaborate on this. Helena and Raj may also be interested.
Description:
controlling robots (including both industrial robots and personal robots) using a multimodal interface such as voice and gesture
A typical example is interaction between humans and multiple pet robots, e.g., Pepper, AIBO, Nao (French) and Robi (Japanese). Interoperable robot interaction?
may use BehaviorML or VRML for the MC
a Behavior Tree could be used for the IM
possible requirements identified during TPAC breakout session:
offline operation
scheduled operation
realtime operation
authorization
discovery and vocabularies
human to machine
machine to machine
privacy
virtual robot/agent (software) to robot
asynchronous operation
messaging standards
graceful degradation
should be possible to control other devices through a robot instead of through a smartphone or remote control
Motivation:
Interactive and adaptive control is useful in the case of accidents or errors
The elderly population will increase in the near future, and people will need help from (or will enjoy interacting with) pet robots.
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
The current MMI Life Cycle events don't have the concept of scheduling or events scheduled to occur in the future. The simplest approach is to use the ExtensionNotification event, but that's not very standardized. Other ideas would be to use SMIL (but that's mostly for media), MIDI, SCXML, use the EMMA output functionality, or introduce a new attribute into the Life Cycle events. In addition to starting at a certain time, scheduling also introduces the idea of recurring events, for example, every day at 5:00.
Options
1. Data field of ExtensionNotification: This could be used, but the idea of scheduling is very generic and should be standardized.
2. SMIL: SMIL is used for presenting synchronized media; it doesn't seem appropriate for scheduling Life Cycle events, which may not be user-facing.
3. EMMA: The reasons against using the EMMA output functionality are similar to the arguments against SMIL. EMMA output is designed for output that will be consumed by a user.
4. MIDI: MIDI is also designed for output to a user.
5. SCXML: the "delay" and "delayexpr" attributes of "<send>" may be suitable for scheduling.
6. Add a new common field or fields to the Life Cycle events.
7. MCs could also have an internal scheduling capability, independent of the IM.
8. EMMA 2.0 output could also be responsible for scheduling, using "emma:planned-output" (perhaps together with SMIL); starting relative to other events should also be considered.
9. EMMA 2.0 could also be used for scheduling interactions that don't directly affect the UI (scheduling a DVR, for example).
The choice between options 5 and 6 depends at least in part on whether the IM or the MC is responsible for the scheduled action. If the IM is responsible for scheduling, SCXML "delay" would be more appropriate, because the IM would keep track of the schedule and send the Life Cycle events at the correct times. On the other hand, if the MC is responsible, it can be sent, for example, a StartRequest with a "delay", and the MC will be responsible for starting at the correct time. It may be that both are needed. If we want to maintain the independence of the MMI Architecture from SCXML, we can just say the IM (however it's implemented) is responsible for scheduling.
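To make option 5 concrete, here is a minimal SCXML sketch of an IM scheduling a StartRequest with "delay"; the event name, target URI, transport type and timing are hypothetical placeholders, not defined by the MMI or SCXML specifications:

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="scheduled">
    <state id="scheduled">
      <onentry>
        <!-- hypothetical: deliver a StartRequest to an MC after 30 seconds;
             a recurring "every day at 5:00" schedule would need extra logic -->
        <send event="mmi.startRequest" id="sched-1" delay="30s"
              type="http://www.w3.org/TR/scxml/#BasicHTTPEventProcessor"
              target="http://example.com/mc/tv"/>
      </onentry>
      <!-- cancel the pending event if the user withdraws the request -->
      <transition event="user.cancel" target="idle">
        <cancel sendid="sched-1"/>
      </transition>
    </state>
    <state id="idle"/>
  </scxml>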
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.1.2 UC-7: Multimodal Appliances
Submitter(s):Kaz, Dirk (Harman)
Reviewer(s): Shinya, Masahiro, Kosuke, Ryuichi
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Status:
The Japanese participants would be the best people to elaborate on this
Relationship between Entities included in this use case:
Description:
control home appliances, like a rice cooker, using a multimodal interface such as speech and gesture
the user, location and orientation are identified, and an appropriate service is provided
e.g., a digital TV with a speech interface
an HMI which helps people use appliances at home easily, e.g., assisting people's sight by enlarging the image, assisting their hearing with clear audio and a haptic interface
wearable devices and AR-ready displays would be nice to include; maybe a smart car could also be a target
we should consider what the (significant) merit of using a multimodal interface for multiple-device integration is
for example, we could integrate multiple devices and multiple Web services so that we can place an order for a pizza
this is related to UC-2, interaction with robots; robots could be used as smart/friendly/easy-to-use remotes for appliances
we can talk to a pet robot and ask him/her to turn on the TV, change the temperature of the air conditioner, or get a can of juice from the fridge
it is a kind of "personal agent" which remembers each person's preferences
Smart remote for appliances (former UC-18, which is merged into UC-7 here)
We could talk with a cute pet robot and ask him/her to control specific appliances or bring us a snack. The robot could interact with the world or with devices.
The robot could be an intermediary between the user and devices.
The robot could also be considered a kind of "user agent" that can do things for the user.
Motivation:
@@@
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
In the paper cited above, the authors found that it was hard for developers to be familiar with all the standards
In this project it was hard to discover devices and all their capabilities
The authors also discovered that they needed shortcuts for feedback: if the user's command has to be combined with information from the environment to be understood, that information would be sent directly to the fusion engine instead of to the IM.
There should also be a shared data model that could be used by the fusion and fission engines as well as the IM (but not the MCs)
What needs to be standardized:
New features? APIs? data model? language?
A shared data model, like a blackboard, that could be used by the fusion and fission components and the upper interaction manager (see the sketch after this list)
Separate fission and fusion components
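As a purely illustrative sketch of such a blackboard, an IM implemented in SCXML could hold the shared data in its datamodel, with the fusion and fission components reading and writing it through events; the field and event names below are invented:

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0"
         datamodel="ecmascript" initial="running">
    <!-- hypothetical shared blackboard visible to fusion, fission and the IM -->
    <datamodel>
      <data id="userLocation" expr="'kitchen'"/>
      <data id="lastCommand" expr="null"/>
    </datamodel>
    <state id="running">
      <!-- the fusion engine posts its combined interpretation as an event;
           the IM records it on the blackboard for fission to consult -->
      <transition event="fusion.result">
        <assign location="lastCommand" expr="_event.data.interpretation"/>
      </transition>
    </state>
  </scxml>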
Comments:
Anything you want to add
Note:"UC-18 smart remote for appliances" has been merged with this use case.
2.1.3 UC-12: Smart Car Platform
Submitter(s):Kaz, Dirk (Harman)
Reviewer(s):
Tracker Issue ID: @@@
Category: 1. gap in existing specifications
Class:Various MCs
Status:
Dirk would be the best person to elaborate on this.
Description:
A car navigation system, HTML5-based IVI, smartphones and sensor devices within the car are controlled using MMI lifecycle events and a (JSON version of) EMMA over WebSocket; the use cases are navigation, entertainment and telephony
There is an open source approach called GENIVI (Generic In-Vehicle Infotainment). They have developed a vocabulary component that could be considered the Data Component in the MMI Architecture. For the GUI part of the interaction the GENIVI Alliance uses Qt or HTML5.
MMI lifecycle events could be used to integrate multiple devices and modalities within the car (and outside the car); see the sketch below
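For illustration, here is a minimal sketch of the kind of MMI startRequest lifecycle event an in-car IM might deliver to an HTML5-based IVI modality component over WebSocket; all URIs, context and request IDs are hypothetical, and the JSON serialization mentioned above would carry the same fields:

  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <!-- hypothetical: the in-car IM asks the IVI head unit to load the
         navigation UI -->
    <mmi:startRequest source="ws://car.example.com/im"
                      target="ws://car.example.com/mc/ivi-display"
                      context="ctx-navigation-1"
                      requestID="req-1">
      <mmi:contentURL href="http://car.example.com/ui/navigation.html"/>
    </mmi:startRequest>
  </mmi:mmi>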
Motivation:
Most current car architectures are based on GENIVI, employing Qt for the GUI part of the interaction and FRANCA (CommonAPI) for communication. The current architecture should be extended to enable interaction with components designed according to the W3C MMI architecture. This aims at an easy extension of existing deployments with support for additional modalities, devices and sensors within and outside the car.
An interaction manager would take care of the business logic and access the existing components like the HMI, Vehicle Interface, Phone, ...
FRANCA may be used to deliver MMI lifecycle events.
Dependencies:
Possible related standardization activities
The Automotive Working Group is developing an approach to low-level interaction in the vehicle with Web Sockets.
The GENIVI Alliance is aiming for a reusable IVI platform, a flexible but standard reference architecture that may serve as a blueprint for building a full IVI solution
Gaps
Existing standards like GENIVI are not prepared to be enhanced by additional modalities. However, the combination of Qt and FRANCA for automotive apps is somewhat similar to HTML5 and SCXML for Web apps
The GENIVI Alliance has started to consider the combination of HTML5 and JavaScript for IVI apps
A consistent multimodal dialog concept is missing to allow for a seamless user experience across modalities.
What needs to be standardized:
Mapping of existing components to Modality Components and Interaction Managers as well as messaging concepts
Introduction of a topmost Interaction Manager to allow for a consistent user experience across modalities
Addition of new, so far unknown, modality components into the overall concept
Comments:
This effort should work closely with the related standardization activities to increase acceptance of the suggested solutions.
2.1.4 UC-19 Handling millions of components
Submitter(s):Kaz, Debbie, Helena
Reviewer(s): Masahiro, Shinya
Tracker Issue ID: @@@
Category: @@@
Class:Platform level
Status:
Helena would be the best person to elaborate on this.
Description:
everyone could have a personal user agent modality component
the IM could handle millions of devices
typical example is having a whole remote orchestra
car rental agency needs to handle many users and many cars
there could be an IM for the car rental company as well as an IM for each car
the car rental agency IM could talk with the user agent component for each user
the user agent could tell the car rental agency what the user's preferences are (this user is very tall, etc.)
a university could interact with the user agents for each student and staff
the car could interact with the parking lot manager about available spaces
lots of privacy and security issues
there might be many clients that don't have a global view; in that case there would be a higher-level manager
Motivation:
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
Are there any problems with scaling?
Requirements
need a specific time keeper (time code generator) like the conductor of an orchestra
need different levels of accuracy for time management depending on each use case
if there are slow MCs, need to delay other MCs to synchronize all the MCs
The IM must be able to handle millions of devices, each of which has an IM itself. We could use a complex MC for this, but we don't have any examples. The devices might be grouped into families or clusters; we need a way to cluster them, which could be done with complex components, but we need an example (see the sketch below).
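As a hedged sketch of one way to cluster, each cluster's internal IM could present itself to the top-level IM as a single (complex) modality component, so the two sides exchange ordinary lifecycle events; every URI and ID below is hypothetical:

  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <!-- hypothetical: the rental agency's top-level IM starts a session with
         one car; the car's own IM acts as a complex MC wrapping its devices -->
    <mmi:startRequest source="http://rental.example.com/im"
                      target="http://cars.example.com/car-42/im"
                      context="ctx-rental-42"
                      requestID="req-100"/>
  </mmi:mmi>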
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.1.5 UC-19 Handling millions of components
Note: There are two "UC-19" sections; they may need to be merged.
Submitter(s):Kaz, Debbie, Helena
Category: 2. Does this use case have implications for discovery and registration?
Class:Various MCs
Status: Submitted for comments.
Objects
Functions of Objects:

  Object (description)                                            | Identify | Track | Accountability
  User Devices: multiple user devices.                            | @        | @     |
  Master Display: the display to execute the current application. |          |       |
  Master Haptics Actuator: the haptics actuator used in the current application. | | |
  Master Audio: the component rendering the audio content.        |          |       |
Actions
The system prepares: it searches for devices in the network.
Each device is registered.
Interaction occurs (i.e., the application loads).
Depending on the devices available, the application displays the visual content on a Master Display chosen by the system.
Depending on the devices available, the application renders the audio content on a Master Audio component chosen by the system.
Depending on the devices available, the application executes haptic content on a Master Haptics Actuator chosen by the system.
Description:
Motivation:
Dependencies:
Viewpoint:
Home Automation, Entertainment, Medical environments.
What needs to be standardized:
Registration and Discovery: the behavior of the Resources Manager's indexing processes to store device addresses and capabilities.
Comments:
No comments for the moment.
2.1.6 UC-22 Smart Hotel Room
Submitter(s):Debbie Dahl
Tracker Issue ID:
Category:
Class:Various MCs
How would you categorize this issue?
gap in existing specifications (=> IG to draft a proposal for an existing WG), or how does this work in existing architecture?
what implications does this UC have for discovery and registration?
Discovery and Registration functions are very important for this use case:
Users may be controlling the room by means of a mobile device, which must be able to find and make use of the room's services
Although hotel room functions are very similar to each other (lighting, HVAC, entertainment, for example) there are often differences (new services) so that an application needs to be able to adapt itself to whatever capabilities the room offers.
require new specification/WG: This can be addressed through existing specifications (EMMA, MMI Architecture, possibly ARIA), additional work on early stage specifications in progress (Discovery and Registration), existing work on security and privacy, and possibly new work on biometrics (for user authorization).
can be addressed as part of a guidelines document to be produced by the IG (=> The proponent should draft some text for the document)
The requirements can be partially addressed through the existing MMI Architecture specification.
Actions
Control a hotel room with a natural UI that takes user preferences into account (hotel guest -> hotel room)
Functions
Identify: the guest (authorized guest, etc.); make sure that the user is authorized to enter and control the room
Control: lighting, HVAC, entertainment, surveillance camera, locks, notify housekeeping, any other room functions
Description:
high level description/overview of the goals of the use case
This use case describes a Smart Hotel Room, which authorizes an arriving guest to control the room and access its capabilities. Because the guest will only be staying in the room a short time and doesn't have time to learn a special GUI interface, a natural (speech or language) user interface is important. In addition, because rooms vary in their capabilities, a discovery process is essential, so that the user can become aware of the room's capabilities. Security and privacy are important for keeping the guest safe. Also, levels of authorization will be needed. For example, a child guest might be allowed to control the lights, but not the HVAC. Finally, note that this use case is closely related to UC-20 on emergency notifications. An application that allows a user to control a hotel room could also offer emergency notifications.
Schematic Illustration (devices involved, work flows, etc) (Optional)
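In place of a diagram, here is a minimal, hypothetical sketch of how the guest's mobile device, acting as a modality component, might join the room's interaction session using the MMI newContextRequest event; the IM would answer with a newContextResponse carrying the new context identifier. All URIs are invented:

  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <!-- hypothetical: the guest's phone asks the room's IM to open a session;
         authorization checks would happen before the IM responds -->
    <mmi:newContextRequest source="http://guest.example.com/phone"
                           target="http://hotel.example.com/room-1203/im"
                           requestID="req-1"/>
  </mmi:mmi>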
Motivation:
Reduces user frustration when users enter a new environment, especially when there are accessibility issues; for example, guests with limited mobility might find it difficult to use some physical controls.
Why were you not able to use only existing standards to accomplish this?
There needs to be additional work on discovery and registration and on security and privacy to make this use case possible.
Dependencies:
other use cases, proposals or other ongoing standardization activities which this use case is dependent on or related to
What needs to be standardized:
What are the new requirements, i.e. functions that don't exist, generated by this use case?
Comments:
any relevant comments that are worth tracking for this use case
2.1.7 UC-23 Collaborative content sharing in a smart space
Submitter(s):Branimir (Wacom)
Category: 2. Does this use case have implications for discovery and registration?
Class:Various MCs
Status:
Who would work on this use case, progress, etc.
Objects
Functions of Objects:

  Object (description)                                  | Identify | Track | Accountability
  User Device: one or multiple user devices.            | @        | @     |
  Master Device: the device of the moderator.           | @        | @     |
  Master Display: the display to share visual content.  |          |       |
  Master Audio: to share audio content.                 |          |       |
Actions
The system prepares: it loads its screen, loads audio facilities, prepares monitoring of devices, and defines limits of range (room boundaries).
Master device registers.
Each participant registers.
Interaction occurs. Several kinds of interaction are possible: user-initiated, moderator-initiated, and interruptions from the user, the moderator or the system.
A participant leaves, or all the participants leave the room.
The system goes to stand-by mode.
Description:
Students in a classroom each have a mobile device on which they can take inked or typed notes, and they would like to share their work with the class on a shared screen.
The content can be created and edited in realtime.
The use case must cover both moderated ("authoritarian") models and collaborative ones. A moderator may be needed in classroom or large-meeting configurations.
Motivation:
Collaborative workspaces should be enhanced: frictionless collaborative workspaces would save time in conferences and classrooms.
Web standards do not provide a way to seamlessly register or unregister devices. Ink is not a full-fledged modality in collaborative working cases. Ink is a slower way of communicating than typing, but creative expression is constricted by inputs like typing. We don't yet have web standards for discovery and registration.
Dependencies:
The Second Screen work in the Web and TV Interest Group, the Web-based Signage Business Group, the Web of Things Interest Group, ECHONET, DLNA (UPnP).
Viewpoint:
Education, Home Automation, Medical and Business environments.
What needs to be standardized:
Registration and Discovery: the behavior of the Resources Manager (using SCXML) to monitor device presence or absence (see the sketch below).
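As a hedged sketch of such monitoring, the Resources Manager could run one small SCXML machine per device; the event names and the 10-second timeout below are illustrative only:

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="absent">
    <state id="absent">
      <transition event="device.heartbeat" target="present"/>
    </state>
    <state id="present">
      <onentry>
        <!-- (re)start the absence timer on every heartbeat -->
        <send event="device.timeout" id="t1" delay="10s"/>
      </onentry>
      <transition event="device.heartbeat" target="present">
        <cancel sendid="t1"/>
      </transition>
      <transition event="device.timeout" target="absent"/>
    </state>
  </scxml>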
Comments:
any relevant comments that are worth tracking for this use case
2.2 Other use cases
2.2.1 UC-1: Synchronizing video stream and HMI (e.g., remote surgery)
The ability to synchronize (1) a video stream using holography and (2) an advanced HMI like a robot arm for remote surgery.
That would require precise synchronization for realtime interaction. It would also be nice to have a 3D display and an HMI to control the image from any angle using intuitive operations, regardless of the user's knowledge of the system.
A typical example is the smart HMI from the famous movie "Minority Report".
It is expected to handle various devices provided by multiple vendors.
Motivation:
This is being requested by the TV industry.
It may be possible to achieve this with existing web standards but this should be investigated.
Dependencies:
There are other activities going on in the W3C related to this use case. For example, Multi-Device Community Group, Web and TV Interest Group, Second Screen Working Group and possibly the Web of Things IG as well. It may be the case that these other efforts will be able to support the required functionality, but if there needs to be any MMI work, we should coordinate with the other groups. Also there is a proposal that there could be an HTML5 new features community group, and we should coordinate with that.
2.2.2 UC-3: Wearable devices
Submitter(s):Kaz
Reviewer(s): Shinya, Yasuhiro
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
Abstract:
controlling wearable devices (like a sphygmomanometer or game devices) using a multimodal interface such as voice and gesture
may get connected with the home network when the user comes home
may interact with a head-mounted display like Google Glass
doctors might want to use them
controlling devices, e.g., musical instruments, by eye tracking or brain waves
There are various wearable devices, but currently the usage of wearable devices depends on identification and personalization using vendor-specific account information, e.g., a Google account or Apple ID. It also depends on the smartphone's connectivity outside the home.
Possible technology areas to address:
There is a question of how to identify and manage the capability of each wearable device and what kind of interaction could take place between users and devices.
MMI could be used to identify and manage devices and interactions.
Given that, what are the key requirements to solve these issues:
making devices on different platforms interact with each other
a smarter mechanism to identify multiple devices and let them interact with each other without each developer specifying device capabilities explicitly
MMI integrates various user interface modalities (and the devices which provide those modalities) for WoT purposes, and wearable technology incorporates computers and advanced UI technologies into clothing and accessories, so wearable technology is a good use case for MMI.
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.3 UC-4: MMI output devices: Controlling 3D Printers using a remote HMI
Submitter(s):Kaz
Reviewer(s): @@@
Tracker Issue ID: @@@
Category: @@@
Class:Chunk of MC
Description:
interaction with a 3D printer or vending machine using a multimodal interface such as voice and gesture
controlling a 3D printer using multiple modalities including gesture and speech
may be applied to an automatic cooker
this is an output device for MMI
Motivation:
@@@
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.4 UC-5: MMI input devices: Video Cameras work with HMI and sensors
Submitter(s):Kaz
Reviewer(s): @@@
Tracker Issue ID: @@@
Category: @@@
Class:Chunk of MC
Description:
interaction with a video camera, e.g., a back-view monitor for cars, using a multimodal interface such as voice and gesture
a video camera outside the entrance detects intruders and lets us know via an HMI
a video camera could be installed on a drone and controlled using gesture, speech, etc.
Motivation:
@@@
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.5 UC-6: Multimodal Guide Device at Museums
Submitter(s):Kaz
Reviewer(s): Shinya, Masahiro, Kosuke
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
not only audio but any kind of modality can be used, e.g., input from the user via gesture, speech, or shaking the device
interaction between the user and the device is the key
the location and orientation are identified, and an appropriate service is provided
Motivation:
@@@
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
This UC could be applied to other places/scenarios than museums.
2.2.6 UC-8: MIDI-based Speech Synthesizer
Submitter(s):Kaz
Reviewer(s): Masahiro, Shinya, Kosuke
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
control a MIDI-based voice generator with a prosody generator to make it a speech synthesizer, using a multimodal interface
not only speech synthesizers but also music synthesizers, music playback devices and factory sirens could be included
the following is a list of example input modalities: gesture, movement of body parts (lips, eyes, eyebrows, winking), humming, a wearable suit
this use case reminds us of a possible "semantic interpretation for MMI" which would allow us to specify the semantics of the input
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.7 UC-9: Collaboration of multiple video cameras
Submitter(s):Kaz
Reviewer(s): @@@
Tracker Issue ID: @@@
Category: @@@
Class:Platform level
Description:
radio-controlled cars and helicopters work collaboratively with each other
each of them has a video camera
need to identify their position and direction
Motivation:
@@@
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
this would be better merged with UC-5
2.2.8 UC-10: Geolocation device, e.g., GPS, as an MC for Location-based Services
Submitter(s):Kaz
Reviewer(s): Kosuke
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
@@@
Motivation:
@@@
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.9 UC-11: Smart Power Meter
Submitter(s):Kaz
Reviewer(s):
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
smart power meter
Motivation:
@@@
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.10 UC-13: MMI Automatic visual data annotator
Submitter(s):Helena
Reviewer(s):
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
@@@
Motivation:
In image recognition systems, the coordination between a semantic web inference engine and the low-level attribute recognition services can be synchronized by using MMI lifecycle events.
MMI lifecycle events could be used to integrate complementary and concurrent services within a recognition process
Dependencies:
List of possible related standardization activities...
Gaps
Currently it is very complex to produce well-synchronized semantic fusion for multimodal recognition in fashion media.
The recognition process is highly specialized in vision techniques and lacks the inference strength provided by the high-level view coming from semantic web ontologies
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.11 UC-14: Multimodal e-Text
Submitter(s):Kaz
Reviewer(s):
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
Multiple modalities to be used to access the contents for e-Learning
MMI Architecture could be used to synchronize those multiple modalities.
Motivation:
Textbooks for e-Learning mainly use text and graphics, via, e.g., HTML, CSS, SVG and MathML.
However, from the viewpoint of accessibility and better understanding for learners, it would be better to have a multimodal interface, e.g., speech recognition/synthesis and handwriting/gesture recognition, for accessing the e-Learning content.
Dependencies:
HTML5, CSS
SMIL, SSML, SRGS/SISR, PLS
Second Screen, Multi-device timing
Annotation
Gaps
TBD
What needs to be standardized:
New features? APIs? data model? language?
Comments:
(nothing so far)
2.2.12 UC-15: English standardized tests through an MMI interface. OPENPAU project
Description:
Multiple modalities have been used to access the contents for second-language-acquisition e-learning.
The MMI Architecture has been used to synchronize multiple modalities of input navigation.
Motivation:
Specific developments to improve the accessibility of educational content for language-learning environments by using MMI interfaces via keyboard, voice and touchscreen.
User interfaces oriented toward conducting tests that allow the use of integrated MMI navigation.
Comments:
2.2.13 UC-16: Remote watching using video camera and MMI interfaces
Submitter(s):Kaz
Reviewer(s): Masahiro, Kosuke, Shinya
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
watching factories and homes remotely using sensors and video cameras
the output could be a notification in the Web browser on our mobile devices, or simply sirens/audio signals at the factory or home
should be useful for supporting aged people, children, etc.
can use both vendor-proprietary sensors and standard ones
there is a question of how to integrate more than one factory or home at once
it may be good to have a meta-IM at the headquarters of the company which communicates with the sub-IMs of all the company's factories
this discussion raises the question of what in the home should be the IM, and how all the devices inside the home should be integrated
maybe we could have sub-IMs for each room which handle the devices within their assigned rooms
that implies each IM needs to talk with the others (via the main IM of the home) to check which device is located in which room, and to negotiate with each other to recognize when a device like a tablet TV is moved from the living room to the bedroom (see the sketch below)
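As a hedged illustration of the meta-IM/sub-IM relationship, the existing MMI statusRequest event already lets one component ask another for ongoing status updates; the URIs below are hypothetical:

  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <!-- hypothetical: the headquarters meta-IM asks a factory sub-IM to keep
         reporting its status via statusResponse events -->
    <mmi:statusRequest source="http://hq.example.com/im"
                       target="http://factory-3.example.com/im"
                       requestID="req-7"
                       requestAutomaticUpdate="true"/>
  </mmi:mmi>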
Motivation:
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.14 UC-17: User interface as a sensor
Submitter(s):Helena
Reviewer(s):
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
Many smart pictures are on the web, and each one is a sensor that detects user interaction
It's necessary to detect a click on the controller, the brand name, the scrolling on the page and the suggestion
It's important for the link to be correct
It's important to detect and record scrolling because it means that the user is reading, so the MC will store the event and the position
There are also passive sensors. A smart picture MC can be created
Motivation:
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
Requirements
be able to update data on the modality component
be able to update data on the IM
needs a "suspended state"
need for a virtualization process to complete the sensing process
use of similarity data
bidirectional communication between the Resources Manager and the Modality Components
interface changes managed by the system (the Resources Manager) rather than by user input (the MC); see the sketch below
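As a purely illustrative sketch, a smart-picture MC could report a recorded scroll event to the IM (or Resources Manager) with an MMI extensionNotification; the event name, URIs and data payload are all invented:

  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <!-- hypothetical: the MC stores the scroll event and position, then
         notifies the IM so the system can infer that the user is reading -->
    <mmi:extensionNotification source="http://site.example.com/mc/picture-17"
                               target="http://site.example.com/im"
                               context="ctx-session-9"
                               requestID="req-23"
                               name="scrollObserved">
      <mmi:data>
        <position>1240</position>
        <timestamp>2015-06-01T10:15:30Z</timestamp>
      </mmi:data>
    </mmi:extensionNotification>
  </mmi:mmi>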
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.15 UC-20 Emergency evacuation information and notification system
Submitter(s):Kaz, Debbie
Reviewer(s):
Tracker Issue ID: @@@
Category: @@@
Class:Various MCs
Description:
people might have different preferences for getting information on fire evacuation
consider a big apartment building or hotel versus a small house with two floors
possibly multiple places on fire within a town
barrier-free safety
would apply to many kinds of buildings: houses, apartments, office buildings, hotels or other public spaces
or, even more generally, you want the system to be aware of where the fire is, for example, and route people away from it
could be more general than fire
there are many relevant sensors from the WoT group, e.g., camera, heat, water, or something for earthquakes, plus smoke detectors, all reporting their state to the IM, which would notify the users
a friendly robot could help with this
it could help you physically or tell you things
could help children or other people who can't use a smartphone
could talk with the home server and ask the server to open the door
the robot could talk with the parents first
Motivation:
Dependencies:
List of possible related standardization activities...
Gaps
What can't be done with the existing mechanism?
Are there any problems with scaling?
Requirements
What needs to be standardized:
New features? APIs? data model? language?
Comments:
Anything you want to add
2.2.16 UC-21 Collaborative session by remote players
Submitter(s):Kaz, Masahiro, Shinya
Reviewer(s):
Tracker Issue ID: @@@
Category: @@@
Class:Platform level
Description:
multiple players collaboratively play a musical session
piano player in US
guitar player in France
singer in Japan
a computer-based musical instrument could be included