Dirk Schnelle-Walka; Deborah Dahl, Conversational Technologies
Copyright © 2019-2020 the Contributors to the Voice Interaction Community Group, published by the Voice Interaction Community Group under the W3C Community Contributor License Agreement (CLA). A human-readable summary is available.
This document describes a general architecture of Intelligent Personal Assistants and explores the potential for standardization. It is meant to be a first structured exploration of Intelligent Personal Assistants, identifying the components and their tasks. Subsequent work is expected to detail the interaction among the identified components, their actual tasks, and how they ought to perform them. This document may need to be updated if changes result from that detailing work.
This specification was published by the Voice Interaction Community Group. It is not a W3C Standard nor is it on the W3C Standards Track. Please note that under the W3C Community Contributor License Agreement (CLA) there is a limited opt-out and other conditions apply. Learn more about W3C Community and Business Groups.
Comments should be sent to the Voice Interaction Community Group public mailing list (public-voiceinteraction@w3.org), archived at https://lists.w3.org/Archives/Public/public-voiceinteraction
Intelligent Personal Assistants (IPAs) are already available in our daily lives through our smart phones. Apple’s Siri, Google Assistant, Microsoft’s Cortana, Samsung’s Bixby and many more are helping us with various tasks, like shopping, playing music, setting schedules, sending messages, and offering answers to simple questions. Additionally, we equip our households with smart speakers like Amazon’s Alexa or Google Home to have an IPA available without the need to pick up a dedicated device, or even to control household appliances in our homes. As of today, there is no interoperability between the available IPA providers. Interoperability is especially unlikely to happen for exchanging learned user behaviors.
This document describes a general architecture of IPAs and explores the potential areas for standardization. It focuses on voice as the major input modality. However, the overall concept is not restricted to that, but also covers purely text-based interactions with so-called chatbots as well as interaction using multiple modalities. Conceptually, the authors define the execution of actions in the user's environment, like turning on the light, as a modality. This means that components that deal with speech recognition, natural language understanding or speech synthesis will not necessarily be available in these deployments. In the case of chatbots, they will be omitted. In the case of multimodal interaction, they may be extended by components that recognize input from the respective modality and transform it into something meaningful, and vice versa generate output in one or more modalities. Some modalities may be used as output-only, like turning on the light, while other modalities may be used as input-only, like touch.
Currently, users mainly use the IPA Provider that is shipped with a certain piece of hardware. Thus, the selection of a smart phone manufacturer actually determines which IPA implementation they use. Switching among different IPA providers also involves switching manufacturers, which incurs high costs and requires getting used to another way of operation that comes with the UX of the selected manufacturer. On the one hand, users should have more freedom in selecting the IPA implementation they want: they are bound to the services available in that implementation rather than those they might prefer. On the other hand, IPA providers, which are mainly producing software, must also function as hardware manufacturers to be successful. Additionally, manufacturers have to take care of porting existing services to their platform. Standardization would clearly lower the effort needed for that and thus reduce costs.
In order to explore this, a typical usage scenario is described in the following section.
This section describes potential usages of IPAs.
A user would like to plan a trip to an international conference and she needs visa information and airline reservations. She will give the intelligent personal assistant her visa information (her citizenship, where she is going, purpose of travel, etc.) and it will respond by telling her the documentation she needs, how long the process will take and what the cost will be. This may require the personal assistant to consult with an auxiliary web service or another personal assistant that knows about visas.
Once the user has found out about the visa, she tells the PA that she wants to make airline reservations. She specifies her dates of travel and airline preferences and the PA then interacts with her to find appropriate flights.
A similar process will be repeated if the user wants to book a hotel, find a rental car, or find out about local attractions in the destination city. Booking a hotel as part of attending a conference could also involve finding out about a designated conference hotel or special conference rates.
Roles like user, developer, and IPA supplier will be added in a future version of this document.
In order to cope with use cases such as those described above, an IPA may need to make use of several services that define the capabilities of the IPA. These services may be selected from a standardized market place. For the remainder of this document, we consider an IPA that is extendable via such a market place. This kind of IPA features the architectural building blocks shown in the following figure.
This architecture comprises three layers that are detailed in the following sections.
Actual implementations may want to distinguish more than these layers.

Clients enable the user to access the IPA via voice with the following characteristics.
A general IPA Service API that mediates between the user and the overall IPA system. The service layer may be omitted in case the IPA Client communicates directly with Dialog Management. However, this is not recommended as it may contradict the principle of separation of concerns. It has the following characteristics.
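To illustrate the mediating role of the service layer, the following is a minimal sketch in Python. All class and method names (`IPAService`, `DialogManagement`, `handle`) are illustrative assumptions, not part of any proposed standard; the point is only that the client talks to the service, never to Dialog Management directly.

```python
# Hypothetical sketch: an IPA Service layer mediating between an IPA Client
# and Dialog Management (separation of concerns). Names are assumptions.
from dataclasses import dataclass


@dataclass
class ClientRequest:
    session_id: str
    utterance: str          # text from the client (or an ASR hypothesis)


@dataclass
class ServiceResponse:
    session_id: str
    prompt: str             # text to be rendered to the user (e.g. via TTS)


class DialogManagement:
    """Placeholder for the Dialog Management component."""
    def process(self, session_id: str, utterance: str) -> str:
        return f"You said: {utterance}"


class IPAService:
    """Mediates between the user-facing client and Dialog Management,
    so the client never depends on Dialog Management internals."""
    def __init__(self, dialog_management: DialogManagement):
        self._dm = dialog_management

    def handle(self, request: ClientRequest) -> ServiceResponse:
        prompt = self._dm.process(request.session_id, request.utterance)
        return ServiceResponse(request.session_id, prompt)


service = IPAService(DialogManagement())
response = service.handle(ClientRequest("s1", "book a flight"))
```

Because the client only sees `IPAService`, the Dialog Management implementation behind it can be replaced without touching any client code.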
A component that receives semantic information determined from user input, updates its internal state, decides upon subsequent steps to continue a dialog and provides output, mainly as synthesized or recorded utterances. It has the following characteristics.
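The turn-taking behavior described above can be sketched as follows. This is a toy, rule-based stand-in with assumed intent and slot names (`book_flight`, `destination`); it only demonstrates the receive-intent, update-state, decide-next-step cycle.

```python
# Illustrative sketch of one Dialog Management turn: consume an intent with
# its entities, update internal dialog state, decide on the next prompt.
# Intent and slot names are assumptions for demonstration only.
class DialogManager:
    def __init__(self):
        self.state = {}  # internal dialog state, e.g. collected slots

    def next_turn(self, intent: str, entities: dict) -> str:
        self.state.update(entities)          # update internal state
        if intent == "book_flight":
            if "destination" not in self.state:
                return "Where would you like to fly to?"
            return f"Searching flights to {self.state['destination']}."
        return "Sorry, I did not understand that."


dm = DialogManager()
prompt1 = dm.next_turn("book_flight", {})                        # slot missing
prompt2 = dm.next_turn("book_flight", {"destination": "Tokyo"})  # slot filled
```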
The Automated Speech Recognizer (ASR) receives audio streams of recorded utterances and generates a recognition hypothesis as text strings. Conceptually, ASR is a modality recognizer for speech. It has the following characteristics
The Text-to-Speech (TTS) component receives text strings, which it converts into audio data. Conceptually, the TTS is a modality specific renderer for speech. It has the following characteristics.
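The interfaces of these two speech-modality components can be sketched as mirror images of each other. The toy "codec" below simply treats bytes as UTF-8 text so the sketch stays runnable; real components would operate on actual audio streams and models.

```python
# Hedged sketch of the speech modality pair: ASR as modality recognizer,
# TTS as modality renderer. The byte/text round trip is a stand-in for
# real audio processing.
class ASR:
    """Modality recognizer for speech: audio in, recognition hypothesis out."""
    def recognize(self, audio: bytes) -> str:
        # A real recognizer decodes an audio stream into a text hypothesis.
        return audio.decode("utf-8")


class TTS:
    """Modality renderer for speech: text in, audio data out."""
    def synthesize(self, text: str) -> bytes:
        # A real synthesizer generates waveform audio for the text.
        return text.encode("utf-8")


asr, tts = ASR(), TTS()
hypothesis = asr.recognize(b"turn on the light")
audio = tts.synthesize("The light is on.")
```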
The Core Dialog is able to handle basic functionality via Core Intent Sets to enable interaction with the user at all. This includes, among others,
Conceptually, the Core Dialog is a special Dialog as described in the following section that is always available.
A Dialog is able to handle functionality that can be added to the capabilities of the Dialog Management through its associated Intent Sets. The Dialogs must serve different purposes in the sense that they are unique for a certain task. E.g., only a single flight reservation dialog may exist at a time. Dialogs have the following characteristics.
A Core Intent Set usually identifies tasks to be executed and defines the capabilities of the Core Dialog. Conceptually, the Core Intent Sets are Intent Sets that are always available.
Intent Sets define actions along with their parameters that can be consumed by a corresponding Dialog and have the following characteristics.
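One possible representation of an Intent Set is a mapping from intent names to the parameters a corresponding Dialog can consume. The concrete shape below (a plain dictionary with a `parameters` list) and the intent names are assumptions for illustration; no serialization format has been standardized.

```python
# Assumed representation: an Intent Set maps intent names to the parameters
# (entities) that the corresponding Dialog can consume.
flight_intent_set = {
    "book_flight": {
        "parameters": ["origin", "destination", "departure_date", "airline"],
    },
    "cancel_flight": {
        "parameters": ["booking_reference"],
    },
}


def can_handle(intent_set: dict, intent: str, entities: dict) -> bool:
    """Check whether an intent with the given entities fits the Intent Set."""
    if intent not in intent_set:
        return False
    allowed = set(intent_set[intent]["parameters"])
    return set(entities) <= allowed
```

A Dialog Management component could use such a check to decide whether a Dialog is able to consume a given intent at all before dispatching it.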
A Dialog X is able to handle functionality that can be added to the capabilities of the Dialog Manager through its associated Intent Set X. Dialogs X extend the Core Dialog and add functionality as custom Dialogs. The Dialogs X must serve different purposes in the sense that they are unique for a certain task. E.g., only a single flight reservation dialog may exist at a time. They have the same characteristics as a Dialog.
An Intent Set X is a special Intent Set that identifies tasks that can be executed within the associated Dialog X.
The Dialog Registry manages all available Dialogs with their associated Intent Sets.
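A minimal sketch of such a registry is shown below, including the uniqueness constraint stated above (only one Dialog per task, e.g. a single flight reservation dialog). The class and method names are illustrative assumptions.

```python
# Assumed sketch of a Dialog Registry that manages Dialogs with their
# associated Intent Sets and enforces one Dialog per task.
class DialogRegistry:
    def __init__(self):
        self._dialogs = {}  # task name -> (dialog, intent set)

    def register(self, task: str, dialog, intent_set: set) -> None:
        if task in self._dialogs:
            raise ValueError(f"a dialog for task '{task}' already exists")
        self._dialogs[task] = (dialog, intent_set)

    def lookup(self, intent: str):
        """Return the Dialog whose Intent Set contains the given intent."""
        for dialog, intent_set in self._dialogs.values():
            if intent in intent_set:
                return dialog
        return None


registry = DialogRegistry()
registry.register("flight_reservation", "FlightDialog",
                  {"book_flight", "cancel_flight"})
found = registry.lookup("book_flight")
```

In the travel use case, Dialog Management would consult `lookup` to route a recognized `book_flight` intent to the flight reservation Dialog.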
A service that provides access to all known IPA Providers. This service also maps the IPA Intent Sets to the Intent Sets in the Dialog layer. It has the following characteristics
A registry that knows how to access the known IPA Providers, i.e., which ones are available and the credentials to access them. Storing credentials must meet the security and trust considerations that are expected from such a personalized service.
An NLU (Natural Language Understanding) component that is able to extract meaning as intents and associated entities from an utterance as text strings. It has the following characteristics
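As a toy illustration of this input/output contract, the following keyword-based stand-in maps a text utterance to an intent with associated entities. Real NLU components use statistical or neural models; the rules, intent names, and the simplistic destination pattern here are assumptions for demonstration only.

```python
# Illustrative keyword-based stand-in for an NLU component: extract an
# intent and associated entities from an utterance given as a text string.
import re


def understand(utterance: str):
    text = utterance.lower()
    if "flight" in text or "fly" in text:
        entities = {}
        # Naive pattern: take a trailing "to <word>" as the destination.
        match = re.search(r"\bto (\w+)$", text)
        if match:
            entities["destination"] = match.group(1)
        return "book_flight", entities
    return "unknown", {}


intent, entities = understand("I want to fly to Tokyo")
```

The returned pair corresponds to what Dialog Management consumes as semantic information: an intent plus its entities.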
A generic Data Provider to aid the Core NLU in determining the intent.
A provider of an IPA service, like
The IPA Provider may be part of the IPA implementation itself or, alternatively, a subset of the original functionality as described below may be part of another IPA implementation.
An NLU component that is able to extract meaning as intents and associated entities from an utterance as text strings for IPA Provider X. It has the following characteristics
An Intent Set that might be returned by the Provider NLU to handle the capabilities of IPA Provider X.
A data provider to aid the Provider NLU in determining the intent. It has the following characteristics
A knowledge graph to reason about the detected input from the Provider NLU and Data Provider in order to derive more meaningful results.
This section expands on the use case above, filling in details according to the sample architecture.
A user would like to plan a trip to an international conference and she needs visa information and airline reservations.
The user starts by asking a general purpose assistant (IPA Client, on the left of the diagram) about what the visa requirements are for her situation. For a common situation, such as citizens of the EU traveling to the United States, the IPA is able to answer the question directly from one of its dialogs 1-n getting the information from a web service that it knows about via the corresponding Data Provider. However, for less common situations (for example, a citizen of South Africa traveling to Japan), the generic IPA will try to identify a visa expert assistant application from the dialog registry. If it finds one, it will connect the user with the visa expert, one of the IPA providers on the right side. The visa expert will then engage in a dialog with the user to find out the dates and purposes of travel and will inform the user of the visa process.
Once the user has found out about the visa, she tells the IPA that she wants to make airline reservations. If she wants to use a particular service, or use a particular airline, she would say something like "I want to book a flight on American". The IPA will then either connect the user with American's IPA or, if American doesn't have an IPA, will inform the user of that fact. On the other hand, if the user doesn't specify an airline, the IPA will find a general flight search IPA from its registry and connect the user with the IPA for that flight search service. The flight search IPA will then interact with the user to find appropriate flights.
A similar process would be repeated if the user wants to book a hotel, find a rental car, find out about local attractions in the destination city, etc. Booking a hotel could also involve interacting with the conference's IPA to find out about a designated conference hotel or special rates.
The general architecture of IPAs described in this document should be detailed in subsequent documents. Further work must be done to:
Component | Potentially related standards
---|---
IPA Client |
IPA Service | none
Dialog Management |
TTS |
ASR |
Core Dialog | none
Core Intent Set | none
Dialog Registry |
Provider Selection Service | none
Accounts/Authentication |
Core NLU |
Data Provider | none
The table above is not meant to be exhaustive, nor does it claim that the identified standards are suited for IPA implementations; they must be analyzed in more detail in subsequent work. The majority of them are a starting point for further refinement. For instance, the authors consider it unlikely that VoiceXML will actually be used in IPA implementations.
Out of scope of a possible standardization is the implementation inside the IPA Providers and potential interoperability among them. However, standardization eases the integration of their exposed services and may even allow services to be used across different providers. Actual IPA providers may make use of any upcoming standard to enhance their deployments as a market place of intelligent services.