This document provides implementation guidance for Verifiable Credentials.
Future versions of this document will be maintained by the Credentials Community Group. Please consult that group for the most up-to-date version of this document.
The work on this document was carried out under tight time constraints due to limitations of the W3C process and publishing deadlines. Under such conditions, errors are unavoidable and some of the ideas presented here are incomplete. The Working Group hopes that in the future, W3C process can be revised to better support the dynamic nature of standards work in a more consistent way across different groups.
Comments regarding this document are welcome. Please file issues directly on GitHub, or send them to public-vc-comments@w3.org (subscribe, archives).
This guide provides some examples and resources for implementing protocols which make use of verifiable credentials, beyond those available in the core specification.
It may be useful to first familiarize yourself with the official Use Cases document, which offers a concise collection of examples of Verifiable Credentials as they may appear in everyday life, and how they may be used.
The data model specification contains the technical details about verifiable credentials. However, the data model specification does not specify any protocols for using verifiable credentials, nor any proof formats or additional identifiers upon which such protocols may depend.
When expressing statements about a specific entity, such as a person, product, or organization, it is often useful to have an identifier for it so that others can express statements about the same thing. The verifiable credentials data model specification contains numerous examples where the identifier is a decentralized identifier, also known as a DID. An example of a DID is did:example:123456abcdef.
There is currently a proposed charter for a W3C Decentralized Identifier Working Group, which will put DIDs on track to become a W3C standard.
As of the publication of the verifiable credentials data model specification, DIDs are not necessary for verifiable credentials to be useful. Specifically, verifiable credentials do not depend on DIDs and DIDs do not depend on verifiable credentials. However, it is expected that many verifiable credentials will use DIDs and that software libraries implementing the data model specification will benefit from knowing how to resolve DIDs. DID-based URLs may be used to express identifiers associated with subjects, issuers, holders, credential status lists, cryptographic keys, and other machine-readable information associated with a verifiable credential.
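As an illustration, a credential might use a DID to identify the subject and a DID URL (a DID with a fragment) to identify the issuer's verification method. The following sketch is loosely based on the alumni example from the data model specification; the DIDs, key fragment, and claim values are illustrative only:

{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "id": "http://example.edu/credentials/58473",
  "type": ["VerifiableCredential", "AlumniCredential"],
  "issuer": "did:example:76e12ec712ebc6f1c221ebfeb1f",
  "issuanceDate": "2010-01-01T19:23:24Z",
  "credentialSubject": {
    "id": "did:example:ebfeb1f712ebc6f1c276e12ec21",
    "alumniOf": "Example University"
  },
  "proof": {
    "type": "RsaSignature2018",
    "created": "2018-06-17T10:03:48Z",
    "proofPurpose": "assertionMethod",
    "verificationMethod": "did:example:76e12ec712ebc6f1c221ebfeb1f#keys-1",
    "jws": "..."
  }
}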
Verification is the process a verifier or holder performs when presented with a verifiable presentation or verifiable credential. Verification includes checking the presented item against the core data model, and may also include validating the provided proof section and checking the item's status.
Conformant tooling that processes Verifiable Credentials will ensure that the core data model is verified when processing credentials.
There are many data verification languages; the following approach is one that should work for most use cases.
Protecting the integrity of content is an important component of verification. Verifiers need to have confidence that the content they rely on to verify credentials doesn't change without their knowledge. This content may include data schemas, identifiers, public keys, etc.
There are a number of ways to provide content integrity protection. A few of these are described in greater detail below.
Hashlink URLs can be used to provide content integrity for links to external resources.
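For example, a credential could reference an external JSON Schema using a URL that carries a hashlink as a query parameter, allowing a verifier to detect any change to the schema document. This is a minimal sketch only; the schema URL and the hl digest value are illustrative, and the JsonSchemaValidator2018 type is taken from the data model specification's examples:

{
  ...
  "credentialSchema": {
    "id": "https://example.org/schemas/address.json?hl=zQmWvQxTqbG2Z9HPJgG57jjwR154cKhbtJenbyYTWkjgF3e",
    "type": "JsonSchemaValidator2018"
  },
  ...
}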
A verifiable data registry can also provide content integrity protection. One example of a verifiable data registry which provides content integrity protection is a distributed ledger. This is a shared transaction record which provides mechanisms for verifying the content it stores. These mechanisms include consensus protocols, digital signatures, and verifiable data structures such as Merkle trees. These mechanisms provide cryptographic assurances that the content retrieved from the ledger has not been altered, and is complete.
Usage of verifiable credentials will often require referencing other credentials, embedding or attaching multiple credentials, or otherwise binding them together.
The simplest way for a credential to reference another external credential is to link to it, either directly by using its URI, or indirectly by providing a well-known ID (for example, a credential modeling an internal company Invoice may refer to its parent Purchase Order credential simply by the PO Number, relevant only within the context of this specific enterprise).
This method of linking to an external credential without using an integrity protection mechanism may be acceptable in some use cases, such as when both credentials are issued by the same entity, the verifier has a high level of trust and confidence in the issuer's security and auditing mechanisms, and the risk to the verifier is sufficiently low. However, implementers should keep in mind that although the credential that contains the reference may be integrity protected itself (by a cryptographic signature or a similar proof mechanism), the verifier has no way of knowing that the external credential being linked to has not been tampered with, unless the link itself has a content integrity protection mechanism built into it.
The recommended way of referencing an external credential from within a verifiable credential is to use a linking mechanism that cryptographically binds the contents of the target document to the URI itself. One way to accomplish this would be to use hashlinks or an equivalent URI scheme. Another mechanism would be to encode the full contents of the target credential in the URI itself, although this is much less commonly used, and the discussion of the practical limits of URI length are outside the scope of this document.
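Continuing the invoice example above, a sketch of such a link might look like the following. The InvoiceCredential type and the relatedPurchaseOrder property are hypothetical extension terms (they would need to be defined in an extension context), and the hashlink digest is illustrative:

{
  ...
  "type": ["VerifiableCredential", "InvoiceCredential"],
  "credentialSubject": {
    "id": "https://example.com/invoices/1733",
    "relatedPurchaseOrder": "https://example.com/credentials/po/4195?hl=zQmcEcnL3bGRv6D4XJyLNvyxWvEvt3CSHwBF6Z1nDFL4zcp"
  },
  ...
}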
Issuers wishing to attach additional supporting information to a verifiable credential are encouraged to use the evidence property. Note that this can be done either by embedding the relevant evidence information in the credential itself, or by referencing it (with or without an integrity protection mechanism, as previously discussed).
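For example, an issuer might embed evidence describing how the subject's documents were checked before issuance. The sketch below is loosely based on the non-normative evidence example in the data model specification; the identifier and property values shown are illustrative:

{
  ...
  "evidence": [{
    "id": "https://example.edu/evidence/f2aeec97-fc0d-42bf-8ca7-0548192d4231",
    "type": ["DocumentVerification"],
    "verifier": "https://example.edu/issuers/14",
    "evidenceDocument": "DriversLicense",
    "subjectPresence": "Physical",
    "documentPresence": "Physical"
  }],
  ...
}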
This section describes possible relationships between a subject and a holder and how the Verifiable Credentials Data Model expresses these relationships. The following diagram illustrates these relationships, with the subsequent sections describing how each of these relationships is handled in the data model.
The most common relationship is when a subject is the holder. In this case, a verifier can easily deduce that a subject is the holder if the verifiable presentation is digitally signed by the holder and all contained verifiable credentials are about a subject that can be identified to be the same as the holder.
If only the credentialSubject is allowed to insert a verifiable credential into a verifiable presentation, the issuer can insert the nonTransferable property into the verifiable credential, as described below.
The nonTransferable property indicates that a verifiable credential must only be encapsulated into a verifiable presentation whose proof was issued by the credentialSubject. A verifiable presentation that contains a verifiable credential containing the nonTransferable property, whose proof creator is not the credentialSubject, is invalid.
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.edu/credentials/3732", "type": ["VerifiableCredential", "ProofOfAgeCredential"], "issuer": "https://example.edu/issuers/14", "issuanceDate": "2010-01-01T19:23:24Z", "credentialSubject": { "id": "did:example:ebfeb1f712ebc6f1c276e12ec21", "ageOver": 21 }, "nonTransferable": true, "proof": { .. "verificationMethod": "did:example:ebfeb1f712ebc6f1c276e12ec21", ... } }
In this case, the credentialSubject property might contain multiple properties, each providing an aspect of a description of the subject, which combine together to unambiguously identify the subject. Some use cases might not require the holder to be identified at all, such as checking to see if a doctor (the subject) is board-certified. Other use cases might require the verifier to use out-of-band knowledge to determine the relationship between the subject and the holder.
{
"@context": ["https://www.w3.org/2018/credentials/v1", "https://schema.org/"]
"id": "http://example.edu/credentials/332",
"type": ["VerifiableCredential", "IdentityCredential"],
"issuer": "https://example.edu/issuers/4",
"issuanceDate": "2017-02-24T19:73:24Z",
"credentialSubject": {
"name": "J. Doe",
"address": {
"streetAddress": "10 Rue de Chose",
"postalCode": "98052",
"addressLocality": "Paris",
"addressCountry": "FR"
},
"birthDate": "1989-03-15"
...
},
"proof": { ... }
}
The example above uniquely identifies the subject using the name, address, and birthdate of the individual.
Usually verifiable credentials are presented to verifiers by the subject. However, in some cases, the subject might need to pass the whole or part of a verifiable credential to another holder. For example, if a patient (the subject) is too ill to take a prescription (the verifiable credential) to the pharmacist (the verifier), a friend might take the prescription in to pick up the medication.
The data model allows for this by letting the subject issue a new verifiable credential and give it to the new holder, who can then present both verifiable credentials to the verifier. However, the content of this second verifiable credential is likely to be application-specific, so this specification cannot standardize its contents. Nevertheless, a non-normative example is provided in the appendix.
The Verifiable Credentials Data Model supports the holder acting on behalf of the subject in at least the following ways:
- The issuer expresses the relationship between the subject and the holder (for example, parent and child) within the credentialSubject property.
- The subject passes the verifiable credential to the holder together with a new verifiable credential, issued by the subject, that expresses the handover to the holder.
- The issuer inserts the relationship of the holder to itself into the subject's credential.
The mechanisms listed above describe the relationship between the holder and the subject and help the verifier decide whether that relationship is sufficiently expressed for a given use case.
The additional mechanisms the issuer or the verifier uses to verify the relationship between the subject and the holder are outside the scope of this specification.
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.edu/credentials/3732", "type": ["VerifiableCredential", "AgeCredential", "RelationshipCredential"], "issuer": "https://example.edu/issuers/14", "issuanceDate": "2010-01-01T19:23:24Z", "credentialSubject": { "id": "did:example:ebfeb1f712ebc6f1c276e12ec21", "ageUnder": 16, "parent": { "id": "did:example:ebfeb1c276e12ec211f712ebc6f", "type": "Mother" } }, "proof": { ... } // the proof is generated by the DMV }
In the example above, the issuer expresses the relationship between the child and the parent such that a verifier would most likely accept the credential if it is provided by the child or the parent.
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.edu/credentials/3732", "type": ["VerifiableCredential", "RelationshipCredential"], "issuer": "https://example.edu/issuers/14", "issuanceDate": "2010-01-01T19:23:24Z", "credentialSubject": { "id": "did:example:ebfeb1c276e12ec211f712ebc6f", "child": { "id": "did:example:ebfeb1f712ebc6f1c276e12ec21", "type": "Child" } }, "proof": { ... } // the proof is generated by the DMV }
In the example above, the issuer expresses the relationship between the child and the parent in a separate credential such that a verifier would most likely accept any of the child's credentials if they are provided by the child or if the credential above is provided with any of the child's credentials.
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.org/credentials/23894", "type": ["VerifiableCredential", "RelationshipCredential"], "issuer": "http://example.org/credentials/23894", "issuanceDate": "2010-01-01T19:23:24Z", "credentialSubject": { "id": "did:example:ebfeb1f712ebc6f1c276e12ec21", "parent": { "id": "did:example:ebfeb1c276e12ec211f712ebc6f", "type": "Mother" } }, "proof": { ... } // the proof is generated by the child }
In the example above, the child expresses the relationship between the child and the parent in a separate credential such that a verifier would most likely accept any of the child's credentials if the credential above is provided.
Similarly, the strategies described in the examples above can be used for many other types of use cases, including power of attorney, pet ownership, and patient prescription pickup.
When a subject passes a verifiable credential to another holder, the subject might issue a new verifiable credential to the holder in which the holder is the subject of the new credential, the subject of the original verifiable credential is the issuer, and the claims repeat the relevant content of the original credential.
The holder can now create a verifiable presentation containing these two verifiable credentials so that the verifier can verify that the subject gave the original verifiable credential to the holder.
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "https://example.com/VP/0987654321", "type": ["VerifiablePresentation"], "verifiableCredential": [ { "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://pharma.example.com/credentials/3732", "type": ["VerifiableCredential", "PrescriptionCredential"], "issuer": "https://pharma.example.com/issuer/4", "issuanceDate": "2010-01-01T19:23:24Z", "credentialSubject": { "id": "did:example:ebfeb1f712ebc6f1c276e12ec21", "prescription": {....} }, "credentialStatus": { "id": "https://pharma.example.com/credentials/status/3#94567", "type": "RevocationList2020Status", "revocationListIndex": "94567", "revocationListCredential": "https://pharma.example.com/credentials/status/3" }, "proof": {....} }, { "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "https://example.com/VC/123456789", "type": ["VerifiableCredential", "PrescriptionCredential"], "issuer": "did:example:ebfeb1f712ebc6f1c276e12ec21", "issuanceDate": "2010-01-03T19:53:24Z", "credentialSubject": { "id": "did:example:76e12ec21ebhyu1f712ebc6f1z2", "prescription": {....} }, "proof": { "type": "RsaSignature2018", "created": "2018-06-17T10:03:48Z", "proofPurpose": "assertionMethod", "jws": "pYw8XNi1..Cky6Ed=", "verificationMethod": "did:example:ebfeb1f712ebc6f1c276e12ec21/keys/234" } } ], "proof": [{ "type": "RsaSignature2018", "created": "2018-06-18T21:19:10Z", "proofPurpose": "authentication", "verificationMethod": "did:example:76e12ec21ebhyu1f712ebc6f1z2/keys/2", "challenge": "c0ae1c8e-c7e7-469f-b252-86e6a0e7387e", "jws": "BavEll0/I1..W3JT24=" }] }
In the above example, a patient (the original subject) passed a prescription (the original verifiable credential) to a friend, and issued a new verifiable credential to the friend, in which the friend is the subject, the subject of the original verifiable credential is the issuer, and the credential is a copy of the original prescription.
When an issuer wants to authorize a holder to possess a credential that describes a subject who is not the holder, and the holder has no known relationship with the subject, then the issuer might insert the relationship of the holder to itself into the subject's credential.
Verifiable credentials are not an authorization framework and therefore delegation is outside the scope of this specification. However, it is understood that verifiable credentials are likely to be used to build authorization and delegation systems. The following is one approach that might be appropriate for some use cases.
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.edu/credentials/3732", "type": ["VerifiableCredential", "NameAndAddress"], "issuer": "https://example.edu/issuers/14", "holder": { "type": "LawEnforcement", "id": "did:example:ebfeb1276e12ec21f712ebc6f1c" }, "issuanceDate": "2010-01-01T19:23:24Z", "credentialSubject": { "id": "did:example:ebfeb1f712ebc6f1c276e12ec21", "name": "Mr John Doe", "address": "10 Some Street, Anytown, ThisLocal, Country X" }, "proof": { "type": "RsaSignature2018", "created": "2018-06-17T10:03:48Z", "proofPurpose": "assertionMethod", "verificationMethod": "https://example.edu/issuers/14/keys/234", "jws": "pY9...Cky6Ed = " } }
The Verifiable Credentials Data Model does not currently support either of these scenarios directly; how they might be supported is left for further study.
There are at least two different cases to consider where an entity wants to dispute a credential issued by an issuer:
- The subject of the credential disputes one or more of the claims it contains; for example, the address property is incorrect or out of date.
- An entity that is not the subject of the credential disputes a claim it contains; for example, the entity might believe another party is wrongly claiming the entity's address as its own.
The mechanism for issuing a DisputeCredential is the same as for a regular credential, except that the credentialSubject identifier in the DisputeCredential is the identifier of the disputed credential.
For example, if a credential with an identifier of https://example.org/credentials/245 is disputed, an entity can issue one of the credentials shown below. In the first example, the subject might present this to the verifier along with the disputed credential. In the second example, the entity might publish the DisputeCredential in a public venue to make it known that the credential is disputed.
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.com/credentials/123", "type": ["VerifiableCredential", "DisputeCredential"], "credentialSubject": { "id": "http://example.com/credentials/245", "currentStatus": "Disputed", "statusReason": { "@value": "Address is out of date", "@language": "en" }, }, "issuer": "https://example.com/people#me", "issuanceDate": "2017-12-05T14:27:42Z", "proof": { ... } }
{ "@context": "https://w3id.org/credentials/v1", "id": "http://example.com/credentials/321", "type": ["VerifiableCredential", "DisputeCredential"], "credentialSubject": { "id": "http://example.com/credentials/245", "currentStatus": "Disputed", "statusReason": { "@value": "Credential contains disputed statements", "@language": "en" }, "disputedClaim": { "id": "did:example:ebfeb1f712ebc6f1c276e12ec21", "address": "Is Wrong" } }, "issuer": "https://example.com/people#me", "issuanceDate": "2017-12-05T14:27:42Z", "proof": { ... } }
In the above verifiable credential, the issuer is claiming that the address in the disputed verifiable credential is wrong. For example, the subject might wrongly be claiming to have the same address as that of the issuer.
If a credential does not have an identifier, a content-addressed identifier can be used to identify the disputed credential. Similarly, content-addressed identifiers can be used to uniquely identify individual claims.
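For instance, if the disputed credential lacks an id, the disputing party might identify it by a content-addressed identifier derived from the credential's bytes. The sketch below assumes a hashlink-style identifier; the digest value is illustrative:

{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "type": ["VerifiableCredential", "DisputeCredential"],
  "credentialSubject": {
    "id": "hl:zQmWvQxTqbG2Z9HPJgG57jjwR154cKhbtJenbyYTWkjgF3e",
    "currentStatus": "Disputed",
    "statusReason": {
      "@value": "Credential contains disputed statements",
      "@language": "en"
    }
  },
  "issuer": "https://example.com/people#me",
  "issuanceDate": "2017-12-05T14:27:42Z",
  "proof": { ... }
}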
Verifiable credentials may be presented to a verifier by using a verifiable presentation. A verifiable presentation can be targeted to a specific verifier by using a Linked Data Proof that includes a domain and challenge. This also helps prevent a verifier from reusing a verifiable presentation as their own.
The domain value can be any string or URI, and the challenge should be a randomly generated string.
The following sample verifiable presentation is for authenticating to a website, https://example.com.
{ "@context": [ "https://www.w3.org/2018/credentials/v1" ], "type": "VerifiablePresentation, "verifiableCredential": { ... }, "proof": { "type": "Ed25519Signature2018", "created": "2019-08-13T15:09:00Z", "challenge": "d1b23d3...3d23d32d2", "domain": "https://example.com", "jws": "eyJhbGciOiJFZERTQSIsImI2NCI6ZmFsc2UsImNyaXQiOlsiYjY0Il19..uyW7Hv VOZ8QCpLJ63wHode0OdgWjsHfJ0O8d8Kfs55dMVEg3C1Z0bYUGV49s8IlTbi3eXsNvM63n vah79E-lAg", "proofPurpose": "authentication" } }
The JWT aud claim name refers to (i.e., identifies) the intended audience of the verifiable presentation (i.e., the verifier(s)). Consequently this is an alternative to the Linked Data Proof method specified above. It lets the holder indicate which verifier(s) it allows to verify the verifiable presentation. Any JWT-compliant verifier that is not identified in the aud claim is required to reject the JWT (see RFC 7519).
RFC 7519 defines aud as "an array of case-sensitive strings, each containing a StringOrURI value". For use in a verifiable presentation, we strongly suggest that this be restricted to a single URI value, equal to the URI of the intended verifier.
The data model specification provides no guidance on how to transform this JWT claim into a property of the verifiable presentation, nor vice versa. We strongly suggest that the aud JWT claim be mapped to the verifier property of the verifiable presentation.
{
"@context": [
"https://www.w3.org/2018/credentials/v1",
"https://www.w3.org/2019/credentials/v1.1"
],
"type": "VerifiablePresentation",
"verifiableCredential": [" ... "],
"holder": "did:example:ebfeb1f712ebc6f1c276e12ec21",
"verifier": "https://some.verifier.com"
}
{
"iss": "did:example:ebfeb1f712ebc6f1c276e12ec21",
"jti": "urn:uuid:3978344f-8596-4c3a-a978-8fcaba3903c5",
"aud": "https://some.verifier.com",
"nbf": 1541493724,
"iat": 1541493724,
"exp": 1573029723,
"nonce": "343s$FSFDa-",
"vp": {
"@context": [
"https://www.w3.org/2018/credentials/v1",
"https://www.w3.org/2018/credImpGuide/v1"
],
"type": "VerifiablePresentation",
"verifiableCredential": [" ... "]
}
}
The Verifiable Credentials Data Model is designed around an open world assumption, meaning that any entity can say anything about another entity. This approach enables permissionless innovation; there is no centralized registry or authority through which an extension author must register themselves nor the specific data models and vocabularies they create.
Instead, credential data model authors are expected to use machine-readable vocabularies through the use of [LINKED-DATA]. This implementation guide provides examples for how to express data models using a data format that is popular with software developers and web page authors called [JSON-LD]. This data format provides features that enable authors to express their data models in idiomatic JSON while also ensuring that their vocabulary terms are unambiguously understood, even by software that does not implement JSON-LD processing.
The Verifiable Credentials data model also uses a graph-based data model, which allows authors to model both simple relationships that describe one or more attributes for a single entity and complex multi-entity relationships.
The rest of this section describes how to author extensions that build on the Verifiable Credentials Data Model.
We expect the most common extensions to the Verifiable Credentials Data Model to be new credential types. Whenever someone has something to say about one or more entities and they want their authorship to be verifiable, they should use a Verifiable Credential. Sometimes there is an existing credential type, created by someone else, that can be reused to make the statements they want to make. However, there are often cases where new credential types are needed.
New credential types can be created by following a few steps. This guide will also walk you through creating an example new credential type. At a high level, the steps to follow are:
- Design the data model for the new credential type, reusing existing vocabularies where possible.
- Create a [JSON-LD] context that maps the terms used in the data model to their vocabulary URLs.
- Publish the context at a stable URL.
- Issue credentials that reference the new context and credential type.
So, let's walk through creating a new credential type, which we will call ExampleAddressCredential. The purpose of this credential will be to express a person's postal address.
First, we must design a data model for our new credential type. We know that we will need to be able to express the basics of a postal address: things like a person's city, state, and ZIP code. Of course, those items are quite US-centric, so we should consider internationalizing those terms. But before we go further, since we're using [LINKED-DATA] vocabularies, there is a good chance that commonly known concepts are already covered by a vocabulary that someone else has created and that we can leverage.
If we are going to use someone else's vocabulary, we will want to make sure it is stable and unlikely to change in any significant way. There may even be technologies that we can make use of that store immutable vocabularies that we can reference, but those are not the focus of this example. Here we will rely on the inertia that comes from a very popularly used vocabulary on the Web, schema.org. It turns out that this vocabulary has just what we need; it has already modeled a postal address and even has examples for how to express it using JSON-LD.
Please note that schema.org is developed incrementally, meaning that the definition of a term today may differ from a future definition, or the term may even be removed. Although schema.org developers encourage using the latest release, as in the simple non-versioned schema.org URLs such as http://schema.org/Place in structured data applications, there are times when more precise versioning is important. Schema.org also provides dated snapshots of each release, including both human- and machine-readable definitions of the schema.org core vocabulary. These are linked from the releases page.
For instance, instead of the unversioned URI http://schema.org/Place, you might use the versioned URI https://schema.org/version/3.9/schema-all.html#term_Place.
In addition, the schemaVersion property has been defined to provide a way for documents to indicate the specific intended version of schema.org's definitions.
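For instance, a document might pin its intended schema.org release like the following sketch. The schemaVersion property is defined by schema.org (on CreativeWork), and the release URL shown here is illustrative:

{
  "@context": "http://schema.org",
  "type": "CreativeWork",
  "schemaVersion": "https://schema.org/version/3.9/",
  "name": "Example document pinned to a schema.org release"
}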
Using the schema.org vocabulary and JSON-LD we can express a person's address like so:
{
"@context": [
"http://schema.org"
],
"type": "Person",
"address": {
"type": "PostalAddress",
"streetAddress": "123 Main St."
"addressLocality": "Blacksburg",
"addressRegion": "VA",
"postalCode": "24060",
"addressCountry": "US"
}
}
Note the above @context key in the JSON. This @context refers to a machine-readable file (also expressed in JSON) that provides term definitions [JSON-LD]. A term definition maps a key or type used in the JSON, such as address or PostalAddress, to a globally unique identifier: a URL. This ensures that when software sees the @context http://schema.org, it will interpret the keys and types in the JSON in a globally consistent way, without requiring developers to use full URLs in the JSON or in the code that may traverse it. As long as the software is aware of the specific @context used (or if it uses JSON-LD processing to transform it to some other known @context), then it will understand the context in which the JSON was written and meant to be understood. The use of @context also allows [JSON-LD] keywords such as @type to be aliased to the simpler type, as is done in the above example.
Note that we could also express the JSON using full URLs, if we want to avoid using @context. Here is what the example would look like if we did that:
{ "@type": "http://schema.org/Person", "http://schema.org/address": { "@type": "http://schema.org/PostalAddress", "http://schema.org/streetAddress": "123 Main St." "http://schema.org/addressLocality": "Blacksburg", "http://schema.org/addressRegion": "VA", "http://schema.org/postalCode": "24060", "http://schema.org/addressCountry": "US" } }
While this form is an acceptable way to express the information such that it is unambiguous, many software developers would prefer to use more idiomatic JSON. The use of @context enables idiomatic JSON without losing global consistency and without the need for a centralized registry or authority for creating extensions. Note that @context can also have more than one value. In this case, a JSON array is used to express multiple values, where each value references another context that defines terms. Using this mechanism we can first bring in the terms defined in the Verifiable Credentials Data Model specification and then bring in the terms defined by schema.org:
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "http://schema.org" ], ... "credentialSubject": { "type": "Person", "address": { "type": "PostalAddress", "streetAddress": "123 Main St." "addressLocality": "Blacksburg", "addressRegion": "VA", "postalCode": "24060", "addressCountry": "US" } }, ... }
Note, however, that each context might have a different definition for the same term, e.g., the JSON key address might map to a different URL in each context. By default, [JSON-LD] allows terms in a @context to be redefined using a last-term-wins order. While these changes can be safely dealt with by using JSON-LD processing, we want to lower the burden on consumers of Verifiable Credentials. We want consumer software to be able to make assumptions about the meaning of terms by only having to read and understand the string value associated with the @context key. We don't want them to have to worry about terms being redefined in unexpected ways. That way their software can inspect only the @context values and then be hard-coded to understand the meaning of the terms.
In order to prevent term redefinition, the [JSON-LD] @protected feature must be applied to term definitions in the @context. All terms in the core Verifiable Credentials @context are already protected in this way. The only time that an existing term is allowed to be redefined is if the new definition is scoped underneath another new term that is defined in a context. This matches developer expectations and ensures that consumer software has strong guarantees about the semantics of the data it is processing; it can be written such that it is never confused about the definition of a term. Note that consumers must determine their own risk profile for how to handle any credentials their software processes that include terms that it does not understand.
Given the above, there is at least one reason why we don't want to use the schema.org context: it is designed to be very flexible and thus does not use the @protected feature. There are a few additional reasons we want to create our own [JSON-LD] context, though. First, the schema.org context does not define our new credential type: ExampleAddressCredential. Second, it is not served via a secure protocol (e.g., https); rather, it uses http. Note that this is less of a concern than it may seem, as it is recommended that all Verifiable Credential consumer software hard code the @context values it understands and not reach out to the Web to fetch them. Lastly, it is a very large context, containing many more term definitions than are necessary for our purposes.
So, we will create our own [JSON-LD] context that expresses just those term definitions that we need for our new credential type. Note that this does not mean that we must mint new URLs; we can still reuse the schema.org vocabulary terms. All we are doing is creating a more concise and targeted context. Here's what we'll need in our context:
{ "@version": 1.1, "@protected": true, "ExampleAddressCredential": "https://example.org/ExampleAddressCredential", "Person": { "@id": "http://schema.org/Person", "@context": { "@version": 1.1, "@protected": true, "address": "http://schema.org/address" } }, "PostalAddress": { "@id": "http://schema.org/PostalAddress", "@context": { "@version": 1.1, "@protected": true, "streetAddress": "http://schema.org/streetAddress", "addressLocality": "http://schema.org/addressLocality", "addressRegion": "http://schema.org/addressRegion", "postalCode": "http://schema.org/postalCode", "addressCountry": "http://schema.org/addressCountry" } } }
The above context defines a term for our new credential type ExampleAddressCredential, mapping it to the URL https://example.org/ExampleAddressCredential. We could have also chosen a URI like urn:private-example:ExampleAddressCredential, but this approach would not allow us to serve up a Web page to describe it, if we so desire. The context also defines the terms for types Person and PostalAddress, mapping them to their schema.org vocabulary URLs. Furthermore, when those types are used, it also defines protected terms for each of them via a scoped context, mapping terms like address and streetAddress to their schema.org vocabulary URLs. For more information on how to write a JSON-LD context or scoped contexts, see the [JSON-LD] specification.
Now that we have a [JSON-LD] context, we must give it a URL. Technically speaking, we could just use a URI, for example, a private URN such as urn:private-example:my-extension. However, if we want people to be able to read and discover it on the Web, we should give it a URL like https://example.org/example-address-credential-context/v1.
When this URL is dereferenced, it should return application/ld+json by default, to allow JSON-LD processors to process the context. However, if a user agent requests HTML, it should return human-readable text that explains what the term definitions are and what they map to. Since we're reusing an existing vocabulary, schema.org, we can also simply link to the definitions of the meaning of our types and terms via their website. If we had created our own new vocabulary terms, we would describe them on our own site, ideally including machine-readable information as well.
Now we're ready for our context to be used by anyone who wishes to issue an ExampleAddressCredential!
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://example.org/example-address-credential-context/v1" ], "id": "https://example.org/credentials/1234", "type": "ExampleAddressCredential", "issuer": "https://example.org/people#me", "issuanceDate": "2017-12-05T14:27:42Z", "credentialSubject": { "id": "did:example:1234", "type": "Person", "address": { "type": "PostalAddress", "streetAddress": "123 Main St." "addressLocality": "Blacksburg", "addressRegion": "VA", "postalCode": "24060", "addressCountry": "US" } }, "proof": { ... } }
Note that writing this new credential type requires permission from no one; you must only adhere to the standards referenced above.
The Verifiable Credentials Data Model specification specifies a minimal set of JWT claim names that are to be used to represent the properties of a verifiable credential and its credentialSubject. Implementers may wish to extend a verifiable credential with some properties that are new (e.g., drivingLicenseNumber, mySpecialProperty) or that are already registered with IANA as JWT claim names (e.g., given_name, phone_number_verified).
As the data model specification states, such extension properties are best placed directly in either the JWT vc claim or the credentialSubject property of the vc claim, as appropriate, although they MAY be placed directly into their own JWT claims.
If implementers wish to use JWT claim names for these extensions, the following steps are recommended. Note that there are three types of JWT claim name: public, named with a URI; private, named with a local name; and registered with IANA. Choose the appropriate type of claim name for the extension property, then either copy the property and its value from the credentialSubject property into the JWT claim, or place it within the vc claim or its credentialSubject, as appropriate.
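As an illustration, the decoded payload of a JWT-encoded verifiable credential might carry an extension property such as drivingLicenseNumber inside the credentialSubject of the vc claim, while an IANA-registered claim name such as given_name is placed directly as its own JWT claim. This is a sketch only; the ExampleDrivingLicenseCredential type and all values shown are hypothetical:

{
  "iss": "https://example.edu/issuers/14",
  "sub": "did:example:ebfeb1f712ebc6f1c276e12ec21",
  "nbf": 1541493724,
  "given_name": "Jane",
  "vc": {
    "@context": [
      "https://www.w3.org/2018/credentials/v1",
      "https://www.w3.org/2018/credentials/examples/v1"
    ],
    "type": ["VerifiableCredential", "ExampleDrivingLicenseCredential"],
    "credentialSubject": {
      "drivingLicenseNumber": "T1234-56789"
    }
  }
}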
The JSON-LD Context declaration mechanism is used by implementations to signal the context in which the data transmission is happening to consuming applications:
{ "@context": [ "https://www.w3.org/2018/credentials/v1", "https://www.w3.org/2018/credentials/examples/v1" ], "id": "http://example.edu/credentials/1872", ...
Extension authors are urged to publish two types of information at the context URLs. The first type of information is for machines, and is the machine-readable JSON-LD Context. The second type of information is for humans, and should be an HTML document. It is suggested that the default mode of operation is to serve the machine-readable JSON-LD Context, as that is the primary intended use of the URL. If content negotiation is supported, requests for text/html should result in a human-readable document. The human-readable document should at least contain usage information for the extension, such as the expected order of URLs associated with the @context property, specifications that elaborate on the extension, and examples of typical usage of the extension.
The verifiable credentials data model is designed to be proof format agnostic. The specification does not normatively require any particular digital proof or signature format. While the data model is the canonical representation of a verifiable credential or verifiable presentation, the proving mechanisms for these are often tied to the syntax used in the transmission of the document between parties. As such, each proof mechanism has to specify whether the validation of the proof is calculated against the state of the document as transmitted, against the transformed data model, or against another form. At the time of publication, at least two proof formats are being actively utilized by implementers, and the Working Group felt that documenting what these proof formats are and how they are being used would be beneficial to other implementers.
This guide provides tables in the Benefits of JWTs and Benefits of JSON-LD and LD-Proofs sections that compare three syntax and proof format ecosystems: JSON+JWTs, JSON-LD+JWTs, and JSON-LD+LD-Proofs.
Because the Verifiable Credentials Data Model is extensible, and agnostic to any particular proof format, the specification and use of additional proof formats is supported.
The Verifiable Credentials Data Model is designed to be compatible with a variety of existing and emerging syntaxes and digital proof formats. Each approach has benefits and drawbacks. The following table is intended to summarize a number of these native trade-offs.
The table below compares three syntax and proof format ecosystems: JSON+JWTs, JSON-LD+JWTs, and JSON-LD+LD-Proofs.
Feature | JSON + JWTs | JSON‑LD + JWTs | JSON‑LD + LD‑Proofs |
---|---|---|---|
PF1a. Proof format supports Zero-Knowledge Proofs. | ✓ | ✓ | ✓ |
PF2a. Proof format supports arbitrary proofs such as Proof of Work, Timestamp Proofs, and Proof of Stake. | ✓ | ✓ | ✓ |
PF3a. Based on existing official standards. | ✓ | ✖ | ✖ |
PF4a. Designed to be small in size. | ✓ | ✖ | ✖ |
PF5a. Offline support without further processing. | ✓ | ✖ | ✖ |
PF6a. Wide adoption in other existing standards. | ✓ | ✓ | ✖ |
PF7a. No type ambiguity. | ✓ | ✖ | ✖ |
PF8a. Broad library support. | ✓ | ✖ | ✖ |
PF9a. Easy to understand what is signed. | ✓ | ✓ | ✖ |
PF10a. Ability to be used as authn/authz token with existing systems. | ✓ | ✓ | ✖ |
PF11a. No additional canonicalization required. | ✓ | ✖ | ✖ |
PF12a. No Internet PKI required. | ✓ | ✖ | ✖ |
PF13a. No resolution of external documents needed. | ✓ | ✖ | ✖ |
Some of the features listed in the table above are debatable, since a feature can always be added to a particular syntax or digital proof format. The table is intended to identify native features of each combination, such that no additional language design or extension is required to achieve the identified feature. Features that all languages provide, such as the ability to express numbers, have been omitted for brevity. Find more information about different proof formats in the next section.
A note on PF12a and PF13a above: processing JSON-LD based verifiable credentials typically involves resolving documents from the Web, such as those referenced by @context. This means that a verifiable credential system would rely on existing Internet PKI to a certain extent and cannot be fully decentralized. A JWT-based system does not need to introduce this dependency.
The Verifiable Credentials Data Model is designed to be compatible with a variety of existing and emerging syntaxes and digital proof formats. Each approach has benefits and drawbacks. The following table is intended to summarize a number of these native trade-offs.
The table below compares three syntax and proof format ecosystems: JSON+JWTs, JSON-LD+JWTs, and JSON-LD+LD-Proofs. Readers should be aware that Zero-Knowledge Proofs are currently proposed as a sub-type of LD-Proofs and thus fall into the final column below.
Feature | JSON + JWTs | JSON‑LD + JWTs | JSON‑LD + LD‑Proofs |
---|---|---|---|
PF1b. Support for open world data modelling. | ✖ | ✓ | ✓ |
PF2b. Universal identifier mechanism for JSON objects via the use of URIs. | ✖ | ✓ | ✓ |
PF3b. A way to disambiguate properties shared among different JSON documents by mapping them to IRIs via a context. | ✖ | ✓ | ✓ |
PF4b. A mechanism to refer to data in an external document, where the data may be merged with the local document without a merge conflict in semantics or structure. | ✖ | ✓ | ✓ |
PF5b. The ability to annotate strings with their language. | ✖ | ✓ | ✓ |
PF6b. A way to associate arbitrary datatypes, such as dates and times, with arbitrary property values. | ✖ | ✓ | ✓ |
PF7b. A facility to express one or more directed graphs, such as a social network, in a single document. | ✖ | ✓ | ✓ |
PF8b. Supports signature sets. | ✖ | ✖ | ✓ |
PF9b. Embeddable in HTML such that search crawlers will index the machine-readable content. | ✖ | ✖ | ✓ |
PF10b. Data on the wire is easy to debug and serialize to database systems. | ✖ | ✖ | ✓ |
PF11b. Nesting signed data does not cause data size to double for every embedding. | ✖ | ✖ | ✓ |
PF12b. Proof format supports Zero-Knowledge Proofs. | ✖ | ✖ | ✓ |
PF13b. Proof format supports arbitrary proofs such as Proof of Work, Timestamp Proofs, and Proof of Stake. | ✖ | ✖ | ✓ |
PF14b. Proofs can be expressed unmodified in other data syntaxes such as YAML, N-Quads, and CBOR. | ✖ | ✖ | ✓ |
PF15b. Changing property-value ordering, or introducing whitespace does not invalidate signature. | ✖ | ✖ | ✓ |
PF16b. Designed to easily support experimental signature systems. | ✖ | ✖ | ✓ |
PF17b. Supports signature chaining. | ✖ | ✖ | ✓ |
PF18b. Does not require pre-processing or post-processing. | ✖ | ✖ | ✓ |
PF19b. Canonicalization requires only base-64 encoding. | ✖ | ✖ | ✓ |
Some of the features listed in the table above are debatable, since a feature can always be added to a particular syntax or digital proof format. The table is intended to identify native features of each combination, such that no additional language design or extension is required to achieve the identified feature. Features that all languages provide, such as the ability to express numbers, have been omitted for brevity.
A note on PF3b above: JSON-LD provides a way to disambiguate properties by mapping them to IRIs via the @context property. JSON has no such feature.
The Verifiable Credentials Data Model is designed to be compatible with a variety of existing and emerging digital proof formats. Each proof format has benefits and drawbacks. Many proof formats cannot selectively reveal attribute values from a verifiable credential; they can only reveal all (or none).
Zero-Knowledge Proofs (ZKPs) are a proof format that enables data-minimization features in verifiable presentations, such as selective disclosure and predicate proofs.
Currently, disclosing data is an all-or-nothing process, whether online or offline. Many digital identity systems reveal all the attributes in a digital credential. The simplest method for signing a verifiable credential signs the entire credential; when presented, it fully discloses all the attributes.
Along with a full disclosure of all the attributes in a verifiable credential, standard verifiable presentations reveal the actual signature. With both the data and signature in hand, a verifier has a complete copy of the credential. Without care, this could enable the verifier to impersonate the holder. Also, since the signature is the same every time this credential is presented, the signature itself is a unique identifier and becomes PII (personally identifiable information).
It is also possible to fully disclose the attributes in a zero-knowledge verifiable credential. Unlike non-ZKP methods, zero-knowledge methods do not reveal the actual signature; instead, they only reveal a cryptographic proof of a valid signature. Only the holder of the signature has the information needed to present the credential to a verifier. This means that zero-knowledge methods provide a holder additional protection from impersonation. Because the signature is not revealed, it also cannot be used as a unique identifier.
Selective disclosure means that a holder doesn't have to reveal all of the attributes contained in a verifiable credential. This reduces the liability of handling or holding data that it is not necessary to share or collect.
Non-ZKP methods for selective disclosure often require the credential issuer to create a unique credential for each individual attribute, or for each possible combination of attributes. This quickly becomes impractical as the number of credentials, or combinations thereof, grows exponentially. Atomic credentials (which contain only a single attribute) also may not guarantee that the data is properly paired when used in a verifiable presentation. For example, a holder has two vehicle credentials, one for a 2018 Mazda with 15,000 miles and the other for a 1965 Lincoln with 350,000 miles. With atomic credentials it may be possible to claim the user has a 1965 Lincoln with 15,000 miles.
Zero-knowledge methods allow a holder to choose which attributes to reveal and which attributes to withhold on a case-by-case basis without involving the issuer. The credential issuer only needs to provide a single verifiable credential that contains all of the attributes. Each attribute is individually incorporated into the signature. This enables two options: to reveal the attribute or to prove that you know the value of the attribute without revealing it. For example, a credential with attributes for name, birthdate, and address can be used in a presentation to reveal only your name.
Non-ZKP methods implementing selective disclosure often require the cooperation of the issuer. Selective disclosure using zero-knowledge methods gives the holder personal control over what to reveal. A verifiable presentation based on zero-knowledge proof mechanisms contains only those attributes and associated values that are required to satisfy the presentation requirements.
A predicate proof is a proof that answers a true-or-false question. For example, "Are you over the age of 18?" Using non-ZKP methods, predicate proofs must be provided by the issuer as one of the attributes of a verifiable credential. This means that in order for a non-ZKP credential to be used to prove age-over-18, it would need to contain the attribute age-over-18. This credential could not be used to reveal your birthdate, unless it also included a birthdate claim. It also couldn't be used to prove age-over-25. To prove age-over-25, the holder would need to have received a credential with an age-over-25 claim.
Using zero-knowledge methods, predicate proofs can be generated by the holder at the time of presentation without issuer involvement. For example, a verifiable credential with the claim birthdate can be used in a verifiable presentation to prove age-over-18. The same credential could then be used in another presentation to prove age-over-25, all without revealing the holder's birthdate.
Verifiable credentials may need to be revocable. If an issuer can revoke a credential, verifiers must be able to determine a credential's revocation status.
Non-ZKP methods for checking revocation status may require the verifier to directly contact the issuer. Less restrictive checks could be made against a list of revoked credential identifiers posted in a public registry. The holder is required to disclose the credential identifier to the verifier so that it can be checked. The verifier is then responsible for doing the work to check revocation.
Using zero-knowledge methods, the credential identifier can be checked against a list of revoked credential identifiers without revealing the identifier. This reduces the ability of network monitors to correlate a holder's credential presentations, and removes the ability of an issuer to be made aware of the presentation of verifiable credentials they have issued.
Correlation is the ability to link data from multiple interactions to a single user. Correlation can be performed by a verifier, by issuers and verifiers working together, or by a third party observing interactions on the network. Correlation is a way to collect data about a holder without the holder's consent or knowledge. It is also a way to deanonymize private transactions. For example, a holder might use a verifiable credential to prove they are authorized to vote, then submit a secret ballot. If it is possible to correlate the holder's credential with the secret ballot, thereby linking a specific vote to a specific voter, it would be detrimental to the democratic process and could enable retaliation.
One way to reduce correlation is through data minimization, by sharing only the information required to complete a transaction. Another way to reduce correlation is to make each interaction look unique. When interactions disclose unique identifiers, an observer can link multiple interactions to a single user. Non-ZKP methods with only a single identifier per user create correlation opportunities by embedding that identifier in multiple credentials or interactions. Zero-knowledge proofs remove this linkability between interactions.
Non-ZKP methods that reveal all attributes and use unique identifiers are completely correlatable. Zero-knowledge methods enable data minimization and allow holders to have trusted interactions with verifiers without dependence on unique identifiers.
Although correlation can never be eliminated completely, the goal of zero-knowledge methods is to reduce the probability of correlation and to put control over the level of correlation into the hands of the verifiable credential holder.
Zero-knowledge methods are more complex than non-ZKP methods. Cryptographic engineers must understand complicated protocols and write code to create libraries that support zero-knowledge methods. System implementers can then use these libraries without being exposed to the underlying complexity, but must trust that the implementation was done correctly. They can utilize the features of selective disclosure and bring the benefits of the method to their customers without a significant increase in effort over using non-ZKP methods.
Due to the underlying complexity, zero-knowledge methods require more CPU and memory to use. This also adds to the time required to create and verify proofs. This should be considered when using less capable devices such as IoT devices or older phones.
Another drawback of zero-knowledge proofs is that they tend to be larger than simple signatures.
There is a perception that zero-knowledge methods are new and untested. Zero-knowledge methods were first introduced in 1989 as a way to guard secrets. Although they may not be well understood by the general public, they have received considerable review and scrutiny in the cryptographic community. They are considered just as secure as many common cryptographic techniques in use today.
Entities that use verifiable credentials and verifiable presentations should follow protocols that enable progressive trust. Progressive trust refers to enabling individuals to share information about themselves only on an as-needed basis, slowly building up more trust as more information is shared with another party.
Progressive trust is strongly related to the principle of data minimization, and enabled by technologies such as selective disclosure and predicate proofs. We encourage the use of progressive trust as a guiding principle for implementers as they develop protocols for issuers, holders, and verifiers.
Data minimization is a principle that encourages verifiers to request the minimum amount of data necessary from holders, and for holders to only provide the minimum amount of data to verifiers. This "minimum amount of data" depends on the situation and may change over the course of a holder's interaction with a verifier.
For example, a holder may apply for a loan, with a bank acting as the verifier. There are several points at which the bank may want to determine whether the holder is qualified to continue in the process of applying for the loan; for instance, the bank may have a policy of only providing loans to existing account holders. A protocol that follows the principle of data minimization would allow the holder to reveal to the verifier only that they are an existing account holder, before the bank requests any additional information, such as account balances or employment status. In this way, the applicant may progressively entrust the bank with more information, as the data needed by the bank to make its determinations is requested a piece at a time, as needed, rather than as a complete set, up front.
Selective disclosure is the ability of a holder to select some elements of a verifiable credential to share with a verifier without revealing the rest. Several different methods support selective disclosure, ranging from atomic credentials (each containing only a single attribute) to signature schemes that allow individual claims to be revealed or withheld, as discussed in the Zero-Knowledge Proofs section above.
Another technique which may be used to support progressive trust is to use predicates as the values of revealed claims. Predicates allow a holder to provide True/False values to a verifier rather than revealing claim values.
Predicate proofs may be enabled by verifiable credential issuers as claims, e.g., the credentialSubject may include an ageOver18 property rather than a birthdate property. This would allow holders to provide proof that they are over 18 without revealing their birthdates.
Certain signature types enable predicate proofs by allowing claims from a standard verifiable credential to be presented as predicates. For example, a Camenisch-Lysyanskaya signed verifiable credential that contains a credentialSubject with a birthdate property may be included in a verifiable presentation as a derived credential that contains an ageOver18 property.
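A derived credential included in such a verifiable presentation might look something like the following sketch. The exact structure of ZKP-derived credentials and their proofs depends on the specific signature scheme, so the credential type, the ageOver18 property, and the elided proof shown here are placeholders:

{
  "@context": [
    "https://www.w3.org/2018/credentials/v1",
    "https://www.w3.org/2018/credentials/examples/v1"
  ],
  "type": ["VerifiableCredential", "AgeCredential"],
  "issuer": "https://example.edu/issuers/14",
  "credentialSubject": {
    "ageOver18": true
  },
  "proof": { ... } // a zero-knowledge proof derived from the original signature
}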
The examples provided in this section are intended to illustrate some possible mechanisms for supporting progressive trust, not to provide an exhaustive or comprehensive list of all the ways progressive trust may be supported. Research in this area continues, using cutting-edge proof techniques such as zk-SNARKs and Bulletproofs, as well as different signature protocols.
A draft report by the Credentials Community Group on data minimization may also be useful reading for implementers looking to enable progressive trust.
The W3C Web Authentication specification extends the capabilities of in-browser web applications by enabling them to strongly authenticate users with the aid of scoped public key-based credentials. It defines the idea of authenticators, which are cryptographic entities that can generate and store public key credentials at the behest of a Relying Party, subject to user consent, mediated by the web browser to preserve user privacy.
Since the key based credentials created by Web Authentication Level 1 authenticators are narrowly scoped to a particular Relying Party origin, they are unsuited (in their current form) to general purpose signature and verification operations. However, many web developers working with Verifiable Credentials have expressed interest in leveraging the Web Authentication API, since it provides a secure browser-mediated interface to crucial key management infrastructure.
The Web Authentication Working Group has agreed to address this use case in the WebAuthn Level 2 specification, and is currently working to enable the kind of cross-origin usage that would allow the WebAuthn API to be used for verifiable presentations. For example, verifiable credential wallets could allow authentication based on verifiable presentations, by using WebAuthn authenticators to sign those presentations with challenges from verifier websites.
The W3C Verifiable Claims Working Group has produced a test suite in order for implementers to confirm their conformance with the current specifications.
You can review the current implementation report, which contains conformance testing results for submitted implementations supporting the Verifiable Credentials Data Model specification.