This is a supporting document for the RDF Dataset Canonicalization specification, providing some extra explanation of the problem space and associated use cases.
This document is an updated version of the explainer that supported the original chartering of the working group.
This short document provides some basic background to the development of RDF Dataset Canonicalization [[rdf-canon]]. It includes a number of representative use cases that originally motivated the work.
For a precise definition of the various terms and concepts, the reader should refer to the formal RDF specification [[rdf11-concepts]].
R, R' and S each denote an RDF Dataset [[rdf11-concepts]].
R = S denotes that R and S are identical RDF Datasets.
Two RDF Datasets are identical if and only if they have the same default graph (under set equality) and the same set of named graphs (under set equality).
If R and S are identical, we may equivalently say that they are the same RDF Dataset.
R ≈ S denotes that R and S are isomorphic RDF Datasets.
In particular, R is isomorphic with S if and only if it is possible to map (i.e., relabel) the blank nodes of R to the blank nodes of S in a one-to-one manner, generating an RDF Dataset R' such that R' = S.
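The bijection in this definition can be made concrete with a small brute-force check. The sketch below is a Python illustration rather than a practical algorithm (its cost is factorial in the number of blank nodes, and the helper names are invented for this example); it represents each quad as a tuple of strings and tries every one-to-one blank node mapping:

```python
from itertools import permutations

def is_bnode(term):
    """A term is a blank node if it uses the conventional '_:' label prefix."""
    return isinstance(term, str) and term.startswith("_:")

def relabel(quads, mapping):
    """Apply a blank node mapping to every term of every quad."""
    return {tuple(mapping.get(t, t) for t in q) for q in quads}

def isomorphic(r, s):
    """Check R ≈ S by trying every one-to-one blank node mapping.

    Exponential brute force, for illustration only; real implementations
    avoid enumerating all bijections.
    """
    r_bnodes = sorted({t for q in r for t in q if is_bnode(t)})
    s_bnodes = sorted({t for q in s for t in q if is_bnode(t)})
    if len(r_bnodes) != len(s_bnodes):
        return False
    for perm in permutations(s_bnodes):
        mapping = dict(zip(r_bnodes, perm))
        if relabel(r, mapping) == set(s):
            return True
    return False
```

For example, the datasets `{("_:a", "ex:knows", "_:b", "ex:g")}` and `{("_:x", "ex:knows", "_:y", "ex:g")}` differ only in blank node labels, so the function reports them as isomorphic.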
RDF Dataset Canonicalization is a function C that maps an RDF Dataset to an RDF Dataset in a manner that satisfies the following two properties for all RDF Datasets R and S:
C(R) ≈ R;
R ≈ S if and only if C(R) = C(S).
We may refer to C(R) as the canonical form of R (under C).
Such a canonicalization function can be implemented, in practice, as a procedure that deterministically labels all blank nodes of an RDF Dataset in a one-to-one manner, without depending on any feature of the input serialization (blank node labels, order of the triples, etc.).
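A drastically simplified sketch of such a procedure: each blank node gets a signature derived only from the quads that mention it, and labels are assigned in signature order, so the result does not depend on the input labels or on triple order. This is an illustration only; the helper names are invented, and ties between identical signatures (automorphic structures) require the full algorithm's N-degree hashing step, which is omitted here.

```python
import hashlib

def first_degree_signature(bnode, quads):
    """Signature of a blank node: a hash over the sorted quads that mention
    it, with the node itself written as 'A' and any other blank node as 'Z'.
    A simplified analogue of a first-degree hash; invented for illustration."""
    lines = []
    for q in quads:
        if bnode in q:
            lines.append(tuple(
                "A" if t == bnode else ("Z" if t.startswith("_:") else t)
                for t in q))
    return hashlib.sha256(repr(sorted(lines)).encode()).hexdigest()

def naive_canonical_labels(quads):
    """Assign deterministic labels _:c14n0, _:c14n1, ... by sorting blank
    nodes on their signatures. Nodes with identical signatures would need
    the full algorithm's tie-breaking step, not handled in this sketch."""
    bnodes = {t for q in quads for t in q if t.startswith("_:")}
    ordered = sorted(bnodes, key=lambda b: first_degree_signature(b, quads))
    return {b: f"_:c14n{i}" for i, b in enumerate(ordered)}
```

Two files that serialize the same dataset with different blank node labels and in a different quad order yield the same relabeled quad set under this scheme.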
It is important to emphasize that the term "canonicalization" is used here in its generic sense, as described by:
In computer science, canonicalization […] is a process for converting data that has more than one possible representation into a "standard", "normal", or canonical form.
Source: Wikipedia.
Canonicalization, as used in the context of this document, is defined on an abstract data model (i.e., on an RDF Dataset [[rdf11-concepts]]), regardless of a specific serialization. (It could also be referred to as a "canonical labeling scheme"). It is therefore very different from the usage of the term in, for example, the "Canonical XML" [[xml-c14n11]] or the "JSON Canonicalization Scheme" [[rfc8785]] specifications which are, essentially, syntactic transformations of the respective documents. Any comparison with those documents can be misleading.
Though canonical labeling procedures for directed and undirected graphs have been studied for several decades, only in the past 10 years have two generalized and comprehensive approaches been proposed for RDF Graphs and RDF Datasets: one by Aidan Hogan [[hogan-2017]], and one by Rachel Arnold and Dave Longley [[arnold-longley-2020]].
The introduction of Aidan Hogan’s paper [[hogan-2017]] also contains a more thorough description of the underlying mathematical challenges.
One possible approach to calculating the hash of an RDF Dataset R may involve the following steps: first, calculate C(R); next, serialize the result in a canonical, sorted serialization; finally, apply a standard hash function to that serialization.
The main challenge for the Working Group was to provide a standard for the RDF Dataset Canonicalization function.
The immediate result of the Canonicalization Algorithm is a canonical representation of the RDF dataset represented as N-Quads, using canonical blank node identifiers, and placed in code point order. Since the media type for N-Quads is defined in [[[n-quads]]] [[n-quads]] as `application/n-quads`, no further media type registration is needed for this form. An alternate output of the Canonicalization Algorithm is the issued identifiers map, which may be represented as JSON [[RFC8259]].
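The serialization and hashing of the algorithm's output can be sketched as follows. The quad terms are assumed to be already written in N-Quads syntax with canonical blank node identifiers; the function names and the choice of SHA-256 are illustrative, not mandated by this sketch's source text:

```python
import hashlib

def canonical_nquads(quads):
    """Serialize quads (already using canonical blank node identifiers and
    N-Quads term syntax) as N-Quads lines sorted in code point order."""
    lines = sorted("%s %s %s %s .\n" % q for q in quads)
    return "".join(lines)

def dataset_hash(quads):
    """Hash of the canonical N-Quads document (SHA-256 as an example)."""
    return hashlib.sha256(canonical_nquads(quads).encode("utf-8")).hexdigest()
```

Because the lines are sorted before hashing, the order in which quads arrive has no effect on the resulting digest.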
The Canonicalization Algorithm operates over RDF quads, rather than separately canonicalizing each RDF graph contained within a dataset. This is necessary as blank nodes may be shared among graphs within an RDF dataset, or be used as the graph name for a named graph within the dataset, making it important that the full context of each blank node be considered when assigning canonical identifiers.
When the hash of a file is transferred from one system to another and the file is used as is, there is no need for any processing other than checking the value of the hash of the file. This is true for RDF just as for any other data format; this means that any existing hash function may be used on the original file. However, RDF has many serializations for datasets, notably TriG, JSON-LD, N-Quads, and, informally, CBOR-LD. The space-efficient verification use case points to a need, in some circumstances, to transform (usually to minimize) the data that is transferred. In this scenario, a hash of the original file, such as a hash calculated on a JSON-LD file, is not appropriate, as the conversion to another serialization invalidates it. A hash of the abstract dataset, on the other hand, will still be valid.
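A toy illustration of why a file-level hash is fragile across serializations: the same single statement written in two syntactic variants produces different document hashes, even though the abstract data is identical (the document strings below are contrived for this example):

```python
import hashlib

def file_hash(document):
    """Hash the raw bytes of a serialized document, as one would for any file."""
    return hashlib.sha256(document.encode("utf-8")).hexdigest()

# The same statement with two different blank node spellings:
doc_nquads = '_:b0 <http://ex/p> "v" .\n'   # N-Quads, explicit blank node label
doc_turtle = '[] <http://ex/p> "v" .\n'     # Turtle shorthand for the same data

# The file hashes differ even though both documents encode the same dataset;
# only a hash of the canonicalized abstract dataset survives the conversion.
assert file_hash(doc_nquads) != file_hash(doc_turtle)
```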
The working group considered the algorithms proposed by Aidan Hogan [[hogan-2017]] and by Rachel Arnold and Dave Longley [[arnold-longley-2020]]. The latter was the basis for the final report on the topic published by the Credentials Community Group [[CCG-RDC-FINAL]]. It was largely through that effort that the Arnold/Longley approach, backed by a test suite, gained significant implementation experience before the working group was chartered. For this reason, after careful consideration of the alternatives, efforts focused on formalizing the Arnold/Longley approach.
A major issue the working group had to tackle was precisely what the serialized, hashable output of the algorithm should be. The current N-Triples Recommendation [[n-triples-20140225]] includes the definition of a canonical form. However, the same is not true for N-Quads [[n-quads-20140225]]. This is expected to be added to a forthcoming version of the N-Quads specification but, for now, normative text is included in the RDF Dataset Canonicalization document. It is necessary to serialize the algorithm output as N-Quads, rather than N-Triples, because the input and output are RDF Datasets, which may include multiple graphs.
On a related issue, the RDF Dataset Canonicalization specification includes an important note about the canonical form of literals, especially language tags.
Standards are not developed in a vacuum. Rather, they are developed as one of several moving pieces that interact to some degree. The Working Group was very aware of the development of RDF 1.2 by the RDF-Star Working Group, for example. Once that large-scale work is complete (or nearing completion), it may be desirable to revisit RDF Dataset Canonicalization. The work on RDF Surfaces may also entail further changes to RDF Dataset Canonicalization. Waiting until one or both of these projects had been completed would have introduced a delay to developing RDF Dataset Canonicalization, perhaps of a year or more. To avoid such a delay, the decision was made to focus purely on the existing RDF 1.1 series of standards.
Another issue facing the community at the time of writing is the base direction of RDF Literals. The addition of language tags to literals is widely used in RDF; however, it is not always clear whether the text should be rendered left to right or right to left. Although this is clearly relevant to dataset canonicalization, the problem has much wider applicability and is, rightly, being tackled by the RDF-Star Working Group. For dataset canonicalization, we simply note that this is an issue that may require future work.
Some typical use cases for RDF Dataset Canonicalization and/or signatures are: