Copyright © 2020 W3C® (MIT, ERCIM, Keio, Beihang). W3C liability, trademark and permissive document license rules apply.
JSON-LD [JSON-LD11] offers a JSON-based serialization for Linked Data. One of the primary uses of JSON-LD is its ability to exchange RDF [RDF11-CONCEPTS] data across the Web. This can be done by first serializing RDF to JSON-LD, after which data consumers can deserialize JSON-LD to RDF.
Since RDF datasets may contain many triples, and JSON-LD documents don't have size limits, such documents could in some cases become very large. For these cases, the ability to serialize and deserialize JSON-LD in a streaming way offers many advantages. Streaming processing allows large documents to be parsed with only a limited amount of memory, and processed chunks can be emitted as soon as they are processed, as opposed to waiting until the whole dataset or document has been processed.
The recommended processing algorithms [JSON-LD11-API] do not work in a streaming manner, as these first load all required data in memory, after which this data can be processed. This note discusses the processing of JSON-LD in a streaming manner. Concretely, a set of guidelines is introduced for efficiently serializing and deserializing JSON-LD in a streaming way. These guidelines are encapsulated in a JSON-LD streaming document form, and a streaming RDF form. These forms, when they are detected, allow implementations to apply streaming optimizations.
This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at https://www.w3.org/TR/.
This is an unofficial proposal.
This document was published by the JSON-LD Working Group as a First Public Working Group Note.
GitHub Issues are preferred for discussion of this specification. Alternatively, you can send comments to our mailing list. Please send them to public-json-ld-wg@w3.org (archives).
Please see the Working Group's implementation report.
Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.
This document was produced by a group operating under the W3C Patent Policy. The group does not expect this document to become a W3C Recommendation.
This document is governed by the 1 March 2019 W3C Process Document.
This document discusses the concerns involved in serializing and deserializing JSON-LD in a streaming manner. This document is primarily intended for the following audiences:
To understand the basics in this note you must first be familiar with JSON, which is detailed in [RFC8259]. You must also understand the JSON-LD syntax defined in JSON-LD 1.1 [JSON-LD11], which is the base syntax used for streaming processing. To understand how JSON-LD maps to RDF, it is helpful to be familiar with the basic RDF concepts [RDF11-CONCEPTS].
As well as sections marked as non-normative, all authoring guidelines, diagrams, examples, and notes in this specification are non-normative. Everything else in this specification is normative.
The key words MAY, MUST, MUST NOT, SHOULD, and SHOULD NOT in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.
Streaming RDF Deserializers and Streaming RDF Serializers can claim conformance to this specification.
A conforming Streaming RDF Deserializer is a system that can deserialize JSON-LD to RDF for JSON-LD documents adhering to the Streaming Document Form as defined in this specification, and is a conforming RDF Deserializer according to the JSON-LD [JSON-LD11] specification minus the exceptions listed in this specification.
A conforming Streaming RDF Serializer is a system that can serialize RDF to JSON-LD for RDF datasets adhering to the streaming RDF dataset form as defined in this specification, and is a conforming RDF Serializer according to the JSON-LD [JSON-LD11] specification minus the exceptions listed in this specification.
The processing discussion in this specification merely contains implementation guidelines. Thus, Streaming RDF Deserializers and Streaming RDF Serializers may implement the algorithms given in this specification in any way desired, so long as the end result is indistinguishable from the result that would be obtained by the specification's algorithms.
Implementers can partially check their level of conformance to this specification by successfully passing the test cases of the Streaming JSON-LD test suite. Note, however, that passing all the tests in the test suite does not imply complete conformance to this specification. It only implies that the implementation conforms to aspects tested by the test suite.
There are multiple ways of describing data in JSON-LD, each having their own use cases. This section introduces a streaming JSON-LD document form, which enables JSON-LD documents to be deserialized in a streaming manner.
The order in which key-value pairs occur in JSON-LD nodes conveys no meaning. For instance, the following two JSON-LD documents have the same meaning, even though they are syntactically different.
{
"@context": "http://schema.org/",
"@id": "https://www.rubensworks.net/#me",
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg"
}
{
"@context": "http://schema.org/",
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg",
"@id": "https://www.rubensworks.net/#me"
}
In a streaming JSON-LD document, the order of certain keys is important. This is because streaming JSON-LD processors may require the presence of some keys before others can be processed, and ordering keys in certain ways may lead to better processing performance.
Of the two snippets above, the first can be processed more efficiently by a streaming processor. Concretely, a streaming JSON-LD deserializer can emit an RDF triple each time a property ("name", "url", "image") has been read, because the "@id" has been defined before. This is because the "@id" defines the RDF subject, the property key defines the RDF predicate, and the property value defines the RDF object. This ensures that all information required for constructing and emitting an RDF triple is available each time a property is encountered.

For the second example, where the "@id" is only defined at the end, a streaming deserializer would have to buffer the properties until the "@id" key is encountered. Since the RDF subject of our triples is defined via "@id", the RDF triples can only be emitted after this last key has been read.
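The contrast can be sketched with a toy event-based deserializer. This is a hypothetical minimal example: property keys are assumed to be pre-expanded to full IRIs, and context handling and nesting are omitted.

```python
# Minimal sketch (hypothetical): turn a stream of (key, value) events for a
# single flat node into RDF triples. When "@id" arrives first, every
# following property is emitted immediately; otherwise, properties are
# buffered until "@id" arrives.
def emit_triples(events):
    subject = None
    buffered = []  # properties seen before "@id"
    for key, value in events:
        if key == "@id":
            subject = value
            for pred, obj in buffered:  # flush everything buffered so far
                yield (subject, pred, obj)
            buffered.clear()
        elif subject is not None:
            yield (subject, key, value)  # emit as soon as it is read
        else:
            buffered.append((key, value))  # must wait for "@id"
```

With "@id" first, the buffer stays empty and memory use is constant; with "@id" last, all properties sit in the buffer until the final event is read.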
In order for a JSON-LD document to be in a streaming document form, the keys in each JSON node MUST be ordered according to the following order:
@context
@type
Each of these keys is optional, and may be omitted. Those that are present MUST occur in the order given above.
This order is important because @context can change the meaning of all following entries in the node and its children. Additionally, @type could indicate a type-scoped context, which may have the same implications as an @context. This means that these MUST always be processed before all other entries.
Entries in nodes have a defined order when serialized as JSON. However, this order is not always kept by JSON parsing libraries. This means that streaming processors MUST make use of JSON parsers that preserve this order to be effective.
@type can be aliased to other keys, for which the order also applies.
In addition to the required key ordering, an @id key SHOULD be present as the first of the other entries, right after @context and @type. By placing @id before other properties in a node, streaming deserializers can determine the subject of this node early on, and they can immediately emit following properties as RDF triples as soon as they are read.

If a node does not define an explicit @id, the subject of this node usually becomes an implicit blank node. To improve potential processing performance, it is recommended to always add an explicit blank node identifier in these cases using @id.

While not recommended, @id can also come after other properties, which requires the streaming deserializer to buffer these properties until an @id is read, or the node closes.

This recommended key ordering SHOULD be followed by streaming document authors. Streaming processor implementations MUST NOT assume that a given streaming document adheres to this recommendation. If a processor sees that a document does not adhere to this recommendation, it MAY produce a warning.
This section is non-normative.
Below, several JSON-LD document examples are listed that adhere, adhere with a non-recommended key order, or do not adhere to the streaming document form.
{
"@context": "http://schema.org/",
"@id": "https://www.rubensworks.net/#me",
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg"
}
{
"@context": "http://schema.org/",
"@id": "_:blank_node",
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg"
}
{
"@context": "http://schema.org/",
"@id": "https://www.rubensworks.net/#me",
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg",
"knows": {
"@id": "https://greggkellogg.net/foaf#me",
"name": "Gregg Kellogg"
}
}
{
"@context": "http://schema.org/",
"@id": "https://www.rubensworks.net/#me",
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg",
"knows": {
"@context": {
"name": "http://xmlns.com/foaf/0.1/name"
},
"@id": "https://greggkellogg.net/foaf#me",
"name": "Gregg Kellogg"
}
}
{
"@context": {
"Person": {
"@id": "http://schema.org/Person",
"@context": "http://schema.org/"
}
},
"@type": "Person",
"@id": "https://www.rubensworks.net/#me",
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg"
}
{
"@context": "http://schema.org/",
// @id is missing, considered a blank node
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg"
}
{
// @context is not required, but @id is recommended here
"http://schema.org/name": "Ruben Taelman",
"http://schema.org/url": {"@id": "https://www.rubensworks.net/"},
"http://schema.org/image": {"@id": "https://www.rubensworks.net/img/ruben.jpg"}
}
{
"@id": "https://www.rubensworks.net/#me",
// @context must come before @id
"@context": "http://schema.org/",
"name": "Ruben Taelman",
"url": "https://www.rubensworks.net/",
"image": "https://www.rubensworks.net/img/ruben.jpg"
}
{
"http://schema.org/name": "Ruben Taelman",
"@type": "http://schema.org/Person", // @type must come before properties
"http://schema.org/url": {"@id": "https://www.rubensworks.net/"},
"http://schema.org/image": {"@id": "https://www.rubensworks.net/img/ruben.jpg"}
}
JSON-LD documents can be signaled or requested in streaming document form. The profile URI identifying the streaming document form is http://www.w3.org/ns/json-ld#streaming.
The following example illustrates how this profile parameter can be used to request a streaming document over HTTP.
GET /ordinary-json-document.json HTTP/1.1
Host: example.com
Accept: application/ld+json;profile=http://www.w3.org/ns/json-ld#streaming
Requests the server to return the requested resource as JSON-LD in streaming document form.
This section introduces a streaming RDF dataset form, which enables RDF datasets to be processed in a streaming manner so that they can be efficiently serialized into JSON-LD by a streaming JSON-LD processor.
The order in which RDF triples occur in an RDF dataset conveys no meaning. For instance, the following two RDF datasets (serialized in RDF 1.1 Turtle [Turtle]) have the same meaning, even though they have a different order of triples.
@prefix schema: <http://schema.org/> .
<https://www.rubensworks.net/#me> schema:name "Ruben Taelman" .
<https://www.rubensworks.net/#me> schema:url <https://www.rubensworks.net/> .
<https://greggkellogg.net/foaf#me> schema:name "Gregg Kellogg" .
<https://greggkellogg.net/foaf#me> schema:url <https://greggkellogg.net/> .
@prefix schema: <http://schema.org/> .
<https://www.rubensworks.net/#me> schema:name "Ruben Taelman" .
<https://greggkellogg.net/foaf#me> schema:name "Gregg Kellogg" .
<https://www.rubensworks.net/#me> schema:url <https://www.rubensworks.net/> .
<https://greggkellogg.net/foaf#me> schema:url <https://greggkellogg.net/> .
For streaming JSON-LD processors, the order of RDF triples may be important. Processors that read triples one by one, and convert them to a JSON-LD document in a streaming manner, can benefit from having triples in a certain order. For instance, the order from the first snippet above can lead to more compact JSON-LD documents than the order from the second snippet when handled by a streaming JSON-LD processor. This is because the first order groups triples with the same subject, which can be exploited during streaming JSON-LD serialization by using the same "@id" key. The second order mixes subjects, which means that a streaming JSON-LD serializer will have to assign a separate "@id" key for each triple, resulting in duplicate "@id" keys.
Streaming JSON-LD serializations of both examples can be seen below.
[
{
"@id": "https://www.rubensworks.net/#me",
"http://schema.org/name": "Ruben Taelman",
"http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
},
{
"@id": "https://greggkellogg.net/foaf#me",
"http://schema.org/name": "Gregg Kellogg",
"http://schema.org/url": { "@id": "https://greggkellogg.net/" }
}
]
[
{
"@id": "https://www.rubensworks.net/#me",
"http://schema.org/name": "Ruben Taelman"
},
{
"@id": "https://greggkellogg.net/foaf#me",
"http://schema.org/name": "Gregg Kellogg"
},
{
"@id": "https://www.rubensworks.net/#me",
"http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
},
{
"@id": "https://greggkellogg.net/foaf#me",
"http://schema.org/url": { "@id": "https://greggkellogg.net/" }
}
]
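The effect of subject grouping can be sketched with a hypothetical minimal serializer that merges consecutive triples sharing a subject into one node object, keeping only one node in memory at a time. Object values are kept as plain strings for brevity; a real serializer would distinguish IRIs from literals.

```python
# Hypothetical sketch: stream triples into JSON-LD node objects, merging
# consecutive triples that share a subject into a single node object.
def serialize_nodes(triples):
    node = None
    for s, p, o in triples:
        if node is None or node["@id"] != s:
            if node is not None:
                yield node  # subject changed: the previous node is complete
            node = {"@id": s}
        node.setdefault(p, []).append(o)
    if node is not None:
        yield node
```

Applied to the subject-grouped snippet, this yields two node objects; applied to the mixed-subject snippet, it yields four node objects, one per triple.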
This section introduces recommendations for defining the order in an RDF dataset, such that it can be processed more efficiently by streaming JSON-LD processors.
- Quads with the same graph SHOULD be grouped, which enables grouping within @graph nodes.
- Triples and quads with the same subject SHOULD be grouped, which enables grouping within nodes sharing @id keys.
- Triples and quads with the same subject and predicate SHOULD be grouped, which enables grouping of object values in arrays.
- Triples that have a graph name as subject SHOULD come after all quads with that graph name, which enables annotating @graph nodes with @id.
One straightforward way to follow the first three recommendations is by sorting all triples (or quads) in the order graph, subject, predicate, object. Existing triple stores may already perform this kind of grouping automatically. Note also that, depending on the triples, it might not be possible to strictly comply with all these recommendations.
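Such a sort can be sketched in a few lines. This is a hypothetical example in which quads are represented as (graph, subject, predicate, object) tuples, so plain lexicographical tuple ordering produces the recommended grouping.

```python
# Hypothetical sketch: sort quads so that graphs, subjects, and predicates
# end up grouped, matching the first three recommendations above.
def order_quads(quads):
    # Tuple comparison orders by graph first, then subject, predicate, object.
    return sorted(quads)

quads = [
    ("g1", "s1", "p1", "o1"),
    ("g2", "s1", "p1", "o2"),
    ("g1", "s2", "p1", "o3"),
    ("g1", "s1", "p2", "o4"),
]
ordered = order_quads(quads)
```

Note that this requires all quads to be in memory; streaming-friendly alternatives include reading from a store that already indexes by graph and subject.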
An RDF dataset that adheres to at least one of these recommendations is considered to have a streaming RDF dataset form.
This section is non-normative.
Below, several RDF datasets are listed, together with the corresponding JSON-LD serialized in a streaming manner. Each example illustrates the importance of the recommended triple ordering within the streaming RDF dataset form.
The examples using named graphs are serialized in the RDF 1.1 TriG format [TriG].
@prefix schema: <http://schema.org/> .
<http://example.org/graph1> {
<https://www.rubensworks.net/#me> schema:name "Ruben Taelman" .
}
<http://example.org/graph1> {
<https://www.rubensworks.net/#me> schema:url <https://www.rubensworks.net/> .
}
<http://example.org/graph2> {
<https://greggkellogg.net/foaf#me> schema:name "Gregg Kellogg" .
}
<http://example.org/graph2> {
<https://greggkellogg.net/foaf#me> schema:url <https://greggkellogg.net/> .
}
[
{
"@id": "http://example.org/graph1",
"@graph": [
{
"@id": "https://www.rubensworks.net/#me",
"http://schema.org/name": "Ruben Taelman",
"http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
}
]
},
{
"@id": "http://example.org/graph2",
"@graph": [
{
"@id": "https://greggkellogg.net/foaf#me",
"http://schema.org/name": "Gregg Kellogg",
"http://schema.org/url": { "@id": "https://greggkellogg.net/" }
}
]
}
]
@prefix schema: <http://schema.org/> .
<https://www.rubensworks.net/#me> schema:name "Ruben Taelman" .
<https://www.rubensworks.net/#me> schema:url <https://www.rubensworks.net/> .
<https://greggkellogg.net/foaf#me> schema:name "Gregg Kellogg" .
<https://greggkellogg.net/foaf#me> schema:url <https://greggkellogg.net/> .
[
{
"@id": "https://www.rubensworks.net/#me",
"http://schema.org/name": "Ruben Taelman",
"http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
},
{
"@id": "https://greggkellogg.net/foaf#me",
"http://schema.org/name": "Gregg Kellogg",
"http://schema.org/url": { "@id": "https://greggkellogg.net/" }
}
]
@prefix schema: <http://schema.org/> .
<https://www.rubensworks.net/#me> schema:name "Ruben" .
<https://www.rubensworks.net/#me> schema:name "Ruben Taelman" .
<https://www.rubensworks.net/#me> schema:url <https://www.rubensworks.net/> .
<https://www.rubensworks.net/#me> schema:url <https://github.com/rubensworks/> .
[
{
"@id": "https://www.rubensworks.net/#me",
"http://schema.org/name": [
"Ruben",
"Ruben Taelman"
],
"http://schema.org/url": [
{ "@id": "https://www.rubensworks.net/" },
{ "@id": "https://github.com/rubensworks/" }
]
}
]
@prefix schema: <http://schema.org/> .
<http://example.org/graph1> {
<https://www.rubensworks.net/#me> schema:name "Ruben Taelman" .
}
<http://example.org/graph1> {
<https://www.rubensworks.net/#me> schema:url <https://www.rubensworks.net/> .
}
<http://example.org/graph1> schema:name "Graph 1" .
<http://example.org/graph2> {
<https://greggkellogg.net/foaf#me> schema:name "Gregg Kellogg" .
}
<http://example.org/graph2> {
<https://greggkellogg.net/foaf#me> schema:url <https://greggkellogg.net/> .
}
<http://example.org/graph2> schema:name "Graph 2" .
[
{
"@id": "http://example.org/graph1",
"@graph": [
{
"@id": "https://www.rubensworks.net/#me",
"http://schema.org/name": "Ruben Taelman",
"http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
}
],
"name": "Graph 1"
},
{
"@id": "http://example.org/graph2",
"@graph": [
{
"@id": "https://greggkellogg.net/foaf#me",
"http://schema.org/name": "Gregg Kellogg",
"http://schema.org/url": { "@id": "https://greggkellogg.net/" }
}
],
"name": "Graph 2"
}
]
@prefix schema: <http://schema.org/> .
<https://www.rubensworks.net/#me> schema:name "Ruben Taelman" .
<https://www.rubensworks.net/#me> schema:knows <https://greggkellogg.net/foaf#me> .
<https://greggkellogg.net/foaf#me> schema:name "Gregg Kellogg" .
[
{
"@id": "https://www.rubensworks.net/#me",
"http://schema.org/name": "Ruben Taelman",
"http://schema.org/knows": {
"@id": "https://greggkellogg.net/foaf#me",
"http://schema.org/name": "Gregg Kellogg"
}
}
]
Whenever a JSON-LD document is present in streaming document form, or if an RDF dataset is present in a streaming RDF dataset form, a processor MAY process these in a streaming manner.
This section describes high-level guidelines for processing JSON-LD in a streaming manner. Concretely, guidelines are given for deserializing JSON-LD to RDF, and serializing RDF to JSON-LD. Further details on processing can be found in JSON-LD 1.1 Processing Algorithms and API [JSON-LD11-API].
A streaming deserializer MAY be implemented by considering a JSON-LD document as a stream of incoming characters. By reading character-by-character, a deserializer can detect the contained JSON nodes and their key-value pairs.
A streaming deserializer MUST assume that the required key ordering of a streaming document is present. If a different order is detected, an error MUST be thrown with error code "invalid streaming key order".
The first expected entry in a node is @context. If such an entry is present, all following entries in this node can make use of it, possibly inheriting parts of the context from parent nodes. If such an entry is not present, only contexts from parent nodes are considered for this node.
If an @type entry (or any alias of @type) is detected, it is checked whether or not it defines a type-scoped context according to the current node's context. If it defines a type-scoped context, the context for the current node is overridden. Additionally, the @type entry must emit rdf:type triples based on the current node's subject and values. This subject will possibly only be determined later on, which will require buffering of these incomplete triples. In case multiple type-scoped contexts apply, they must not be processed in order of appearance, but in lexicographical order.
If an @id entry is detected, the RDF subject for the current node is defined for later usage. Any other entries that are detected before @id must be buffered until @id is found, or the node closes (which sets the subject to a fresh blank node). For every other property, the default JSON-LD algorithms are followed based on the current node's subject.
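The @id handling above can be sketched as follows. This hypothetical example shows only the buffering and blank-node fallback, not full JSON-LD processing.

```python
# Hypothetical sketch: when a node closes without an "@id", mint a fresh
# blank node identifier and use it as the subject of all buffered properties.
import itertools

_blank_ids = itertools.count()

def close_node(subject, buffered):
    if subject is None:
        subject = f"_:b{next(_blank_ids)}"  # fresh blank node identifier
    return [(subject, pred, obj) for pred, obj in buffered]
```

The counter ensures every @id-less node receives a distinct blank node identifier, mirroring the fresh-blank-node rule above.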
An example of a system architecture of a streaming JSON-LD deserializer can be found in this blog post.
A streaming JSON-LD serializer reads triples one by one, and outputs a JSON-LD document character-by-character, which can be emitted in a streaming manner. This MAY be a JSON-LD document in the streaming document form.
A streaming serializer can benefit from having triples ordered following a streaming RDF dataset form, but it SHOULD NOT assume that RDF datasets follow this form in full.
As a basis, a streaming serializer can produce an array of node objects or graph objects, each one representing a single RDF triple/quad.
On top of this base case, several optimizations can be applied to achieve a more compact representation in JSON-LD. These optimizations depend on the surrounding triples, which are determined by the overall triple order.
When a JSON-LD context is passed to a streaming serializer, compaction techniques MAY be applied. For instance, instead of writing properties as full IRIs, they can be compacted based on the presence of terms and prefixes in the context.
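For example, compacting predicate IRIs against a simple term map could look like this. This is a hypothetical sketch; real JSON-LD compaction also considers prefixes, value types, and containers.

```python
# Hypothetical sketch: compact full predicate IRIs to context terms by
# building an inverse (IRI -> term) map from the passed context.
context = {"name": "http://schema.org/name", "url": "http://schema.org/url"}
inverse = {iri: term for term, iri in context.items()}

def compact_iri(iri):
    return inverse.get(iri, iri)  # fall back to the full IRI if no term exists
```

Since the context is known up front, the inverse map can be built once before any triples are read, keeping per-triple work constant.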
Due to the chained nature of RDF lists, serializing them to JSON-LD with the @list keyword in a streaming way may not always be possible, since it may not be known beforehand whether a triple is part of a valid RDF list. Optionally, a streaming RDF serializer MAY provide an alternative method to emit @list keywords.
Since streaming RDF processors process triples one by one, so that they don't need to keep all triples in memory, they lose the ability to deduplicate triples. As such, a streaming JSON-LD serializer MAY produce JSON-LD that contains duplicate triples.