JSON-LD [[JSON-LD11]] offers a JSON-based serialization for Linked Data. One of the primary uses of JSON-LD is the exchange of RDF [[?RDF11-CONCEPTS]] data across the Web. This can be done by first serializing RDF to JSON-LD, after which data consumers can deserialize that JSON-LD back to RDF.

Since RDF datasets may contain many triples, and JSON-LD documents have no inherent size limit, such documents can in some cases become very large. For these cases, the ability to serialize and deserialize JSON-LD in a streaming way offers many advantages. Streaming processing allows large documents to be parsed with only a limited amount of memory, and chunks of output can be emitted as soon as they have been processed, as opposed to waiting until the whole dataset or document has been read.

The recommended processing algorithms [[JSON-LD11-API]] do not work in a streaming manner, as they first load all required data into memory, after which this data can be processed. This note discusses the processing of JSON-LD in a streaming manner. Concretely, a set of guidelines is introduced for efficiently serializing and deserializing JSON-LD in a streaming way. These guidelines are encapsulated in a JSON-LD streaming document form and a streaming RDF dataset form. When these forms are detected, implementations can apply streaming optimizations.

This is an unofficial proposal.

Introduction

This document discusses the concerns involved in serializing and deserializing JSON-LD in a streaming manner. This document is primarily intended for the following audiences:

To understand the basics in this note you must first be familiar with JSON, which is detailed in [[RFC8259]]. You must also understand the JSON-LD syntax defined in [[[JSON-LD11]]] [[JSON-LD11]], which is the base syntax used for streaming processing. To understand how JSON-LD maps to RDF, it is helpful to be familiar with the basic RDF concepts [[?RDF11-CONCEPTS]].

Streaming RDF Deserializers and Streaming RDF Serializers can claim conformance to this specification.

A conforming Streaming RDF Deserializer is a system that can deserialize JSON-LD to RDF for JSON-LD documents adhering to the Streaming Document Form as defined in this specification, and is a conforming RDF Deserializer according to the JSON-LD [[JSON-LD11]] specification minus the exceptions listed in this specification.

A conforming Streaming RDF Serializer is a system that can serialize RDF to JSON-LD for RDF datasets adhering to the streaming RDF dataset form as defined in this specification, and is a conforming RDF Serializer according to the JSON-LD [[JSON-LD11]] specification minus the exceptions listed in this specification.

The processing discussion in this specification merely contains implementation guidelines. Thus, Streaming RDF Deserializers and Streaming RDF Serializers may implement the algorithms given in this specification in any way desired, so long as the end result is indistinguishable from the result that would be obtained by the specification's algorithms.

Implementers can partially check their level of conformance to this specification by successfully passing the test cases of the Streaming JSON-LD test suite. Note, however, that passing all the tests in the test suite does not imply complete conformance to this specification. It only implies that the implementation conforms to aspects tested by the test suite.

Streaming Document Form

There are multiple ways of describing data in JSON-LD, each having their own use cases. This section introduces a streaming JSON-LD document form, which enables JSON-LD documents to be deserialized in a streaming manner.

Importance of Key Ordering

The order in which key-value pairs occur in JSON-LD nodes conveys no meaning. For instance, the following two JSON-LD documents have the same meaning, even though they are syntactically different.

          {
            "@context": "http://schema.org/",
            "@id": "https://www.rubensworks.net/#me",
            "name": "Ruben Taelman",
            "url": "https://www.rubensworks.net/",
            "image": "https://www.rubensworks.net/img/ruben.jpg"
          }
        
          {
            "@context": "http://schema.org/",
            "name": "Ruben Taelman",
            "url": "https://www.rubensworks.net/",
            "image": "https://www.rubensworks.net/img/ruben.jpg",
            "@id": "https://www.rubensworks.net/#me"
          }
        

In a streaming JSON-LD document, the order of certain keys is important. This is because streaming JSON-LD processors may require the presence of some keys before others can be processed, and ordering keys in certain ways may lead to better processing performance.

In the two snippets above, the first example can be processed more efficiently by a streaming processor. Concretely, a streaming JSON-LD deserializer can emit an RDF triple each time a property ("name", "url", "image") has been read, because the "@id" has already been defined. The "@id" defines the RDF subject, the property key defines the RDF predicate, and the property value defines the RDF object. This ensures that all required information is available for constructing and emitting an RDF triple each time a property is encountered.

For the second example, where the "@id" is only defined at the end, a streaming deserializer would have to buffer the properties until the "@id" key is encountered. Since the RDF subject of our triples is defined via "@id", the RDF triples can only be emitted after this last key has been read.
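This buffering behaviour can be sketched as follows. The snippet below is a minimal, illustrative Python sketch, not a conforming processor: it assumes a single flat node whose entries arrive as (key, value) pairs in document order, and the blank node label "_:b0" is an arbitrary choice.

```python
from typing import Iterable, Iterator, Tuple

Triple = Tuple[str, str, str]

def emit_triples(entries: Iterable[Tuple[str, str]]) -> Iterator[Triple]:
    """Emit triples from the (key, value) entries of one flat node,
    read in document order. Triples are emitted as soon as both the
    subject and a property are known; properties read before "@id"
    must be buffered."""
    subject = None
    buffered = []  # properties seen before "@id"
    for key, value in entries:
        if key == "@id":
            subject = value
            for predicate, obj in buffered:  # flush the buffer
                yield (subject, predicate, obj)
            buffered.clear()
        elif subject is not None:
            yield (subject, key, value)  # streaming: emit immediately
        else:
            buffered.append((key, value))  # wait until the subject is known
    if subject is None:  # node closed without "@id": implicit blank node
        for predicate, obj in buffered:
            yield ("_:b0", predicate, obj)
```

In the first snippet above, this sketch can yield a triple per property as it is read; in the second, all three properties sit in the buffer until "@id" arrives.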

Required Key Ordering

In order for a JSON-LD document to be in the streaming document form, the keys in each JSON node MUST be in the following order:

  1. @context
  2. @type
  3. Other properties

Each of these keys is optional and may be omitted. Only those that are present must occur in this order.

This order is important because @context can change the meaning of all following entries in the node and its children. Additionally, @type can indicate a type-scoped context, which may have the same implications as an @context. This means that both MUST always be processed before all other entries.

Entries in nodes have a defined order when serialized as JSON. However, this order is not always preserved by JSON parsing libraries. This means that, to be effective, streaming processors MUST make use of JSON parsers that preserve this order.

@type can be aliased to other keys, for which the order also applies.

In addition to the required key ordering, an @id key SHOULD be present as the first entry of the other properties, right after @context and @type.

By placing @id before other properties in a node, streaming deserializers can determine the subject of this node early on, and they can immediately emit following properties as RDF triples as soon as they are read.

If a node does not define an explicit @id, the subject of this node usually becomes an implicit blank node. To improve potential processing performance, it is recommended to always add an explicit blank node identifier in these cases using @id.

While not recommended, @id can also come after any other properties, which requires the streaming deserializer to buffer these properties until an @id is read, or the node closes.

Examples

Hereafter, a couple of JSON-LD document examples are listed that either adhere, adhere with a non-recommended ordering, or do not adhere to the streaming document form.

Valid Examples

                {
                  "@context": "http://schema.org/",
                  "@id": "https://www.rubensworks.net/#me",
                  "name": "Ruben Taelman",
                  "url": "https://www.rubensworks.net/",
                  "image": "https://www.rubensworks.net/img/ruben.jpg"
                }
              
                {
                  "@context": "http://schema.org/",
                  "@id": "_:blank_node",
                  "name": "Ruben Taelman",
                  "url": "https://www.rubensworks.net/",
                  "image": "https://www.rubensworks.net/img/ruben.jpg"
                }
              
                {
                  "@context": "http://schema.org/",
                  "@id": "https://www.rubensworks.net/#me",
                  "name": "Ruben Taelman",
                  "url": "https://www.rubensworks.net/",
                  "image": "https://www.rubensworks.net/img/ruben.jpg",
                  "knows": {
                    "@id": "https://greggkellogg.net/foaf#me",
                    "name": "Gregg Kellogg"
                  }
                }
              
                {
                  "@context": "http://schema.org/",
                  "@id": "https://www.rubensworks.net/#me",
                  "name": "Ruben Taelman",
                  "url": "https://www.rubensworks.net/",
                  "image": "https://www.rubensworks.net/img/ruben.jpg",
                  "knows": {
                    "@context": {
                      "name": "http://xmlns.com/foaf/0.1/name"
                    },
                    "@id": "https://greggkellogg.net/foaf#me",
                    "name": "Gregg Kellogg"
                  }
                }
              
                {
                  "@context": {
                    "Person": {
                      "@id": "http://schema.org/Person",
                      "@context": "http://schema.org/"
                    }
                  },
                  "@type": "Person",
                  "@id": "https://www.rubensworks.net/#me",
                  "name": "Ruben Taelman",
                  "url": "https://www.rubensworks.net/",
                  "image": "https://www.rubensworks.net/img/ruben.jpg"
                }
              

Valid, Non-recommended Examples

                {
                  "@context": "http://schema.org/",
                  // @id is missing, considered a blank node
                  "name": "Ruben Taelman",
                  "url": "https://www.rubensworks.net/",
                  "image": "https://www.rubensworks.net/img/ruben.jpg"
                }
              
                {
                  // @context is not required, but @id is recommended here
                  "http://schema.org/name": "Ruben Taelman",
                  "http://schema.org/url": {"@id": "https://www.rubensworks.net/"},
                  "http://schema.org/image": {"@id": "https://www.rubensworks.net/img/ruben.jpg"}
                }
              

Invalid Examples

                {
                  "@id": "https://www.rubensworks.net/#me",
                  // @context must come before @id
                  "@context": "http://schema.org/",
                  "name": "Ruben Taelman",
                  "url": "https://www.rubensworks.net/",
                  "image": "https://www.rubensworks.net/img/ruben.jpg"
                }
              
                {
                  "http://schema.org/name": "Ruben Taelman",
                  "@type": "http://schema.org/Person", ####// @type must come before properties####
                  "http://schema.org/url": {"@id": "https://www.rubensworks.net/"},
                  "http://schema.org/image": {"@id": "https://www.rubensworks.net/img/ruben.jpg"}
                }
              

Streaming Document Profile

JSON-LD documents can be signaled or requested in streaming document form. The profile URI identifying the streaming document form is http://www.w3.org/ns/json-ld#streaming.

The following example illustrates how this profile parameter can be used to request a streaming document over HTTP.

GET /ordinary-json-document.json HTTP/1.1
Host: example.com
Accept: application/ld+json;profile="http://www.w3.org/ns/json-ld#streaming"

Requests the server to return the requested resource as JSON-LD in streaming document form.
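Using Python's standard library, such a request could be constructed as follows. This is an illustrative sketch against the hypothetical endpoint from the example above; note that the profile parameter value is quoted when used as an HTTP media type parameter.

```python
import urllib.request

STREAMING_PROFILE = "http://www.w3.org/ns/json-ld#streaming"

# Build (but do not send) a request asking for the streaming document form.
request = urllib.request.Request(
    "https://example.com/ordinary-json-document.json",
    headers={"Accept": f'application/ld+json;profile="{STREAMING_PROFILE}"'},
)
```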

Streaming RDF Form

This section introduces a streaming RDF dataset form, which enables RDF datasets to be processed in a streaming manner so that they can be efficiently serialized into JSON-LD by a streaming JSON-LD processor.

Importance of Triple Ordering

The order in which RDF triples occur in an RDF dataset conveys no meaning. For instance, the following two RDF datasets (serialized in [[[?Turtle]]] [[?Turtle]]) have the same meaning, even though they have a different order of triples.

          <https://www.rubensworks.net/#me> <http://schema.org/name> "Ruben Taelman";
              <http://schema.org/url> <https://www.rubensworks.net/>.
          <https://greggkellogg.net/foaf#me> <http://schema.org/name> "Gregg Kellogg";
              <http://schema.org/url> <https://greggkellogg.net/>.

          <https://www.rubensworks.net/#me> <http://schema.org/name> "Ruben Taelman".
          <https://greggkellogg.net/foaf#me> <http://schema.org/name> "Gregg Kellogg".
          <https://www.rubensworks.net/#me> <http://schema.org/url> <https://www.rubensworks.net/>.
          <https://greggkellogg.net/foaf#me> <http://schema.org/url> <https://greggkellogg.net/>.

For streaming JSON-LD processors, the order of RDF triples may be important. Processors that read triples one by one, and convert them to a JSON-LD document in a streaming manner, can benefit from having triples in a certain order.

For instance, the order of the first snippet above can lead to more compact JSON-LD documents than the order of the second snippet when handled by a streaming JSON-LD processor. This is because the first order groups triples with the same subject, which can be exploited during streaming JSON-LD serialization by emitting a single node with one "@id" key. The second order mixes subjects, which means that a streaming JSON-LD serializer has to open a separate node with its own "@id" key for each triple, resulting in duplicated "@id" keys.

Streaming JSON-LD serializations of both examples can be seen below.

          [
            {
              "@id": "https://www.rubensworks.net/#me",
              "http://schema.org/name": "Ruben Taelman",
              "http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
            },
            {
              "@id": "https://greggkellogg.net/foaf#me",
              "http://schema.org/name": "Gregg Kellogg",
              "http://schema.org/url": { "@id": "https://greggkellogg.net/" }
            }
          ]
        
          [
            {
              "@id": "https://www.rubensworks.net/#me",
              "http://schema.org/name": "Ruben Taelman"
            },
            {
              "@id": "https://greggkellogg.net/foaf#me",
              "http://schema.org/name": "Gregg Kellogg"
            },
            {
              "@id": "https://www.rubensworks.net/#me",
              "http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
            },
            {
              "@id": "https://greggkellogg.net/foaf#me",
              "http://schema.org/url": { "@id": "https://greggkellogg.net/" }
            }
          ]
        

This section introduces recommendations for defining the order in an RDF dataset, such that it can be processed more efficiently by streaming JSON-LD processors.

  1. Group triples with the same named graph.
    Allows grouping of @graph nodes.
  2. Group triples with the same subject.
    Allows grouping of @id keys.
  3. Group triples with the same predicate.
    Allows grouping of property keys.
  4. Group triples with a given term as named graph together with triples having this term as subject.
    Allows the combination of @graph nodes with @id.
  5. Group triples with a given term as object with triples having this term as subject.
    Allows nesting of nodes within other nodes.

An RDF dataset that adheres to at least one of these recommendations is considered to have a streaming RDF dataset form.
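One simple way to satisfy recommendations 1, 2, and 3 for an in-memory set of quads is to sort them by graph, then subject, then predicate. The sketch below is illustrative; the quad tuple layout is an assumption of this example, and sorting requires the full dataset in memory, trading streaming on the producer side for compactness on the consumer side.

```python
from typing import List, Tuple

Quad = Tuple[str, str, str, str]  # (subject, predicate, object, graph)

def to_streaming_order(quads: List[Quad]) -> List[Quad]:
    """Group quads by graph, then subject, then predicate, so that a
    streaming JSON-LD serializer can reuse @graph, @id, and property
    keys. Sorting is just one way to group; any stable grouping works."""
    return sorted(quads, key=lambda quad: (quad[3], quad[0], quad[1]))
```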

Examples

Hereafter, a couple of RDF datasets are listed, together with the corresponding JSON-LD serialized in a streaming manner. Each example illustrates the importance of the recommended triple ordering within the streaming RDF dataset form.

The examples using named graphs are serialized in the [[[TriG]]] format [[TriG]].

@graph grouping

                <http://example.org/graph1> {
                  <https://www.rubensworks.net/#me> <http://schema.org/name> "Ruben Taelman";
                      <http://schema.org/url> <https://www.rubensworks.net/>.
                }
                <http://example.org/graph2> {
                  <https://greggkellogg.net/foaf#me> <http://schema.org/name> "Gregg Kellogg";
                      <http://schema.org/url> <https://greggkellogg.net/>.
                }
                [
                  {
                    "@id": "http://example.org/graph1",
                    "@graph": [
                      {
                        "@id": "https://www.rubensworks.net/#me",
                        "http://schema.org/name": "Ruben Taelman",
                        "http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
                      }
                    ]
                  },
                  {
                    "@id": "http://example.org/graph2",
                    "@graph": [
                      {
                        "@id": "https://greggkellogg.net/foaf#me",
                        "http://schema.org/name": "Gregg Kellogg",
                        "http://schema.org/url": { "@id": "https://greggkellogg.net/" }
                      }
                    ]
                  }
                ]
              

@id grouping

                <https://www.rubensworks.net/#me> <http://schema.org/name> "Ruben Taelman";
                    <http://schema.org/url> <https://www.rubensworks.net/>.
                <https://greggkellogg.net/foaf#me> <http://schema.org/name> "Gregg Kellogg";
                    <http://schema.org/url> <https://greggkellogg.net/>.
                [
                  {
                    "@id": "https://www.rubensworks.net/#me",
                    "http://schema.org/name": "Ruben Taelman",
                    "http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
                  },
                  {
                    "@id": "https://greggkellogg.net/foaf#me",
                    "http://schema.org/name": "Gregg Kellogg",
                    "http://schema.org/url": { "@id": "https://greggkellogg.net/" }
                  }
                ]
              

Property grouping

                <https://www.rubensworks.net/#me> <http://schema.org/name> "Ruben", "Ruben Taelman";
                    <http://schema.org/url> <https://www.rubensworks.net/>, <https://github.com/rubensworks/>.
                [
                  {
                    "@id": "https://www.rubensworks.net/#me",
                    "http://schema.org/name": [
                      "Ruben",
                      "Ruben Taelman"
                    ],
                    "http://schema.org/url": [
                      { "@id": "https://www.rubensworks.net/" },
                      { "@id": "https://github.com/rubensworks/" }
                    ]
                  }
                ]
              

@graph and @id grouping

                <http://example.org/graph1> {
                  <https://www.rubensworks.net/#me> <http://schema.org/name> "Ruben Taelman";
                      <http://schema.org/url> <https://www.rubensworks.net/>.
                }
                <http://example.org/graph1> <http://schema.org/name> "Graph 1".
                <http://example.org/graph2> {
                  <https://greggkellogg.net/foaf#me> <http://schema.org/name> "Gregg Kellogg";
                      <http://schema.org/url> <https://greggkellogg.net/>.
                }
                <http://example.org/graph2> <http://schema.org/name> "Graph 2".
                [
                  {
                    "@id": "http://example.org/graph1",
                    "@graph": [
                      {
                        "@id": "https://www.rubensworks.net/#me",
                        "http://schema.org/name": "Ruben Taelman",
                        "http://schema.org/url": { "@id": "https://www.rubensworks.net/" }
                      }
                    ],
                    "name": "Graph 1"
                  },
                  {
                    "@id": "http://example.org/graph2",
                    "@graph": [
                      {
                        "@id": "https://greggkellogg.net/foaf#me",
                        "http://schema.org/name": "Gregg Kellogg",
                        "http://schema.org/url": { "@id": "https://greggkellogg.net/" }
                      }
                    ],
                    "name": "Graph 2"
                  }
                ]
              

Subject and object grouping

                <https://www.rubensworks.net/#me> <http://schema.org/name> "Ruben Taelman";
                    <http://schema.org/knows> <https://greggkellogg.net/foaf#me>.
                <https://greggkellogg.net/foaf#me> <http://schema.org/name> "Gregg Kellogg".
                [
                  {
                    "@id": "https://www.rubensworks.net/#me",
                    "http://schema.org/name": "Ruben Taelman",
                    "http://schema.org/knows": {
                      "@id": "https://greggkellogg.net/foaf#me",
                      "http://schema.org/name": "Gregg Kellogg"
                    }
                  }
                ]
              

Streaming Processing

Whenever a JSON-LD document is in the streaming document form, or an RDF dataset is in the streaming RDF dataset form, a processor MAY process these in a streaming manner.

This section describes high-level guidelines for processing JSON-LD in a streaming manner. Concretely, guidelines are given for deserializing JSON-LD to RDF, and serializing RDF to JSON-LD. Further details on processing can be found in [[[JSON-LD11-API]]] [[JSON-LD11-API]].

Deserialization

A streaming deserializer MAY be implemented by considering a JSON-LD document as a stream of incoming characters. By reading character by character, a deserializer can detect the contained JSON nodes and their key-value pairs.

A streaming deserializer MUST assume that the required key ordering of a streaming document is present. If a different order is detected, an error MUST be thrown with error code "invalid streaming key order".
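Such a check can be as simple as verifying that the rank of each key never decreases. The helper below is a hypothetical sketch; it does not handle aliases of @type, which a full implementation must resolve against the active context.

```python
def check_key_order(keys):
    """Return True if the keys of a node follow the required streaming
    order: @context first, then @type, then everything else. Each key
    is optional."""
    rank = {"@context": 0, "@type": 1}
    ranks = [rank.get(key, 2) for key in keys]
    return ranks == sorted(ranks)

# A deserializer would raise an "invalid streaming key order" error
# whenever this returns False.
```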

The first expected entry in a node is @context. If such an entry is present, all following entries in this node can make use of it, possibly inheriting parts of the context from parent nodes. If such an entry is not present, only contexts from parent nodes are considered for this node.

If an @type entry (or any alias of @type) is detected, it is checked whether or not it defines a type-scoped context according to the current node's context. If this defines a type-scoped context, the context for the current node is overridden.
Additionally, each @type value must be emitted as an rdf:type triple based on the current node's subject. This subject may only be determined later on, which requires buffering such incomplete triples.

In case multiple type-scoped contexts apply, they must not be processed in order of appearance, but in lexicographical order.

If an @id entry is detected, the RDF subject for the current node is defined for later usage. Any other entries that are detected before @id must be buffered until @id is found, or the node closes (which sets the subject to a fresh blank node).

For every other property, the default JSON-LD algorithms are followed based on the current node's subject.

An example of a system architecture of a streaming JSON-LD deserializer can be found in this blog post.

Serialization

A streaming JSON-LD serializer reads triples one by one, and outputs a JSON-LD document character-by-character, which can be emitted in a streaming manner.
This MAY be a JSON-LD document in the streaming document form.

A streaming serializer can benefit from having triples ordered following a streaming RDF dataset form, but it SHOULD NOT assume that RDF datasets follow this form in full.

As a basis, a streaming serializer can produce an array of node objects or graph objects, each one representing a single RDF triple/quad.
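This base case can be sketched as a generator that emits one node object per triple. The sketch below is illustrative only; objects are assumed to already be in JSON-LD value form, i.e. a literal string or an {"@id": ...} map.

```python
import json
from typing import Iterable, Iterator, Tuple

def serialize_base_case(triples: Iterable[Tuple[str, str, object]]) -> Iterator[str]:
    """Emit a JSON-LD array as a stream of text chunks, with one node
    object per triple and no grouping optimizations applied."""
    yield "["
    for index, (subject, predicate, obj) in enumerate(triples):
        if index > 0:
            yield ","
        yield json.dumps({"@id": subject, predicate: obj})
    yield "]"
```

Because each chunk is emitted as soon as its triple is read, memory usage stays constant regardless of dataset size.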

On top of this base case, several optimizations can be applied to achieve a more compact representation in JSON-LD. These optimizations are dependent on the surrounding triples, which is determined by the overall triple order.

When a JSON-LD context is passed to a streaming serializer, compaction techniques MAY be applied. For instance, instead of writing properties as full IRIs, they can be compacted based on the presence of terms and prefixes in the context.
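For instance, a serializer holding a prefix map derived from the context could compact predicate IRIs as follows. This is a sketch of one compaction technique only; full JSON-LD compaction also involves terms, vocabulary mappings, and value compaction.

```python
from typing import Dict

def compact_iri(iri: str, prefixes: Dict[str, str]) -> str:
    """Compact a full IRI against a prefix map from the context,
    preferring the longest matching namespace. Falls back to the
    full IRI when no prefix applies."""
    best_prefix, best_namespace = None, ""
    for prefix, namespace in prefixes.items():
        if iri.startswith(namespace) and len(namespace) > len(best_namespace):
            best_prefix, best_namespace = prefix, namespace
    if best_prefix is None:
        return iri
    return f"{best_prefix}:{iri[len(best_namespace):]}"
```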

Due to the chained nature of RDF lists, serializing them to JSON-LD with the @list keyword in a streaming way may not always be possible, since a serializer may not know beforehand whether a triple is part of a valid RDF list. Optionally, a streaming RDF serializer MAY provide an alternative method to emit @list keywords.

Since streaming RDF processors handle triples one by one, without keeping all triples in memory, they lose the ability to deduplicate triples. As such, a streaming JSON-LD serializer MAY produce JSON-LD that contains duplicate triples.