This document describes the best practices for identifying language and base direction in data formats used on the Web.

We welcome comments on this document, but to make it easier to track them, please raise separate issues for each comment, and point to the section you are commenting on using a URL.

Introduction

This document was developed as a result of observations by the Internationalization Working Group over a series of specification reviews related to formats based on JSON, WebIDL, and other non-markup data languages. Unlike markup formats, such as XML, these data languages generally do not provide extensible attributes and were not conceived with built-in language or direction metadata.

Natural language information on the Web depends on and benefits from the presence of language and direction metadata. Along with support for Unicode, mechanisms for including and specifying the base direction and the natural language of spans of text are among the key internationalization considerations when developing new formats and technologies for the Web.

Markup formats, such as HTML and XML, as well as related styling languages, such as CSS and XSL, are reasonably mature and provide support for the interchange and presentation of the world's languages via built-in features. Data formats need similar mechanisms in order to ensure a complete and consistent support for the world's languages and cultures.

Terminology

This section defines terminology necessary to understand the contents of this document. The terms defined here are specific to this document.

A producer is any process where natural language string data is created for later storage, processing, or interchange.

A consumer is any process that receives natural language strings, either for display or processing.

A serialization agreement (or "agreement" for short) is the common understanding between a producer and consumer about the serialization of string metadata: how it is to be understood, serialized, read, transmitted, removed, etc.

Language negotiation is any process which selects or filters content based on language. Usually this implies selecting content in a single language (or falling back to some meaningful default language that is available) by finding the best matching values when several languages or locales [[LTLI]] are present in the content. Some common language negotiation algorithms include the Lookup algorithm in [[BCP47]] or the BestFitMatcher in [[ECMA-402]].
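The Lookup scheme can be sketched as follows. This is an illustrative simplification, not the full algorithm defined in [[BCP47]]: it handles only subtag truncation and omits extension processing.

```python
def lookup(language_range, available_tags, default="en"):
    """Simplified BCP 47 Lookup: progressively truncate subtags from
    the end of the language range until an available tag matches."""
    available = {t.lower(): t for t in available_tags}
    subtags = language_range.lower().split("-")
    while subtags:
        candidate = "-".join(subtags)
        if candidate in available:
            return available[candidate]
        subtags.pop()                      # truncate from the right
        if subtags and len(subtags[-1]) == 1:
            subtags.pop()                  # never end on a single-letter subtag
    return default

# "zh-Hant-HK" falls back to "zh-Hant" when no exact match is available.
print(lookup("zh-Hant-HK", ["zh", "zh-Hant", "en"]))  # zh-Hant
```

A fuller implementation would also handle extension subtags and wildcard ranges; see RFC 4647 for the normative definition.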

LTR stands for "left-to-right" and refers to the inline base direction of left-to-right [[!UAX9]]. This is the base text direction used by languages whose character progression begins on the left side of the page in horizontal text. It's used for scripts such as Latin, Cyrillic, Devanagari, and many others.

RTL stands for "right-to-left" and refers to the inline base direction of right-to-left [[!UAX9]]. This is the base text direction used by languages whose character progression begins on the right side of the page in horizontal text. It's used for scripts such as Arabic, Hebrew, Syriac, and a few others.

If you are unfamiliar with bidirectional or right-to-left text, there is a basic introduction here. Additional materials can be found in the Internationalization Working Group's Techniques Index.

The String Lifecycle

It's not possible to consider alternatives for handling string metadata in a vacuum: we need to establish a framework for talking about string handling and data formats.

Producers

A string can be created in a number of ways: a content author might type strings into a plain text editor, text message, or editing tool; a script might scrape text from web pages; or an existing set of strings might be acquired from another application or repository. In the data formats under consideration in this document, many strings come from back-end data repositories or databases of various kinds. Sources of strings often provide an interface, API, or metadata that includes information about the base direction and language of the data. Some also provide a suitable default for when the direction or language is not provided or specified. In this document, the producer of a string is the source, be it human or mechanism, that creates or provides a string for storage or transmission.

When a string is created, it's necessary to (a) detect or capture the appropriate language and base direction to be associated with the string, and (b) take steps, where needed, to set the string up in a way that stores and communicates the language and base direction.

For example, in the case of a string that is extracted from an HTML form, the base direction can be detected from the computed value of the form's field. Such a value could be inherited from an earlier element, such as the html element, or set using markup or styling on the input element itself. The user could also set the direction of the text by using keyboard shortcut keys to change the direction of the form field. The dirname attribute provides a way of automatically communicating that value with a form submission.

Similarly, language information in an HTML form would most likely be inherited from the lang attribute on the html element, or from the nearest ancestor element that has a lang attribute.

If the producer of the string receives the string from a location where it was stored by another producer, and where the base direction and language have already been established, the producer needs to recognize that this information has already been set and convert or encode it for its consumers.

Consumers

A consumer is an application or process that receives a string for processing and possibly places it into a context where it will be exposed to a user. For display purposes, it must ensure that the base direction and language of the string are correctly applied to the string in that context. For processing purposes, it must at least persist the language and direction and may need to use the language and direction data in order to perform language-specific operations.

Displaying the string usually involves applying the base direction and language by constructing additional markup, adding control codes, or setting display properties. This indicates to rendering software the base direction or language that should be applied to the string in this display context to get the string to appear correctly. For text direction, it must also isolate embedded strings from the surrounding text to avoid spill-over effects of the bidi algorithm [[UAX9]]. For language, it must make clear the boundaries for the range of text to which the language applies.

Note that a consumer of one document format might be a producer of another document format.

Serialization Agreements

Between any producer and consumer, there needs to be an agreement about what the document format contains and what the data in each field or attribute means. Any time a producer of a string takes special steps to collect and communicate information about the base direction or language of that string, it must do so with the expectation that the consumer of the string will understand how the producer encoded this information. If no action is taken by the producer, the consumer must still decide what rules to follow in order to decide on the appropriate base direction and language, even if it is only to provide some form of default value.

In some systems or document formats, the necessary behaviour of the producers and consumers of a string is fully specified. In others, such agreements are not available; it is up to users to provide an agreement for how to encode, transmit, and later decode the necessary language or direction information. Low-level specifications, such as JSON, do not provide a string metadata structure by default, so any document formats based on these need to provide the "agreement" themselves.

Why is this important?

Information about the language of content is important when processing and presenting natural language data for a variety of reasons. When language information is not present, the resulting degradation in appearance or functionality can frustrate users, render the content unintelligible, or disable important features. Affected processes include font selection, spell checking, hyphenation and line breaking, case conversion, text-to-speech, and full-text search.

Similarly, direction metadata is important to the Web. When a string contains text in a script that runs right-to-left (RTL), it must be possible to eventually display that string correctly when it reaches an end user. For that to happen, it is necessary to establish what base direction needs to be applied to the string as a whole. The appropriate base direction cannot always be deduced by simply looking at the string; even if it were possible, the producer and consumer of the string would need to use the same heuristics to interpret its direction.

Static content, such as the body of a Web page or the contents of an e-book, often has language or direction information provided by the document format or as part of the content metadata. Data formats found on the Web generally do not supply this metadata. Base specifications such as Microformats, WebIDL, JSON, and more, have tended to store natural language text in string objects, without additional metadata.

This places a burden on application authors and data format designers to provide the metadata on their own initiative. When standardized formats do not address the resulting issues, the data may arrive intact, but the information needed to process or present it correctly cannot be fully recovered.

In a distributed Web, any consumer can also be a producer for some other process or system. Thus, a given consumer might need to pass language and direction metadata from one document format (and using one agreement) to another consumer using a different document format. Lack of consistency in representing language and direction metadata in serialization agreements poses a threat to interoperability and a barrier to consistent implementation.

An Example

Suppose that you are building a Web page to show a customer's library of e-books. The e-books exist in a catalog of data and consist of the usual data values. A JSON file for a single entry might look something like:

{
    "id": "1118008189",
    "title": "HTML و CSS: تصميم و إنشاء مواقع الويب",
    "authors": [ "Jon Duckett" ],
    "language": "ar",
    "pubDate": "2008-01-01",
    "publisher": "مكتبة",
    "coverImage": "https://example.com/images/learning_web_design_cover.jpg",
    // etc.
}

Each of the above is a data field in a database somewhere. There is even information about what language the book is in: ("language": "ar").

A well-internationalized catalog would include metadata in addition to what is shown above. That is, for each of the fields containing natural language text, such as the title and authors fields, there should be language and base direction information stored as metadata. (There may be other values as well, such as pronunciation metadata for sorting East Asian language information.) These metadata values are used by consumers of the data to influence the processing and enable the display of the items in a variety of ways. As the JSON data structure provides no place to store or exchange these values, it is more difficult to construct internationalized applications.
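For illustration, the entry above might carry its natural language fields as value objects with language and direction metadata. The member names used here ("value", "lang", "dir") follow the Localizable pattern; the exact shape is a design choice of the serialization agreement, shown as a Python dict so it can be serialized to JSON:

```python
import json

# Hypothetical well-internationalized catalog entry: each natural
# language field is an object carrying its own metadata rather than
# a bare string.
entry = {
    "id": "1118008189",
    "title": {
        "value": "HTML و CSS: تصميم و إنشاء مواقع الويب",
        "lang": "ar",
        "dir": "rtl",
    },
    "authors": [
        {"value": "Jon Duckett", "lang": "en", "dir": "ltr"}
    ],
    "language": "ar",
}

print(json.dumps(entry, ensure_ascii=False, indent=2))
```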

One work-around might be to encode the values using a mix of HTML and Unicode bidi controls, so that a data value might look like one of the following:

// following examples are NOT recommended
// contains HTML markup
"title": "<span lang='ar' dir='rtl'>HTML و CSS: تصميم و إنشاء مواقع الويب</span>",
// contains LRM as first character
"authors": [ "\u200eJon Duckett" ], 

But JSON is a data interchange format: the content might not end up with the title field being displayed in an HTML context. The JSON above might very well be used to populate, say, a local data store which uses native controls to show the title; these controls will treat the HTML as string contents. Producers and consumers of the data might not expect to introspect the data in order to supply or remove the extra data or to expose it as metadata. Most JSON libraries don't know anything about the structure of the content that they are serializing. Producers want to generate the JSON file directly from a local data store, such as a database. Consumers want to store or retrieve the value for use without additional consideration of the content of each string. In addition, either producers or consumers can have other considerations, such as field length restrictions, that are affected by the insertion of additional controls or markup. Each of these considerations places a special burden on implementers to create arbitrary means of serializing, deserializing, managing, and exchanging the necessary metadata, with interoperability as a casualty along the way.

(As an aside, note that the markup shown in the above example is actually needed to make the title as well as the inserted markup display correctly in the browser.)

Isn't Unicode Enough?

[[!Unicode]] and its character encodings (such as UTF-8) are key elements of the Web and its formats. They provide the ability to encode and exchange text in any language consistently throughout the Internet. However, Unicode by itself does not guarantee perfect presentation and processing of natural language text, even though it does guarantee perfect interchange.

Several features of Unicode are sometimes suggested as part of the solution to providing language and direction metadata. Specifically, Unicode bidi controls are suggested for handling direction metadata. In addition, there are "tag" characters in the U+E0000 block of Unicode originally intended for use as language tags (although this use is now deprecated).

There are a variety of reasons why the addition of characters to data in an interchange format is not a good idea. These include the difficulty of entering and managing invisible control characters, the risk that intermediate processing strips or normalizes them away, interference with operations such as string matching, searching, and length calculation, and, most importantly, the fact that doing so alters the data itself.

This last consideration is important to call out: document formats are often built and serialized using several layers of code. Libraries, such as general purpose JSON libraries, are expected to store and retrieve faithfully the data that they are passed. Higher-level implementations also generally concern themselves with faithful serialization and de-serialization of the values that they are passed. Any process that alters the data itself introduces variability that is undesirable. For example, consider an application's unit test that checks if the string returned from the document is identical to the one in the data catalog used to generate the document. If bidi controls, HTML markup, or Unicode language tags have been inserted, removed, or changed, the strings might not compare as equal, even though they would be expected to be the same.
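The fragility described above is easy to demonstrate. In this sketch, a producer "helpfully" prepends a left-to-right mark to force direction, and a round-trip comparison fails:

```python
# Inserting an LRM mark changes the data itself: the round-tripped
# string no longer compares equal to the catalog original, even
# though the two render identically.
original = "Jon Duckett"
transmitted = "\u200e" + original          # U+200E LEFT-TO-RIGHT MARK

assert transmitted != original             # the unit test now fails
assert len(transmitted) == len(original) + 1
print(repr(transmitted))                   # the difference is invisible when rendered
```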

Best Practices for Communicating Language and Direction

This section contains the Best Practices as identified by the Internationalization Working Group. [[!RFC2119]] keywords have their usual meaning.

The main issue is how a producer of a string knows how to encode, and a consumer of the string knows how to find and interpret, the language-related features that ought to be used for that string when it is eventually processed or displayed to the user. This section describes the current best practices, as well as several alternatives that were considered (with reasons why they are not considered best practice).

The TAG and I18N WG are currently discussing what the best practice recommendations should be. This subsection represents our understanding currently.

Use metadata to indicate the language of each natural language string field.

Use metadata to indicate the base direction of each natural language string field.

For consistency between specifications and implementations, the Localizable structure is RECOMMENDED for defining natural language string fields in JSON based file formats.

The use of metadata for supplying both the language and base direction of natural language string fields ensures that the necessary information is present, can be supplied and extracted with the minimal amount of processing, and does not require producers or consumers to scan the string or alter the data.

The use of metadata for indicating base direction is preferred because it avoids requiring the consumer to infer the direction using methods such as first-strong detection, or methods which require modification of the data itself (such as the insertion of RLM/LRM markers or bidirectional controls).

Specifications and document formats MAY define a field to provide the default language for a given document.

Specifications and document formats MAY define a field to provide the default direction for a given document.

Document level defaults, when combined with per-field metadata, can reduce the overall complexity of a given document instance, since the language and direction values don't have to be repeated across many fields.

Specifications MUST NOT assume that a document-level default is sufficient.

The name @language is RECOMMENDED as the name of the document-level default language value.

The name @dir is RECOMMENDED as the name of the document-level default direction value.

Interoperability is enhanced when specifications all use the same attribute name for the document-level default language and document-level default base direction.
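Under these names, a document instance might look like the following sketch, where the document-level defaults apply to every field that does not override them. The field shapes are illustrative, not normative:

```python
import json

doc = {
    "@language": "ar",                     # document-level default language
    "@dir": "rtl",                         # document-level default base direction
    "title": "HTML و CSS: تصميم و إنشاء مواقع الويب",  # inherits ar / rtl
    "authors": [
        {"value": "Jon Duckett", "lang": "en", "dir": "ltr"}  # per-field override
    ],
}

print(json.dumps(doc, ensure_ascii=False, indent=2))
```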

If metadata is not available and cannot otherwise be provided, a base direction MAY be inferred from available language metadata.

The script subtag of a language tag (or the "likely" script subtag based on [[!BCP47]] and [[!LDML]]) can sometimes be used to provide a base direction when other data is not available. Note that using language information is a "last resort"; specifications SHOULD NOT use it as the primary way of indicating direction: make the effort to provide metadata.
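As a sketch, such a last-resort inference might look like this. The likely-script data below is a tiny illustrative excerpt, not the full [[LDML]] likely-subtags table:

```python
RTL_SCRIPTS = {"Arab", "Hebr", "Thaa", "Syrc", "Nkoo", "Adlm"}

# Tiny illustrative excerpt of CLDR "likely subtags" data.
LIKELY_SCRIPT = {"ar": "Arab", "he": "Hebr", "fa": "Arab",
                 "ur": "Arab", "dv": "Thaa", "en": "Latn", "ja": "Jpan"}

def direction_from_language(tag):
    """Last-resort base direction inferred from a BCP 47 language tag."""
    subtags = tag.split("-")
    # An explicit 4-letter script subtag wins over likely-subtags data.
    script = next((s.title() for s in subtags[1:]
                   if len(s) == 4 and s.isalpha()), None)
    if script is None:
        script = LIKELY_SCRIPT.get(subtags[0].lower())
    return "rtl" if script in RTL_SCRIPTS else "ltr"

print(direction_from_language("ar"))        # rtl
print(direction_from_language("az-Arab"))   # rtl (explicit script subtag)
print(direction_from_language("en-US"))     # ltr
```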

Specifications MUST NOT require the production or use of paired bidi controls.

Another way to say this is: do not require implementations to modify data passing through them. Unicode bidi control characters might be found in a particular piece of string content, where the producer or data source has used them to make the text display properly. That is, they might already be part of the data. Implementations should not disturb any controls that they find—but they shouldn't be required to produce additional controls on their own.

Requirements and Use Cases

This section of the document describes in depth the need for language and direction metadata and various use cases helpful in understanding the best practices and alternatives listed above.

Identifying the Language of Content

Definitions

Language metadata typically indicates the intended linguistic audience or user of the resource as a whole, and it's possible to imagine that this could, for a multilingual resource, involve a property value that is a list of languages. A property that is about language metadata may have more than one value, since it aims to describe all potential users of the information.

The text-processing language is the language of a particular range of text (which could be a whole resource or just part of it). A property that represents the text-processing language needs to have a single value, because it describes the text content in such a way that tools such as spell-checkers, default font applicators, hyphenation and line breakers, case converters, voice browsers, and other language-sensitive applications know which set of rules or resources to apply to a specific range of text. Such applications generally need an unambiguous statement about the language they are working on.

Language Tagging Use Cases

Kensuke is reading an old Tibetan manuscript from the Dunhuang collection. The tool he is using to read the manuscript has access to annotations created by scholars working in the various languages of the International Dunhuang Project, who are commenting on the text. The section of the manuscript he is currently looking at has commentaries by people writing in Chinese, Japanese, and Russian. Each of these commentaries is stored in a separate annotation, but the annotations point to the same point in the target document. Each commentary is mainly written in the language of the scholar, but may contain excerpts from the manuscript and other sources written in Tibetan, as well as quoted text in Chinese and English. Some commentaries may contain parallel annotations, each in a different language. For example, there are some with the same text translated into Japanese, Chinese, and Tibetan.

Kensuke speaks Japanese, so he generally wants to be presented with the Japanese commentary.

Capturing the language of the audience

The annotations containing the Japanese commentary have a language property set to "ja" (Japanese). The tool he is using knows that he wants to read the Japanese commentaries, and it uses this information to select and present to him the text contained in that body. This is language information being used as metadata about the intended audience – it indicates to the application doing the retrieval that the intended consumer of the information wants Japanese.

Some of the annotations contain text in more than one language. For example, there are several with commentary in Chinese, Japanese, and Tibetan. For these annotations, it's appropriate to set the language property to "ja,zh,bo" – indicating that Japanese, Chinese, and Tibetan readers may want to find it.

The language tagging that is happening here is likely to be at the resource level, rather than the string level. It's possible, however, that the text-processing language for strings inside the resource may be assumed by looking at the resource level language tag – but only if it is a single language tag. If the tag contains "ja,zh,bo" it's not clear which strings are in Japanese, which are in Chinese, and which are in Tibetan.

Capturing the text-processing language

Having identified the relevant annotation text to present to Kensuke, his application has to then display it so that he can read it. It's important to apply the correct font to the text. In the following example, the first line is labeled ja (Japanese) and the second zh-Hant (Traditional Chinese). The characters on both lines are the same code points, but they demonstrate systematic differences in how those and similar code points are rendered in Japanese vs. Chinese fonts. It's important to associate the right forms with the right language; otherwise you can make the reader uncomfortable or possibly unhappy.

雪, 刃, 直, 令, 垔

So, it's important to apply a Japanese font to the Japanese text that Kensuke is reading. There are also language-specific differences in the way text is wrapped at the end of a line. For these reasons we need to identify the actual language of the text to which the font or the wrapping algorithm will be applied.

Another consideration that might apply is the use of text-to-speech. A voice browser will need to know whether to use Japanese or Chinese pronunciations, voices, and dictionaries for the ideographic characters contained in the annotation body text.

Various other text rendering or analysis tools need to know the language of the text they are dealing with. Many different types of text processing depend on information about the language of the content in order to provide the proper processing or results and this goes beyond mere presentation of the text. For example, if Kensuke wanted to search for an annotation, the application might provide a full text search capability. In order to index the words in the annotations, the application would need to split the text according to word boundaries. In Japanese and Chinese, which do not use spaces in-between words, this often involves using dictionaries and heuristics that are language specific.

We also need a way to indicate the change of language to Chinese and Tibetan later in the commentary for some annotations, so that appropriate fonts and wrapping algorithms can be applied there.

Additional Requirements for Localization

Having viewed the commentaries he is interested in, Kensuke realizes that he needs another reference work, but he's not sure of the catalog number. He uses an application for searching out catalog entries. This application is written in JavaScript and can be switched between several languages, according to the user preference. One way to accomplish this would be to reload the application's user interface from the server each time the user selects a new language. However, because this application is relatively small, the developer has elected to package all of the translations with the JavaScript (there are several open source projects that allow runtime selection of locale). Similarly, the catalog search service sends records back in all of the available languages, rather than pre-selecting according to the user's current language preference.

The original example shows a data record available in a single language. But some applications, such as the catalog search tool and its supporting service, might need the ability to send multiple languages for the same field, such as when localizing an application or when multilingual data is available. This is particularly true in cases like this, when the producer needs to support consumers that perform their own language negotiation or when the consumer cannot know which language or languages will be selected for display.

Serialization agreements to support this therefore need to represent several different language variations of the same field. For instance, in the example above the values title or description might each have translations available for display to users who speak a language other than English. Or an application might have localized strings that the consumer can select at runtime. In some cases, all language variations might be shown to the user. In other cases, the different language values might be matched to user preferences as part of language negotiation to select the most appropriate language to show.
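One possible shape for such an agreement is sketched below, keyed by language tag. This language-map form is illustrative (similar in spirit to JSON-LD language maps), and both variants of the title are supplied by a hypothetical producer:

```python
# Hypothetical record carrying several language variants of one field.
record = {
    "title": {
        "ar": {"value": "HTML و CSS: تصميم و إنشاء مواقع الويب", "dir": "rtl"},
        "en": {"value": "HTML & CSS: Design and Build Websites", "dir": "ltr"},
    }
}

def select_title(record, preferred):
    """Pick the best available variant for the user's preferred languages."""
    variants = record["title"]
    for lang in preferred:
        if lang in variants:
            return lang, variants[lang]
    return next(iter(variants.items()))    # fall back to the first variant

lang, title = select_title(record, ["ja", "en"])
print(lang, title["value"])
```

This keeps language negotiation on the consumer side, which matches the case where the producer cannot know in advance which language will be displayed.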

When multiple language representations are possible, a serialization might provide a means (defined in the specification for that document format) for setting a default value for language or direction for the whole of the document. This allows the serialized document to omit language and direction metadata from individual fields in cases where they match the default.

Identifying the Base Direction of Content

In order for a consumer to correctly display bidirectional text, such as that in the following use cases, there must be a way for the consumer to determine the required base direction for each string. It is not enough to rely on the Unicode Bidirectional Algorithm to solve these issues. What is needed is a way to establish the overall directional context in which the string will be displayed (which is what 'base direction' means).

These use cases illustrate situations where a failure to apply the necessary base direction creates a problem.

Final punctuation

This use case consists of a string containing Hebrew text followed by punctuation – in this case an exclamation mark. The characters in this string are shown here in the order in which they are stored in memory.

"בינלאומי!"

If the string is dropped into an LTR context, it will display like this, which is incorrect – the exclamation mark is on the wrong side:

בינלאומי!

Dropped into an RTL context, this will be the result, which is correct:

בינלאומי!

The Hebrew characters are reversed by applying the Unicode Bidirectional Algorithm (UBA). However, in an LTR context the UBA cannot make the exclamation mark appear to the left of the Hebrew text, where it belongs, unless the base direction is set to RTL around it.

Initial Latin

In this case the Hebrew word is preceded by some Latin text (here a hashtag). The characters are shown in the order in which they are stored in memory.

"bidi בינלאומי"

If the string is dropped into an LTR context, it will display like this, which is incorrect – the word 'bidi' should be to the right:

bidi בינלאומי

Dropped into an RTL context, this will be the result, which is correct:

bidi בינלאומי

The Hebrew characters are reversed by applying the Unicode Bidirectional Algorithm (UBA). However, in an LTR context the UBA cannot make the word 'bidi' appear to the right of the Hebrew text, where it belongs, unless the base direction is set to RTL around it.

This has an additional complication. Often, applications will test the first strong character in the string in order to guess the base direction that needs to be applied. In this case, that heuristic will produce the wrong result.

Notice how our original example demonstrates this. The title of the book was displayed in an LTR context like this:

HTML و CSS: تصميم و إنشاء مواقع الويب

However, the title is not displayed properly. The first word in the title is "HTML" and it should show on the right side, like this:

HTML و CSS: تصميم و إنشاء مواقع الويب
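The first-strong heuristic's failure on this title can be demonstrated directly. A minimal sketch using Python's unicodedata module (the function name is illustrative):

```python
import unicodedata

def first_strong_direction(text, default="ltr"):
    """Guess base direction from the first strong directional character."""
    for ch in text:
        bidi = unicodedata.bidirectional(ch)
        if bidi == "L":                    # strong left-to-right
            return "ltr"
        if bidi in ("R", "AL"):            # strong right-to-left (incl. Arabic)
            return "rtl"
    return default                         # no strong character found

# The Arabic book title starts with the Latin word "HTML", so the
# heuristic guesses LTR even though the correct base direction is RTL.
title = "HTML و CSS: تصميم و إنشاء مواقع الويب"
print(first_strong_direction(title))  # ltr — the wrong answer for this string
```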

Bidirectional text ordering

In this case the string contains three words with different directional properties. Here are the characters in the order in which they are stored in memory.

"one שתיים three"

If the string is dropped into an LTR context, it will display like this, which is correct:

one שתיים three

Dropped into an RTL context, this will be the result, which is incorrect – the order of the items has changed:

one שתיים three

This can be much worse when combined with punctuation or markup, as in this example from an educational context (inserted into an RTL context). The sequence 'one שתיים three' should just be wrapped in span markup:

<span>one שתיים three</span>

Interpreted HTML

The characters in this string are shown in the order in which they are stored in memory.

"<span dir='ltr'>one שתיים three</span>"

This use case is for applications that will parse the string and convert any HTML markup to the DOM. In this case, the text should be rendered correctly in an HTML context because the dir attribute indicates the base direction to be applied within the markup. (It also applies bidi isolation to the text in browsers that fully support bidi markup, avoiding any spill-over effects.) It relies, however, on a system where the consumer expects to receive HTML, and knows how to handle bidi markup.

It also requires the producer to take explicit action to identify the appropriate base direction and set up the required markup to indicate that.

Neutral LTR text

The text in this use case could be a phone number, product catalog number, MAC address, etc. The characters in this string are shown in the order in which they are stored in memory.

"123 456 789"

If the string is dropped into an LTR context, it will display like this, which is correct:

123 456 789

Dropped into an RTL context, this will be the result, which is incorrect – the sequencing is wrong, and this may not even be apparent to the reader:

123 456 789

When presented to a user, the order of the numbers must remain the same even when the directional context of the surrounding text is RTL. There are no strong directional characters in this string, and the need to preserve a strong LTR base direction is more to do with the type of information in the string than with the content.

Spill-over effects

A common use for strings is to provide data that is inserted into a page or user interface at runtime. Consider a scenario where, in an LTR application environment, you are displaying book names and the number of reviews each book has received. The display should produce something ordered like this:

$title - $numReviews reviews

Then you insert a book with a title like that in the original example. You would expect to see this:

HTML: تصميم و إنشاء مواقع الويب - 4 reviews

What you would actually see is this:

HTML: تصميم و إنشاء مواقع الويب - 4 reviews

This problem is caused by spillover effects as the Unicode bidirectional algorithm operates on the text inside and outside the inserted string without making any distinction between the two.

The solution to this problem is called bidi isolation. The title needs to be directionally isolated from the rest of the text.

What consumers need to do

Given the use cases in this section, it will be clear that a consumer cannot simply insert a string into a target location without some additional work or preparation taking place: first, to establish the appropriate base direction for the string being inserted, and second, to apply bidi isolation around the string.

This requires the presence of markup or Unicode formatting controls around the string. If the string's base direction is opposite that into which it is being inserted, the markup or control codes need to tightly wrap the string. Strings that are inserted adjacent to each other all need to be individually wrapped in order to avoid the spillover issues we saw in the previous section.

[[HTML5]] provides base direction controls and isolation for any inline element when the dir attribute is used, or when the bdi element is used. When inserting strings into plain text environments, isolating Unicode formatting characters need to be used. (Unfortunately, support for the isolating characters, which the Unicode Standard recommends as the default for plain text/non-markup applications, is still not universal.)

The trick is to ensure that the direction information provided by the markup or control characters reflects the base direction of the string.

Approaches Considered for Identifying the Base Direction

The fundamental problem for bidirectional text values is how a consumer of a string will know what base direction should be used for that string when it is eventually displayed to a user. Note that some of these approaches for identifying or estimating the base direction have utility in specific applications and are in use in different specifications such as [[HTML5]]. The issue here is which are appropriate to adopt generally and specify for use as a best practice in document formats.

First-strong property detection (alone)

This approach is NOT recommended.

This section looks at the use of first-strong detection as the sole method for identifying base direction for a string.

How it works

A producer doesn't need to do anything.

The string is stored as it is.

Consumers must look for the first character in the string with a strong Unicode directional property, and set the base direction to match it. They then take appropriate action to ensure that the string will be displayed as needed. This is not quite so simple as it may appear, for the following reasons:

  1. Characters at the start of string without a strong direction (eg. punctuation, numbers, etc) and isolated sequences (ie. sequences of characters surrounded by RLI/LRI/FSI...PDI formatting characters) within a string must be skipped in order to find the first strong character.
  2. The detection algorithm needs to be able to handle markup at the start of the string. It needs to be able to tell whether the markup is just string text, or whether the markup needs to be parsed in the target location – in which case it must understand the markup, and understand any direction-related information that is carried in the markup.
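A minimal sketch of what such a first-strong detector might look like for plain-text strings (markup handling, as described above, would require additional logic). The function name and isolate-skipping approach are illustrative, using Python's Unicode bidirectional categories:

```python
# First-strong base direction detection for plain text, skipping
# bidi-isolated sequences as required by the notes above.
import unicodedata

ISOLATE_INITIATORS = {"\u2066", "\u2067", "\u2068"}  # LRI, RLI, FSI
POP_ISOLATE = "\u2069"                               # PDI

def first_strong_direction(text, default="ltr"):
    """Return 'ltr' or 'rtl' based on the first strong character,
    skipping isolated sequences; fall back to a default."""
    depth = 0  # nesting depth of isolated sequences
    for ch in text:
        if ch in ISOLATE_INITIATORS:
            depth += 1
        elif ch == POP_ISOLATE:
            depth = max(0, depth - 1)
        elif depth == 0:
            cat = unicodedata.bidirectional(ch)
            if cat == "L":
                return "ltr"
            if cat in ("R", "AL"):
                return "rtl"
    return default  # no strong character found (eg. "123 456 789")

print(first_strong_direction("\u0645\u0631\u062d\u0628\u0627 world"))  # rtl
print(first_strong_direction("123 456 789"))                           # ltr (default)
print(first_strong_direction("#tag \u0645\u0631\u062d\u0628\u0627"))   # ltr - wrong for an Arabic tweet
```

The last call illustrates the hashtag problem discussed below: the first strong character ('t') is Latin, so the heuristic reports LTR even though the string as a whole should be RTL.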

Advantages

Where it is reliable, information about direction can be obtained without any changes to the string, and without the agreements and structures that would be needed to support out-of-band metadata.

Issues

The main problem with this approach is that it produces the wrong result for:

  1. strings that begin with a strong character whose directionality differs from that needed for the string overall (eg. an Arabic tweet that starts with a hashtag);
  2. strings that have no strong directional character at all (such as a telephone number), which are likely to be displayed incorrectly in a RTL context;
  3. strings that begin with markup, such as <span>, since the first strong character is always going to be LTR.

In cases where the entire string starts and ends with RLI/LRI/FSI...PDI formatting characters, it is not possible to detect the first strong character by following the Unicode Bidirectional Algorithm. This is because the algorithm requires that bidi-isolated text be excluded from the detection.

If no strong directional character is found in the string, the direction should probably be assumed to be LTR, and the consumer should act on that basis. This has not been tested fully, however.

If a string contains markup that will be parsed by the consumer as markup, there are additional problems. Any such markup at the start of the string must also be skipped when searching for the first strong directional character.

If parseable markup in the string contains information about the intended direction of the string (for example, a dir attribute with the value rtl in HTML), that information should be used rather than relying on first-strong heuristics. This is problematic in a couple of ways: (a) it assumes that the consumer of the string understands the semantics of the markup, which may be ok if there is an agreement between all parties to use, say, HTML markup only, but would be problematic, for example, when dealing with random XML vocabularies, and (b) the consumer must be able to recognise and handle a situation where only the initial part of the string has markup, ie. the markup applies to an inline span of text rather than the string as a whole.

If, however, there is angle bracket content that is intended to be an example of markup, rather than actual markup, the markup must not be skipped – trying to display markup source code in a RTL context yields very confusing results! It isn't clear how a consumer of the string would always know the difference between examples and parseable strings.

Additional notes

Although first-strong detection is outlined in the Unicode Bidirectional Algorithm (UBA) [[!UAX9]], it is not the only possible higher-level protocol mentioned for estimating string direction. For example, Twitter and Facebook currently use different default heuristics for guessing the base direction of text – neither uses simple first-strong detection alone, and one uses a completely different method.

Metadata

This approach is recommended.

How it works

A producer ascertains the base direction of the string and adds that to a metadata field that accompanies the string when it is stored or transmitted.

There are a couple of possible approaches:

  1. Label every string for base direction.
  2. Rely on the consumer to do first-strong detection, and label only those strings which would produce the wrong result (ie. a RTL string that starts with LTR strong characters).

If storing or transmitting a set of strings at a time, it helps to have a field for the resource as a whole that sets a global, default base direction which can be inherited by all strings in the resource. Note that in addition to a global field, you still need the possibility of attaching string-specific metadata fields in cases where a string's base direction is not that of the default. The base direction set on an individual string must override the default.

Consumers would need to understand how to read the metadata sent with a string, and would need to apply first-strong heuristics in the absence of metadata.

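As a sketch of how this could be implemented, consider a serialization with a resource-level default plus per-string overrides. The field names ("dir", "value", "strings") are hypothetical, not taken from any specification:

```python
# Direction metadata: a resource-level default that individual strings
# inherit, with string-specific metadata overriding the default.
resource = {
    "dir": "rtl",                                # global default for the resource
    "strings": [
        {"value": "\u0645\u0631\u062d\u0628\u0627"},  # inherits rtl
        {"value": "123 456 789", "dir": "ltr"},  # per-string override
    ],
}

def resolve_dir(resource, item):
    """String-specific metadata overrides the resource-level default."""
    return item.get("dir", resource.get("dir", "auto"))

print([resolve_dir(resource, s) for s in resource["strings"]])  # ['rtl', 'ltr']
```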

Advantages

Passing metadata as a separate data value from the string provides a simple, effective and efficient method of communicating the intended base direction without affecting the actual content of the string.

If every string is labelled for direction, or the direction for all strings can be ascertained by applying the global setting and any string-specific deviations, it avoids the need to inspect and run heuristics on the string to determine its base direction.

Issues

Out-of-band information needs to be associated with and kept with strings. This may be problematic for some sets of string data which are not part of a defined framework.

In particular, JSON-LD doesn't allow direction to be associated with individual strings in the same way as it works for language.

Augmenting first-strong by inserting RLM/LRM markers

This approach is NOT recommended.

How it works

A producer ascertains the base direction of the string and adds a marker character (either U+200F RIGHT-TO-LEFT MARK (RLM) or U+200E LEFT-TO-RIGHT MARK (LRM)) to the beginning of the string. The marker is not functional, ie. it will not automatically apply a base direction to the string that can be used by the consumer; it is simply a marker.

There are a number of possible approaches:

  1. Add a marker to every string.
  2. Rely on the consumer to do first-strong detection, and add a marker to only those strings which would produce the wrong result (eg. a RTL string that starts with LTR strong characters).
  3. Assume a default of LTR (no marker), and apply only RLM markers.

Consumers apply first-strong heuristics to detect the base direction for the string. The RLM and LRM characters are strongly typed, directionally, and should therefore indicate the appropriate base direction.
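A sketch of the second approach above – adding a marker only when first-strong detection would otherwise give the wrong answer. The helper names are illustrative, not from any specification:

```python
# Prepend RLM or LRM only when first-strong detection would otherwise
# misidentify the producer's intended base direction.
import unicodedata

RLM, LRM = "\u200f", "\u200e"

def first_strong(text, default="ltr"):
    for ch in text:
        cat = unicodedata.bidirectional(ch)
        if cat == "L":
            return "ltr"
        if cat in ("R", "AL"):
            return "rtl"
    return default

def augment(text, intended_dir):
    if first_strong(text) == intended_dir:
        return text  # detection already yields the intended direction
    return (RLM if intended_dir == "rtl" else LRM) + text

# An Arabic string starting with a Latin-letter hashtag gets an RLM prefix:
print(repr(augment("#fun \u0639\u0631\u0628\u064a", "rtl")))
print(repr(augment("abc", "ltr")))  # unchanged
```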

Advantages

It provides a reliable way of indicating base direction, as long as the producer can reliably apply markers.

In theory, it should be easier to spot the first-strong character in strings that begin with markup, as long as the correct RLM/LRM is prepended to the string.

Issues

If the producer is a human, they could theoretically apply one of these characters when creating a string in order to signal the directionality. One problem, especially on mobile devices, is the availability or inconvenience of inputting an RLM/LRM character. Perhaps more important, because the characters are invisible and because Unicode bidi is complicated, it can be difficult for the user to know that a bidi control will be necessary (or even what it is).

Furthermore, if a person types information into, say, an HTML form and relies on the form's base direction (in a RTL page) or use of shortcut keys to make the string look correct in the form field, they would not need to add RLM/LRM to make the string 'look correct' for themselves. However, outside of that context the string would look incorrect unless an appropriate strong character was added to it. Similarly, strings scraped from a web page that has dir=rtl set in the html element would not normally have or need an RLM/LRM character at the start of the string in HTML.

Another issue with this approach is that it changes the string value and identity. This may also create problems for working with string length or pointer positions, especially if some producers add markers and others don't.

If directional information is contained in markup that will be parsed as such by the consumer (for example, dir=rtl in HTML), the producer of the string needs to understand that markup in order to set or not set an RLM/LRM character as appropriate. If the producer always adds RLM/LRM to the start of such strings, the consumer is expected to know that. If the producer relies instead on the markup being understood, the consumer is expected to understand the markup.

The producer of a string should not automatically apply RLM or LRM to the start of the string, but should test whether it is needed. For example, if there's already an RLM in the text, there is no need to add another. If the context is correctly conveyed by first-strong heuristics, there is no need to add additional characters either. Note, however, that testing whether supplementary directional information of this kind is needed is only possible if the producer has access, and knows that it has access, to the original context of the string. Many document formats are generated from data stored away from the original context. For example, the catalog of books in the original example above is disconnected from the user inputting the bidirectional text.

Paired formatting characters

This approach is NOT recommended.

How it works

A producer ascertains the base direction of the string and adds a directional formatting character (one of U+2066 LEFT-TO-RIGHT ISOLATE (LRI), U+2067 RIGHT-TO-LEFT ISOLATE (RLI), U+2068 FIRST STRONG ISOLATE (FSI), U+202A LEFT-TO-RIGHT EMBEDDING (LRE), or U+202B RIGHT-TO-LEFT EMBEDDING (RLE)) to the beginning of the string, and U+2069 POP DIRECTIONAL ISOLATE (PDI) or U+202C POP DIRECTIONAL FORMATTING (PDF) to the end.

There are a number of possible approaches:

  1. Add the formatting characters to every string.
  2. Rely on the consumer to do first-strong detection, and add the formatting characters only to those strings which would produce the wrong result (eg. a RTL string that starts with LTR strong characters).

Consumers would theoretically just insert the string in the place it will be displayed, and rely on the formatting codes to apply the base direction. However, things are not quite so simple (see below).

There are two types of paired formatting characters. The original set of controls provides the ability to add an additional level of bidirectional "embedding" to the Unicode Bidirectional Algorithm. More recently, Unicode added a complementary set of "isolating" controls. Isolating controls are used to surround a string. The inside of the string is treated as its own bidirectional sequence, and the string is protected against spill-over effects related to any surrounding text. The surrounding text treats the entire isolated string as a single unit that is ignored for bidi reordering.

Embedding controls                                 Isolating controls
Code Point  Abbreviation  Description              Code Point  Abbreviation  Description
U+202A      LRE   Left-to-Right Embedding          U+2066      LRI   Left-to-Right Isolate
U+202B      RLE   Right-to-Left Embedding          U+2067      RLI   Right-to-Left Isolate
                                                   U+2068      FSI   First Strong Isolate
U+202C      PDF   Pop Directional Formatting       U+2069      PDI   Pop Directional Isolate
                  (ending an embedding)                              (ending an isolate)

If paired formatting characters are used, they should be isolating, ie. starting with RLI, LRI, FSI, and not with RLE or LRE.
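A sketch of the wrapping operation itself, using the isolating controls as recommended above (the function name is illustrative):

```python
# Wrap a string in isolating controls: LRI/RLI for a known base
# direction, FSI when the direction is to be auto-detected.
LRI, RLI, FSI, PDI = "\u2066", "\u2067", "\u2068", "\u2069"

def isolate(text, direction=None):
    """Return the string wrapped in an isolate, terminated by PDI."""
    start = {"ltr": LRI, "rtl": RLI}.get(direction, FSI)
    return start + text + PDI

print(repr(isolate("123 456 789", "ltr")))   # '\u2066123 456 789\u2069'
print(repr(isolate("\u0639\u0631\u0628\u064a", "rtl")))
```

Note that, as the Issues below explain, the wrapped string is longer than the original and is no longer amenable to first-strong detection by an unaware consumer.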

Advantages

There are no real advantages to using this approach.

Issues

This approach is only appropriate if it is acceptable to change the value of the string. In addition to possible issues such as changed string length or pointer positions, this approach runs a real and serious risk of one of the paired characters getting lost, either through handling errors, or through text truncation, etc.

A producer and a consumer of a string would need to recognise and handle a situation where a string begins with a paired formatting character but doesn't end with it because the formatting characters only describe a part of the string.

Unicode specifies a limit to the number of embeddings that are effective, and embeddings could build up over time to exceed that limit.

Consuming applications would need to recognise and appropriately handle the isolating formatting characters. At the moment such support for RLI/LRI/FSI is far from pervasive.

This approach would disqualify the string from being amenable to UBA first-strong heuristics if used by a non-aware consumer, because the Unicode bidi algorithm is unable to ascertain the base direction for a string that starts with RLI/LRI/FSI and ends with PDI. This is because the algorithm skips over isolated sequences and treats them as a neutral character. A consumer of the string would have to take special steps, in this case, to uncover the first-strong character.

Script subtags

This approach is only recommended as a workaround for situations that prevent the use of metadata.

How it works

A producer applies language tags to strings, specifying, in particular, the script in use.

There are a number of possible approaches:

  1. Label every string for language+script.
  2. It may be reasonable to assume a default of LTR for all strings unless marked with a script subtag that indicates RTL. Any string that needs to have an overall base direction of RTL should be labelled for language by the producer using a script subtag.
  3. Set a default language for a set of strings, at a higher level, but provide a mechanism to override that default for a given string where needed.

Consumers would identify strings associated with languages that are written RTL by default, and apply a RTL base direction to those strings.
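A sketch of how a consumer might derive a base direction from the script subtag of a [[BCP47]] language tag. The list of RTL scripts here is illustrative and deliberately incomplete (see the Issues below about the script subtag registry growing over time):

```python
# Derive base direction from the script subtag of a BCP 47 tag.
# RTL_SCRIPTS is a small illustrative sample, not an exhaustive list.
RTL_SCRIPTS = {"Arab", "Hebr", "Thaa", "Syrc", "Nkoo", "Adlm", "Rohg"}

def dir_from_language_tag(tag, default="ltr"):
    """Return 'rtl' if the tag carries an RTL script subtag, else a default."""
    for subtag in tag.split("-")[1:]:          # skip the primary language subtag
        if len(subtag) == 4 and subtag.isalpha():  # script subtags are 4 letters
            return "rtl" if subtag.title() in RTL_SCRIPTS else "ltr"
    return default  # no script subtag: direction is really unknown

print(dir_from_language_tag("az-Arab"))  # rtl
print(dir_from_language_tag("az-Latn"))  # ltr
print(dir_from_language_tag("az"))       # ltr (default; actually ambiguous)
```

The last call shows the ambiguity discussed below: a bare az subtag does not identify the script, so a consumer can only guess (or apply a likely-subtags mechanism).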

The W3C Internationalization Working Group recommends that formats and applications should associate dedicated metadata relating to base text direction with strings wherever possible. In cases where that is not possible due to legacy constraints, but where language metadata can be associated with each string, it may be possible to use the language metadata as a fallback method of identifying the direction for a string (eg. JSON-LD, RDF, etc).

Note, however, that the approach outlined here is only appropriate when declaring information about the overall base direction to be associated with a string. We do not recommend generalised use of language data to indicate text direction, especially within strings, since the usage patterns are not interchangeable.

Note, secondly, that language information must use [[BCP47]] language tags, and that the portion of the language tag that carries the information is the script subtag, not the primary language subtag. For example, Azeri may be written LTR (with the Latin or Cyrillic scripts) or RTL (with the Arabic script). Therefore, the subtag az is insufficient to clarify intended direction. A language tag such as az-Arab, however, can generally be relied upon to indicate that the overall base direction should be RTL.

Unicode locale identifiers, defined as part of the [[!LDML]] specification, include a "likely subtag" mechanism that can sometimes be used to supply the script subtag when the base language tag does not include it. For example, a language tag such as ar (Arabic), implies the Arab (Arabic) script subtag, since nearly all Arabic is written in this script.

Advantages

There is no need to inspect or change the string itself.

This approach avoids the issues associated with first-strong detection when the first-strong character is not indicative of the necessary base direction for the string, and avoids issues relating to the interpretation of markup.

Note that a string that begins with markup that sets a language for the string text content (eg. <cite lang="en-Latn">) is not problematic here, since that language declaration is not expected to play into the setting of the base direction.

Issues

There are many strings which are not language-specific but which absolutely need to be wrapped by a mechanism that explicitly associates them with a particular base direction for correct consumption. For example, MAC addresses inserted into a RTL context need to be displayed with a LTR overall base direction and isolation from the surrounding text. It's not clear how to distinguish these cases from others (in a way that would be feasible when using direction metadata).

The list of script subtags may be added to in future. In that case, any subtags that indicate a default RTL direction need to be added to the lists used by the consumers of the strings.

It is perhaps possible to limit the use of script subtag metadata to situations where first-strong heuristics are expected to fail - provided that such cases can be identified, and appropriate action taken by the producer (not always reliable). Consumers would then need to use first-strong heuristics in the absence of a script subtag in order to identify the appropriate base direction. The use of script subtags should not, however, be restricted to strings that need to indicate direction; it is perfectly valid to associate a script subtag with any string.

There are some rare situations where the base direction cannot necessarily be identified from the script subtag, but these are really limited to archaic usage of text. For example, Japanese and Chinese text prior to World War 2 was often written RTL, rather than LTR. Languages such as those written using Egyptian Hieroglyphs, or the Tifinagh Berber script, could formerly be written either LTR or RTL; however, the default for scholastic research tends to be LTR.

Require bidi markup for content

This approach is NOT recommended except under agreements that expect to exclusively interchange HTML or XML markup data.

How it works

The producer ensures that all strings begin and end with markup which indicates the appropriate base direction for that string. This requires the producer to examine the string. If the string is not bounded by markup with directional information, the producer must wrap the string with elements that have the dir or its:direction [[!ITS20]] attributes, or other markup appropriate to a given XML application. If the string is bounded by markup, but it is something such as an HTML h1 element, the producer needs to introduce directional information into the existing markup, rather than simply surround the string with a span.

An example of this approach would use HTML markup, such as wrapping the string in a span element with an appropriate dir attribute.

The consumer then relies on the markup to set the base direction around the text content of the string when it is displayed. (Note that, unless additional metadata is provided, the consumer cannot remove the markup before integrating the string in the target location, because it cannot tell what markup has been added by the producer and what was already there. In general, however, such added markup is harmless.)

Advantages

The benefit for content that already uses markup is clear. The content will already provide complete markup necessary for the display and processing of the text or it can be extracted from the source page context. HTML and XML processors already know how to deal with this markup and provide ready validation.

For HTML, the dir attribute bidirectionally isolates the content from the surrounding text, which removes spillover conflicts. This reduces the work of the consumer.

Markup can also be used for string-internal directional information, something base direction on its own cannot solve.

Issues

Effectively, all levels of the implementation stack have to participate in understanding the markup (or ensure that they do no harm).

If the system uses HTML, end to end, then appropriate markup is available and its semantics are understood (ie. the dir attribute, and the bdi and bdo elements). For XML applications, however, there is no standard markup for bidi support. Such markup would need to first be defined, and then understood by both the producer and consumer.

A key downside of this approach is that many data values are just strings. As with adding Unicode tags or Unicode bidi controls, the addition of markup to strings alters the original string content. Altering the length of the content can cause problems with processes that enforce arbitrary limits or with processes that "sanitize" content by escaping HTML/XML unsafe characters such as angle brackets.

Another issue is the work and sophistication required for producers to examine strings and add markup as needed.

There are limits to the number of embeddings allowed by the Unicode Bidirectional Algorithm. Consumers would need to ensure that this limit is not exceeded when embedding strings into a wider context.

The addition of markup also requires consumers to guard against the usual problems with markup insertion, such as XSS attacks.

Create a new bidi datatype

This approach is NOT recommended.

How it works

This is similar to the idea of sending metadata with a string as discussed previously; however, the metadata is not stored in a completely separate field, or inserted into the string itself, but is associated with the string as part of the string format itself.

Some datatypes, such as [[RDF-PLAIN-LITERAL]], already exist that allow for language metadata to be serialized as part of a string value. However, these do not include a consideration for base direction. This might be addressed by defining a new datatype (or extending an existing one) that document formats could then use to serialize natural language strings that includes both language and direction metadata.

For example, an internal data value might omit language information but still include direction information, because strings of this kind must be presented in the LTR order.

Producers would need to attach the direction information to a string.

Again, it would be sensible to establish rules that expect the consumer to use first-strong heuristics for those strings that are amenable to that approach, and for the producer to only add directional information if the first-strong approach would otherwise produce the wrong result. This would greatly simplify the management of strings and the amount of data to be transmitted, because the number of strings requiring metadata is relatively small.

The consumer would look to see whether the string has metadata associated with it, in which case it would set the indicated base direction. Otherwise, it would use first-strong heuristics to determine the base direction of the string.

Advantages

If a new datatype were added to JSON to support natural language strings, then specifications could easily specify that type for use in document formats. Since the format is standardized, producers and consumers would not need to guess about direction or language information when it is encoded.

Issues

Apart from the fact that this currently doesn't work, the downside of adding a datatype is that JSON is a widely implemented format, including many ad-hoc implementations. Any new serialization form would likely break or cause interoperability problems with these existing implementations. JSON is not designed to be a "versioned" format. Any serialization form used would need to be transparent to existing JSON processors and thus could introduce unwanted data or data corruption to existing strings and formats.

Approaches Considered for Identifying the Language of Content

This section deals with different means of determining or conveying the language of string values.

Metadata

This approach is recommended.

How it works

Producers include additional fields or use data structures that include language metadata for each natural language text value in a document.

Consumers can then extract and process this data for their own needs, or forward it as necessary.

We RECOMMEND the use of the Localizable dictionary structure.
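As an illustrative sketch, a Localizable-style value pairs the string with its language (and, where supported, its direction). The member names "value", "lang" and "dir" below follow the usual convention, but check the defining specification for the exact shape:

```python
# A Localizable-style dictionary serialized as JSON: the natural
# language string travels together with its language metadata.
import json

title = {
    "value": "\u0645\u0631\u062d\u0628\u0627",  # the natural language string
    "lang": "ar",                               # BCP 47 language tag
    "dir": "rtl",                               # base direction, where supported
}

serialized = json.dumps(title, ensure_ascii=False)
restored = json.loads(serialized)
print(restored["lang"], restored["dir"])  # ar rtl
```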

Advantages

Using a consistent and well-defined data structure makes it more likely that different standards are composable and will work together seamlessly.

Metadata can be supplied without affecting the content itself.

Where metadata is unavailable, it can be omitted.

Consumers and producers do not have to introspect the data outside of their normal processing.

Issues

Serialized files utilizing the dictionary and its data values will contain additional fields and can be more difficult to read as a result.

For existing document formats, it represents a change to the values being exchanged.

Provide document-level default language

This approach is recommended when combined with Localizable strings. It is the same concept as found in the section on providing a default direction.

How it works

When a document contains a series of values that are all in the same language, specifying the language at the document level is an effective way of passing the necessary metadata.

[[JSON-LD]] includes some data structures that are partially helpful. Notably, it defines string internationalization in the form of a context-scoped @language value which can be associated with blocks of JSON or within individual objects. There is no definition for base direction, so the @context mechanism currently doesn't provide a way to identify both language and direction.
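A much-simplified model of this pattern is sketched below: a document-level @language default with a per-value override. Real JSON-LD processors apply the full expansion algorithm; this is only an illustration of the inheritance behaviour:

```python
# Document-level default language (JSON-LD style @context/@language)
# with a per-value override via an expanded value object.
doc = {
    "@context": {"@language": "ar"},                     # document default
    "title": "\u0645\u0631\u062d\u0628\u0627",           # plain string: inherits default
    "subtitle": {"@value": "Hello", "@language": "en"},  # per-value override
}

def language_of(doc, key):
    value = doc[key]
    if isinstance(value, dict):                          # expanded value object
        return value.get("@language")
    return doc.get("@context", {}).get("@language")      # fall back to default

print(language_of(doc, "title"))     # ar
print(language_of(doc, "subtitle"))  # en
```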

Advantages

Many documents consist of a series of records or values that are all in the same language. Using a document-level default makes the documents themselves less verbose.

A document-level default makes language negotiation between multiple files in different languages slightly easier, since the value is found in a consistent location.

Issues

Document-level defaults by themselves cannot handle mixed language content.

Require markup for content

This approach is NOT recommended except in special cases where the content being exchanged is expected to consist of and is restricted to literal values in a given markup language.

How it works

When a document is expected to consist of HTML or XML fragments and will be processed and displayed strictly in a markup context, the producer can use markup to convey the language of the content by wrapping strings with elements that have the lang or xml:lang attributes.

Advantages

This approach, and thus the advantages, are effectively the same as those described above for requiring bidi markup for content.

Issues

See above.

Use Unicode language tag characters

This approach is NOT recommended.

How it works

Producers insert Unicode tag characters into the data to tag strings with a language.

Consumers process the Unicode tag characters and use them to assign the language.

Unicode defines special characters that can be used as language tags. These characters are "default ignorable" and should have no visual appearance. Here is how Unicode tags are supposed to work:

Each tag is a character sequence. The sequence begins with a tag identification character. The only one currently defined is U+E0001, which identifies [[!BCP47]] language tags. Other types of tags are possible, via private agreement. The remainder of the Unicode block for forming tags mirrors the printable ASCII characters. That is, U+E0020 is space (mirroring U+0020), U+E0041 is capital A (mirroring U+0041), and so forth. Following the tag identification character, producers use each tag character to spell out a [[!BCP47]] language tag using the upper/lowercase letters, digits, and the hyphen character. A given source language tag, which is composed from ASCII letters, digits and hyphens, can be transmogrified into tags by adding 0xE0000 to each character's code point. Additional structure, such as a language priority list (see [[RFC4647]]) might be constructed using other characters such as comma or semi-colon, although Unicode does not define or even necessarily permit this.

The end of a tag's scope is signalled by the end of the string, or can be signalled explicitly using the cancel tag character U+E007F, either alone (to cancel all tags) or preceded by the language tag identification character U+E0001 (i.e. the sequence <U+E0001,U+E007F> to end only language tags).

Tags therefore have a minimum of three characters, and can easily be 12 or more. Furthermore, these characters are supplementary characters. That is, they are encoded using 4-bytes per character in UTF-8 and they are encoded as a surrogate pair (two 16-bit code units) in UTF-16. Surrogate pairs are needed to encode these characters in string types for languages such as Java and JavaScript that use UTF-16 internally. The use of surrogates makes the strings somewhat opaque. For example, U+E0020 is encoded in UTF-16 as 0xDB40.DC20 and in UTF-8 as the byte sequence 0xF3.A0.80.A0.
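The encoding described above can be sketched in a few lines; the shift-by-0xE0000 mapping follows directly from the mirroring of printable ASCII:

```python
# Encode a BCP 47 language tag as Unicode tag characters: the language
# tag identifier U+E0001 followed by each ASCII character shifted by 0xE0000.
TAG_BASE = 0xE0000
LANG_TAG_INTRO = "\U000E0001"

def encode_language_tag(tag):
    return LANG_TAG_INTRO + "".join(chr(TAG_BASE + ord(c)) for c in tag)

encoded = encode_language_tag("en-GB")
print(len(encoded))                   # 6 code points (intro + 5 tag characters)
print(len(encoded.encode("utf-8")))   # 24 bytes: 4 bytes per character in UTF-8
```

The byte counts illustrate the size penalty mentioned above: a five-character tag costs 24 bytes in UTF-8, or twelve 16-bit code units in UTF-16.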

Advantages

These language tag characters could be used as part of normal Unicode text without modification to the structure of the document format.

Issues

Unicode tag characters are strongly deprecated by the Unicode Consortium. These tag characters were intended for use in language tagging within plain text contexts and are often suggested as an alternate means of providing in-band non-markup language tagging. We are unaware of any implementations that use them as language tags.

Applications that treat the characters as unknown Unicode characters will display them as tofu (hollow box replacement characters) and may count them towards length limits, etc. So they are only useful when applications or interchange mechanisms are fully aware of them and can remove them or disregard them appropriately. Although the characters are not supposed to be displayed or have any effect on text processing, in practice they can interfere with normal text processes such as truncation, line wrapping, hyphenation, spell-checking and so forth.

By design, [[!BCP47]] language tags are intended to be ASCII case-insensitive. Applications handling Unicode tag characters would have to apply similar case-insensitivity to ensure correct identification of the language. (The Unicode data doesn't specify case conversion pairings for these characters; this complicates the processing and matching of language tag values encoded using the tag characters.)

Moreover, language tags need to be formed from valid subtags to conform to [[!BCP47]]. Valid subtags are kept in an IANA registry and new subtags are added regularly, so applications dealing with this kind of tagging would need to always check each subtag against the latest version of the registry.

The language tag characters do not allow nesting of language tags. For example, if a string contains two languages, such as a quote in French inside an English sentence, Unicode tag characters can only indicate where one language starts. To indicate nested languages, tags would need to be embedded into the text, not just prefixed to the front.

Although never implemented, other types of tags could be embedded into a string or document using Unicode tag characters. It is possible for these tags to overlap sections of text tagged with a language tag.

Finally, Unicode has recently "recycled" these characters for use in forming sub-regional flags, such as the flag of Scotland, which is made of the following sequence:

  • 🏴 [U+1F3F4 WAVING BLACK FLAG]
  • 󠁧 [U+E0067 TAG LATIN SMALL LETTER G]
  • 󠁢 [U+E0062 TAG LATIN SMALL LETTER B]
  • 󠁳 [U+E0073 TAG LATIN SMALL LETTER S]
  • 󠁣 [U+E0063 TAG LATIN SMALL LETTER C]
  • 󠁴 [U+E0074 TAG LATIN SMALL LETTER T]
  • 󠁿 [U+E007F CANCEL TAG]

The above is a new feature of emoji added in Unicode 10.0 (version 5.0 of UTS #51) in June 2017. Proper display depends on your system's support for this version.
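The flag sequence above can be reproduced programmatically. This Python sketch builds it from the base flag character and the ASCII subdivision code gbsct:

```python
# Construct the subdivision flag for Scotland (gbsct) from tag characters.
BLACK_FLAG = "\U0001F3F4"   # U+1F3F4 WAVING BLACK FLAG
CANCEL_TAG = "\U000E007F"   # U+E007F CANCEL TAG

def subdivision_flag(code):
    """Append a tag character for each letter of the code, then a cancel tag."""
    tags = "".join(chr(0xE0000 + ord(c)) for c in code)
    return BLACK_FLAG + tags + CANCEL_TAG

scotland = subdivision_flag("gbsct")
print(len(scotland))  # 7 code points: flag + five tag letters + cancel
```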

Use a language detection heuristic

This approach is NOT recommended.

How it works

Producers do nothing.

Consumers run a language detection algorithm to determine the language of the text. These algorithms are usually statistical heuristics, such as ones based on the frequency of n-grams in each language, possibly coupled with other data.
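As a rough illustration of the n-gram idea, here is a toy Python sketch; the trigram lists are tiny hand-picked samples, whereas real detectors use large trained models:

```python
# Toy language guesser: score text against per-language trigram sets.
# The trigram profiles below are invented samples, not real model data.
PROFILES = {
    "en": {"the", "and", "ing", "ion", "ent"},
    "fr": {"les", "ent", "que", "ion", "eur"},
}

def guess_language(text):
    """Return the profile language with the largest trigram overlap."""
    text = text.lower()
    trigrams = {text[i:i + 3] for i in range(len(text) - 2)}
    return max(PROFILES, key=lambda lang: len(PROFILES[lang] & trigrams))

print(guess_language("the quick brown fox"))  # en
```

Even this toy shows the weaknesses listed below: short strings yield few trigrams, and only the languages with profiles can ever be detected.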

Advantages

There are no fundamental advantages to this approach.

Issues

Heuristics are more accurate when the text being scanned is longer and more representative. The language of short strings may not be detected reliably.

Language detection is limited to the languages for which one has a detector.

Inclusions, such as personal or brand names in another language or script, can throw off the detection.

Language detection tends to be slow and can be memory intensive. Simple consumers probably can't afford the complexity needed to determine the language.

Implementation Considerations

This section contains additional recommendations and considerations for specification authors and implementers.

Use Language Indexing for language negotiation

This approach is recommended.

How it works

Producers sometimes need to supply multiple language values for the same content item or data record. One use for this is language negotiation by the consumer. Language indexing from JSON-LD provides a mechanism for organizing Localizable strings into arrays with multiple languages for the same value.

A language-indexed array uses the language tag as a key within the array like this:
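The original markup example is not reproduced here; the following Python sketch shows the shape such a language-indexed structure might take, using the value and lang fields from the surrounding prose (the French entry is invented for illustration):

```python
# A language-indexed collection: each index key is a language tag, and each
# entry is a complete Localizable object ({"value": ..., "lang": ...}).
title = {
    "en": {"value": "Learning Web Design", "lang": "en"},
    "fr": {"value": "Apprendre la conception Web", "lang": "fr"},  # invented entry
}

def select(indexed, requested):
    """Naive lookup: exact match first, then primary-language fallback."""
    if requested in indexed:
        return indexed[requested]
    primary = requested.split("-")[0]
    return indexed.get(primary)

# A request for U.S. English (en-US) falls back to the "en" entry.
print(select(title, "en-US"))
```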

Notice that this format embeds the lang field both as a key in the array and inside the enclosed Localizable so that the selected or preferred value can easily be extracted as a complete JavaScript or JSON object.

For example, if the language requested were U.S. English (en-US), this format makes it easier to match and extract the best fitting title object {"value": "Learning Web Design", "lang": "en"}.

Advantages

Where language indexing is not used, as in this example, an implementation would have to iterate over a perhaps-substantial list of alternative values:

One potential advantage is that the indexed language tag can indicate the intended audience of the value separately from the language tag of the actual data value. An example of this might be the use of language ranges from [[!RFC4647]], as in the following example, where a more specific language value might be wrapped with a less-specific language tag. In this example, the content has been labeled with a specific language tag (de-DE), but is available and applicable to users who speak other variants of German, such as de-CH or de-AT:
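Sketching that situation with the same indexed shape (the German title string here is invented for illustration):

```python
# The index key is a broad language range ("de"), while the entry itself
# carries the more specific tag of the actual content (de-DE).
title = {
    "de": {"value": "Webdesign lernen", "lang": "de-DE"},  # invented value
}

# Users requesting de-CH or de-AT match the "de" index entry, and the
# entry's own lang field still records the content's precise language.
entry = title["de"]
print(entry["lang"])  # de-DE
```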

A less common example would be when a system supplies a specific value in a different ("wrong") language from the indexing language tag, perhaps because the actual translated value is missing:

Issues

The primary issue with this approach is the need to extract the indexing language tag from the content in order to generate the index value. Producers might also need to have a serialization agreement with consumers about whether the indexing language tag will be in any way canonicalized. For example, the language tag cel-gaulish is one of the [[!BCP47]] grandfathered language tags. Some implementations, such as those following the rules in [[!CLDR]], would prefer that this tag be replaced with a modern equivalent (xtg-x-cel-gaulish in this case) for the purposes of language negotiation.
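A serialization agreement might, for instance, pin down a canonicalization step such as the following sketch; the replacement mapping shown contains only the single example from the text, not a complete table:

```python
# Replace legacy (grandfathered) index keys with their modern equivalents
# before using them for language negotiation. Only the example cited in the
# text is included; a real implementation would use full registry/CLDR data.
CANONICAL = {
    "cel-gaulish": "xtg-x-cel-gaulish",
}

def canonical_index_key(tag):
    """Lowercase for ASCII case-insensitive matching, then map legacy tags."""
    tag = tag.lower()
    return CANONICAL.get(tag, tag)

print(canonical_index_key("CEL-Gaulish"))  # xtg-x-cel-gaulish
```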

Automatic production of bidi controls in producers

This approach is NOT recommended.

Unicode bidirectional controls are plain text control characters that can be used to indicate that a span of text should be "embedded" in or "isolated" from the surrounding bidirectional context. A frequent question is whether producers, such as content management systems or document formats, should generate or store these characters by default around strings that can appear in multiple contexts—in large part to try to avoid spill-over effects.

An example of this would be a set of translated data values in an application. Since the content author cannot know in advance the bidi context in which the string will be displayed, but potentially does know each string's intended base direction, pre-wrapping the text with control characters could help insulate the strings from improper display by the consumer.

How it works

Producers use local direction metadata to wrap each string with the appropriate bidirectional embedding or isolating control. For example:
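A minimal sketch of such a producer step, using the Unicode isolate controls (LRI, RLI, FSI, PDI):

```python
# Wrap a string in Unicode directional isolate controls based on metadata.
LRI = "\u2066"  # U+2066 LEFT-TO-RIGHT ISOLATE
RLI = "\u2067"  # U+2067 RIGHT-TO-LEFT ISOLATE
FSI = "\u2068"  # U+2068 FIRST STRONG ISOLATE (direction found from content)
PDI = "\u2069"  # U+2069 POP DIRECTIONAL ISOLATE

def wrap_direction(value, direction=None):
    """Return the string wrapped for its declared base direction."""
    opener = {"ltr": LRI, "rtl": RLI}.get(direction, FSI)
    return opener + value + PDI

print(wrap_direction("עברית", "rtl"))  # RLI + text + PDI
```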

Consumers can then insert the string into any bidirectional context that supports isolation and get the intended display.

HTML5 [[HTML5]] introduced isolation at the element level by default. This allows text to be inserted into an HTML context without the need for isolating controls. However, not all strings appear in an HTML context, and strings used in plain text or other display contexts are not guaranteed the isolating behavior that HTML provides.

While some strings can certainly benefit from using isolating bidi controls, consistent usage can produce layers of overhead, processing, and validation that are unnecessary. Use of these controls should be reserved for cases in which the assembly and presentation of the text depends on runtime directional determination. For example, isolating controls can be included around a variable name in a string whose contents will be determined at runtime.

Advantages

Isolating wrapped strings can be inserted into any context (that understands isolating controls) without additional processing.

Issues

For most text the controls are superfluous and contribute to storage and processing overhead.

The controls affect the length and content of the string. Operations such as string truncation or substring extraction need to keep the controls paired to avoid unintended consequences. In addition, a wrapped string will no longer compare equal to the original string.

There is a danger of multiple levels of embedding building up if applications blindly apply an additional layer of isolation or embedding.

Parking lot for more implementation considerations

This section contains stuff that should be merged into the previous section. I created a separate section temporarily so as not to produce editing conflicts.

Resource-level bidi metadata

How it works

[[JSON-LD]] includes some data structures that are partially helpful with regard to language metadata.

Here is the record used in the original example with an illustration of how, in an ideal world, a default direction could be added for the whole of the resource. The example also shows the use of a localizable string to override the default for the author field.
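The record itself is not reproduced here, but its general shape might look like the following Python sketch. The @direction key is hypothetical (the context mechanism does not actually define such a key), and the field values are placeholders:

```python
# Hypothetical resource-level defaults: "@language" is a real JSON-LD
# context key; "@direction" is invented here to illustrate what is missing.
record = {
    "@context": {
        "@language": "ar",
        "@direction": "rtl",  # hypothetical: no such context key exists
    },
    "title": "...",  # placeholder: inherits the resource-level defaults
    # A Localizable string overriding the defaults for one field:
    "author": {"value": "...", "lang": "en", "dir": "ltr"},
}
print(record["@context"]["@direction"])
```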

Issues

A major problem with this approach is that the JSON-LD context mechanism doesn't currently supply an attribute for base direction. Changes or additions would be needed to extend the mechanism to support direction metadata.

The Localizable WebIDL Dictionary

This section contains a WebIDL definition for a Localizable dictionary.

To be effective, specification authors should consistently use the same formats and data structures so that the majority of data formats are interoperable (in other words, so that data can be copied between many formats without having to apply additional processing). We recommend adoption of the Localizable WebIDL "dictionary" as the best available format for JSON-derived formats to do that.

By defining the language and direction in a WebIDL dictionary form, specifications can incorporate language and direction metadata for a given String value succinctly. Implementations can recycle the dictionary implementation straightforwardly.
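As a rough sketch of the shape implementations would handle (field names follow the prose; the details are illustrative, not the normative WebIDL definition), a Localizable value pairs a string with its language and base direction:

```python
from dataclasses import dataclass
from typing import Optional

# A sketch of the Localizable shape: a string value plus its language tag
# and base direction. Illustrative only; see the WebIDL definition for the
# normative form.
@dataclass
class Localizable:
    value: str
    lang: Optional[str] = None
    dir: Optional[str] = None  # "ltr", "rtl", or None (auto/unknown)

title = Localizable(value="Learning Web Design", lang="en", dir="ltr")
print(title.lang)
```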

Acknowledgements

The Internationalization (I18N) Working Group would like to thank the following contributors to this document: Mati Allouche, David Baron, Ivan Herman, Tobie Langel, Sangwhan Moon, Felix Sasaki, Najib Tounsi, and many others.

The following pages formed the initial basis of this document: