This document provides a checklist of internationalization-related considerations when developing a specification. Most checklist items point to detailed supporting information in other documents. Where such information does not yet exist, it can be given a temporary home in this document. The information in this document will change regularly as new content is added and existing content is modified in the light of experience and discussion.

This document provides advice to specification developers about how to incorporate requirements for international use. What is currently available here is expected to be useful immediately, but the document is still an early draft and in flux; it will grow over time as knowledge applied in reviews and discussions is crystallized into guidelines.

Introduction

Developers of specifications need advice to ensure that what they produce will work for communities around the globe.

The Internationalization (i18n) WG tries to assist working groups by reviewing specifications and engaging in discussion. Often, however, such interventions come later in the process than would be ideal, or mean that the i18n WG has to repeat the same information for each working group it interacts with.

It would be better if specification developers could access a checklist of best practices, which points to explanations, examples and rationales where developers need it. Developers would then be able to build this knowledge into their work from the earliest stages, and could thereby reduce rework needed when the i18n WG reviews their specification.

This document contains the beginnings of a checklist, and points to locations where you can find explanations, examples and rationales for the recommendations made. If no such place exists yet, that extra information will be added to this document. The document may also be used to develop and organize ideas.

The guidelines in this document are not intended to be hard and fast requirements. This document will achieve a significant part of its purpose if, where you don't understand the guidelines or disagree with them, you contact the Internationalization WG to discuss what should be done.

In this document, the term natural language is usually used to refer to the portions of a document or protocol intended for human consumption. The term localizable text is used to refer to the natural language content of formal languages, protocol syntaxes and the like, as distinct from syntactic content or user-supplied values. See the [[I18N-GLOSSARY]] for definitions of these and other terms used by the Internationalization Working Group.

Create a GitHub checklist

A checklist feature is provided with this page to help you review your spec for internationalization. The results of the review should be posted to a GitHub issue.

Follow these steps for each section that is relevant to your spec:

  1. Open the checklist by clicking on "Show the self-review checklist".
  2. For each requirement that is relevant to your spec, click on the first checkbox.
  3. For each requirement that your spec fulfills, click on the second checkbox. (Tip: To save time, clicking on the second checkbox will automatically turn on the first checkbox, too.)
  4. When finished, click on the button "Create markdown for GitHub". This will produce markdown for just the requirements that you indicated were relevant to your spec.
  5. Copy the markdown code to a comment in a GitHub issue where you are capturing the results of your self-review work. If you have already done a review using the short review checklist you should copy the results produced here to other comment fields in that issue. This keeps all the review information together. Note that you'll need to repeat this copy-paste for each of the sections that contain requirements relevant to your spec.
  6. Add clarification notes for the results by editing the markdown in the GitHub issue.
  7. Ensure that your GitHub issue has the i18n-tracker label set, so that the Internationalization WG is aware of your review results.

When and how to write an Internationalization Considerations section in your spec

See related review comments.

All additions of or changes to an Internationalization Considerations section MUST be reviewed by the Internationalization (i18n) WG.

If you create an internationalization considerations section, it MUST have the title Internationalization Considerations or Internationalization (i18n) Considerations.

Specifications are not required to include a special section or appendix describing internationalization considerations of their specification. In general, the Internationalization WG instead prefers that information about language, regional, or cultural variation, support, or adaptation appear in the body of the specification, closely associated with the relevant features.

However, there are a few cases in which you might consider providing a section like this. Consider including an internationalization considerations section when:

If you decide to create an Internationalization Considerations section, it will usually be as an appendix. However, the order and placement relative to other parts of your spec or to other appendices is up to you.

If you decide to create an Internationalization Considerations section, you need to mention it in your horizontal review request to the Internationalization WG. The review request template includes a checkbox which allows you to do this easily.

Language

Language basics

It should be possible to associate a language with any piece of localizable text or natural language content.

Where possible, there should be a way to label natural language changes in inline text.

Text is rendered or processed differently according to the language it is in. For example, screen readers need to be prompted when a language changes, and spell checkers should be language-sensitive. When rendering text, knowledge of the language is needed in order to apply correct fonts, hyphenation, line-breaking, upper/lowercase changes, and other features.

For example, ideographic characters such as 雪, 刃, 直, 令, 垔 have slight but important differences when used with Japanese vs Chinese fonts, and it's important not to apply a Chinese font to Japanese text, or vice versa, when the text is presented to a user.

Consider whether it is useful to express the [=intended linguistic audience=] of a resource, in addition to specifying the language used for text processing.

Language information for a given resource can be used with two main objectives in mind: for text-processing, or as a statement of the intended use of the resource. We will explain the difference below.

Text-processing language information

A language declaration that indicates the [=text-processing language=] for a range of text must associate a single language value with a specific range of text.

When specifying the text-processing language you are declaring the language in which a specific range of text is actually written, so that user agents or applications that manipulate the text, such as voice browsers, spell checkers, style processors, hyphenators, etc., can apply the appropriate rules to the text in question. So we are, by necessity, talking about associating a single language with a specific range of text.

It is normal to express a text-processing language as the default for processing the resource as a whole, but it may also be necessary to indicate where the language changes within the resource.

Use the HTML lang and XML xml:lang language attributes where appropriate to identify the text processing language, rather than creating a new attribute or mechanism.

To identify the text-processing language for a range of text, HTML provides the lang attribute, while XML provides xml:lang which can be used in all XML formats. It's useful to continue using those attributes for relevant markup formats, since authors recognize them, as do HTML and XML processors.
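As an illustration of how the text-processing language cascades, the following sketch (using Python's standard ElementTree on an invented document) computes the effective xml:lang for each element by inheritance:

```python
# Sketch: computing the effective text-processing language of XML content
# via xml:lang inheritance. The document content here is invented for
# illustration.
import xml.etree.ElementTree as ET

# ElementTree expands the reserved "xml" prefix to this namespace.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def effective_lang(root):
    """Yield (element, language) pairs; each element inherits xml:lang
    from its nearest ancestor unless it declares its own."""
    def walk(elem, inherited):
        lang = elem.get(XML_LANG, inherited)
        yield elem, lang
        for child in elem:
            yield from walk(child, lang)
    yield from walk(root, None)

doc = ET.fromstring(
    '<doc xml:lang="en"><p>Hello</p>'
    '<p xml:lang="fr">Bonjour</p></doc>'
)
for elem, lang in effective_lang(doc):
    print(elem.tag, lang)  # doc/en, p/en, p/fr
```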

Language metadata about the resource as a whole

It may also be useful to describe the language of a resource as a whole. This type of language declaration is called the intended linguistic audience of a resource. For example, such metadata may be used for searching, serving the right language version, classification, etc.

This type of language declaration differs from that of the text-processing declaration in that (a) the value for such declarations may be more than one language subtag, and (b) the language value declared doesn't indicate which bits of a multilingual resource are in which language.

It should be possible to associate a metadata-type language declaration (which indicates the intended use of the resource rather than the language of a specific range of text) with multiple language values.

The language(s) describing the intended use of a resource do not necessarily include every language used in a document. For example, many documents on the Web contain embedded fragments of content in different languages, even though the page is clearly aimed at speakers of one particular language. A German city-guide for Beijing may contain useful phrases in Chinese, but it is aimed at a German-speaking audience, not a Chinese one.

On the other hand, it is also possible to imagine a situation where a document contains the same or parallel content in more than one language. For example, a web page may welcome Canadian readers with French content in the left column, and the same content in English in the right-hand column. Here the document is equally targeted at speakers of both languages, so there are two audience languages. Another use case is a blog or a news page aimed at a multilingual community, where some articles on a page are in one language and some in another. In this case, it may make sense to list more than one language tag as the value of the language declaration.

Attributes that express the language of external resources should not use the HTML lang and XML xml:lang language attributes, but should use a different attribute when they represent metadata (which indicates the intended use of the resource rather than the language of a specific range of text).

Using a different attribute to indicate the language of an external resource allows the attribute to specify more than one language. It also works better if the resource pointed to is not in a single language.

This distinction can be seen in HTML in the separation of the lang and hreflang attributes. The former indicates the language of the text within the HTML page; the latter is metadata indicating the expected language of a page that is linked to.

For a longer discussion of this see xml:lang in XML document schemas. This article talks specifically about xml:lang, but the concepts are applicable to other situations.

Defining language values

See related review comments.

Values for language declarations must use BCP 47.

BCP 47 is the language tag system used by Internet and Web standards (and many other places). It defines a method of using subtags from an IANA registry to form a string which describes the language of content. The subtags in the registry are primarily based on (and maintain strict compatibility with) ISO and UN standards for identifying languages, scripts, regions, and countries. BCP 47 also forms the basis for Unicode locales.

For an overview of the key features of BCP 47, see Language tags in HTML and XML.

Refer to BCP 47, not to its constituent parts, such as RFC 5646 or RFC 4647.

The name and link BCP 47 were created specifically so that there is an unchanging reference to the definition of Tags for the Identification of Languages. RFCs 1766, 3066, and 4646 were previous (now superseded) versions. The current version of BCP 47 comprises two RFCs: 5646 and 4647.

Be specific about what level of conformance you expect for language tags: BCP 47 defines two levels of conformance, "valid" and "well-formed".

A well-formed BCP 47 language tag follows the syntax defined for a language tag: implementations check that each language tag consists of hyphen-separated subtags; each subtag has a specific length and specific content (letters, digits or specific combinations) depending on the placement in the tag. A valid BCP 47 language tag is well-formed but additionally ensures that only subtags that are listed in the IANA Subtag Registry are used. Note that the IANA Subtag Registry is frequently updated with new subtags.
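As a rough illustration of the difference, here is a deliberately simplified well-formedness check. It covers only the common langtag production and omits extensions, private use, and grandfathered tags; validity checking against the IANA registry would be a separate, stricter step:

```python
# Simplified well-formedness check for BCP 47 language tags.
# NOT a full implementation: extension, private-use, and grandfathered
# productions are omitted for brevity.
import re

WELL_FORMED = re.compile(
    r"^[a-z]{2,8}"                          # primary language subtag
    r"(-[a-z]{3}){0,3}"                     # optional extlang subtags
    r"(-[a-z]{4})?"                         # optional script, e.g. Hans
    r"(-([a-z]{2}|\d{3}))?"                 # optional region, e.g. CN or 419
    r"(-([a-z0-9]{5,8}|\d[a-z0-9]{3}))*$",  # optional variant subtags
    re.IGNORECASE,
)

def is_well_formed(tag: str) -> bool:
    """Check syntax only; says nothing about registry validity."""
    return bool(WELL_FORMED.match(tag))

print(is_well_formed("zh-Hans-CN"))  # True
print(is_well_formed("de-CH-1996"))  # True (1996 is a variant subtag)
print(is_well_formed("a-DE"))        # False: primary subtag too short
```

Note that a tag such as `en-Qaaa-ZZ` would pass this syntactic check while failing validation, since `Qaaa` and `ZZ` are private-use registry entries.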

Specifications may require implementations to check if language tags are "valid", but in most circumstances should only require that the language tags be "well-formed".

Most specifications are second-order consumers of language metadata – they are using data already provided in the document format (HTML lang, XML xml:lang, or the document format's language fields/attributes).

Generally, most specifications are concerned with selecting resources (such as spell checkers, tokenizers, fonts, etc.) or with matching (selecting which string to show, for example) and don't directly care about the content of the language tag. Invalid but well-formed tags simply don't match anything, and fallback schemes usually provide appropriate behavior.

There might be cases where a specification really wants implementation-level checking for validity. In those cases, the result of a tag failing to be valid has to be specified (should the application die, warn the user, etc.). It's also a problem that the registry is sizeable and changes over time, so each implementation is registry-version dependent. The changes over time are often minor, but real users will encounter interoperability issues if random (out of date) implementations of the specification reject language tags that have become valid at a later date.

In addition, BCP 47 has an extension mechanism which defines add-on subtag sequences. For example, one extension [[RFC6067]] (Unicode Locales, which uses the singleton -u), is commonly used for controlling the internationalization features of JavaScript (and has other uses). Validating these additional subtags is likely out of scope for most specifications.
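For illustration only, here is a sketch of how the extension sequences introduced by singletons such as `-u` might be separated from the rest of a tag. A real implementation would also handle the `x` private-use singleton fully and validate each part:

```python
# Sketch: splitting BCP 47 extension sequences (e.g. the -u Unicode
# locale extension of RFC 6067) out of a language tag. Illustrative
# only; private-use (-x-) handling is simplified.
def split_extensions(tag: str):
    """Return (base_tag, {singleton: [subtags]}) for a language tag."""
    base, extensions, current = [], {}, None
    for subtag in tag.split("-"):
        if len(subtag) == 1 and subtag.lower() != "x":
            current = subtag.lower()          # a singleton starts a sequence
            extensions[current] = []
        elif current is not None:
            extensions[current].append(subtag.lower())
        else:
            base.append(subtag)
    return "-".join(base), extensions

print(split_extensions("th-TH-u-nu-thai"))
# → ('th-TH', {'u': ['nu', 'thai']})
```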

Specifications should require content and content authors to use "valid" language tags.

Normative language regarding language tags might be different between content and implementation requirements. Specification authors need to carefully consider what conformance requirements and tests are needed for their specification and what implementations are required to do. One solution is to normatively require that "valid" language tags be used by content authors but only require implementations to check for "well-formed" language tags.

Specifications SHOULD refer to the IANA Language Subtag Registry instead of providing lists of codes extracted from ISO 639, ISO 3166, or other standards.

In the past, some of the standards used to provide subtags found in language tags were not freely or publicly available, so some specifications provided lists in order to help ensure interoperability. This is no longer necessary. As part of BCP 47, IANA maintains the language subtag registry, which is a publicly available, machine-readable list of valid subtags for use in constructing language tags. This registry is based on underlying standards, including the various parts of ISO 639 (639-1, 639-2, 639-3, etc.), ISO 15924 script codes, and ISO 3166 and UN M.49 region codes. The registry is actively maintained, stabilized, and comprehensive in ways that other lists found on the Internet might not be. Each of the subtag types is kept in sync with parent standards with the help and participation of those standards maintainers, so extracting or making your own list of codes or referring to ones found elsewhere can lead to maintenance problems or confusion.

Avoid creating a list of valid or supported language tags, language subtags, or [=locales=].

Making your own list of fully formed language tags will unnecessarily restrict the list of languages that can be used. In addition, locale data is always being expanded, so a list that describes support today will become outdated in the future. Restricting which tags or subtags are available to users conflicts with our goal of providing universal access.

Declaring language

See related review comments.

Declaring language at the resource level

Here we are talking about an independent unit of data that contains structured text. Examples may include a whole HTML page, an XML document, a JSON file, a WebVTT script, an annotation, etc.

See also

[[[#lang_values]]].

The specification should indicate how to define the default text-processing language for the resource as a whole.

It often saves trouble to identify the language, or at least the default language, of the resource as a whole in one place. For example, in an HTML file, this is done by setting the lang attribute on the html element.

Content within the resource should inherit the text-processing language declared at the resource level, unless it is specifically overridden.

Consider whether it is necessary to have separate declarations to indicate the text-processing language versus metadata about the expected use of the resource.

In many cases a resource contains text in only one language, and in many more cases the language declared as the default language for text-processing is the same as the language that describes the metadata about the resource as a whole. In such cases it makes sense to have a single declaration.

It becomes problematic, however, to use a single declaration when it refers to more than one language unless there is a way to determine which one language should be used as the text-processing default.

If there is only one language declaration for a resource, and it has more than one language tag as a value, it must be possible to identify the default text-processing language for the resource.

Establishing the language of a content block

See also

[[[#lang_values]]].

The words block and/or chunk are used here to refer to a structural component within the resource as a whole that groups content together and separates it from adjacent content. Boundaries between one block and another are equivalent to paragraph or section boundaries in text, or discrete data items inside a file.

For example, this could refer to a block or paragraph in XML or HTML, an object declaration in JSON, a cue in WebVTT, a line in a CSV file, etc. Contrast this with inline content, which describes a range within a paragraph, sentence, etc.

The interpretation of which structures defined in a spec are relevant to these requirements may require a little consideration, and will depend on the format of the data involved.

By default, blocks of content should inherit any text-processing language set for the resource as a whole.

See [[[#lang_misc]]] for guidance related to the default text-processing language information.

It should be possible to indicate a change in language for blocks of content where the language changes.

Establishing the language of inline runs

In this section we refer to information that needs to be provided for a range of characters in the middle of a paragraph or string.

See also

[[[#lang_values]]]

It should be possible to indicate language for spans of inline text where the language changes.

Where a switch in language can affect operations on the content, such as spell-checking, rendering, styling, voice production, translation, information retrieval, and so forth, it is necessary to indicate the range of text affected and identify the language of that content.

Identifying the language of strings

The information in this section is being developed in Requirements for Language and Direction Metadata in Data Formats [[STRING-META]]. That document is still being written, so these guidelines are likely to change at any time.

The exchange of data on the Web, to the degree possible, should use locale-neutral standardized formats. However, some data on the Web necessarily consists of natural language information intended for display to humans. This natural language information depends on and benefits from the presence of language and direction metadata for proper display. Along with support for Unicode, mechanisms for including and specifying the [=base direction=] and the natural language of spans of text are one of the key internationalization considerations when developing new formats and technologies for the Web.

The most basic best practice, which the Internationalization Working Group looks for in every specification, is:

For any string field containing natural language text, it MUST be possible to determine the language and string direction of that specific string. Such determination SHOULD use metadata at the string or document level and SHOULD NOT depend on heuristics.

See related review comments.

See also

[[[#bidi_strings]]].

Work on language and direction metadata for string formats is a work in progress. Specifications might need to include a note indicating the need for future adoption of metadata. Here is a prototype:

The field {fieldname} should follow the best practices found in Strings on the Web: Language and Direction Metadata [[STRING-META]]. This includes making use of any future standards which emerge regarding the reporting of string language and direction metadata.

Use field-based metadata or string datatypes to indicate the language and the [=string direction=] for individual localizable text values.

Individual data values can differ in language or direction from other values found in the same data file or document. Providing metadata values directly associated with each localizable text field allows for the metadata to be overridden appropriately and helps applications automate processing when assembling, extracting, forwarding, or otherwise processing each data field for use.

Specifications MAY define a mechanism to provide the default language and the default [=string direction=] for all strings in a given resource. However, specifications MUST NOT assume that a resource-wide default is sufficient. Even if a resource-wide setting is available, it must be possible to use string-specific metadata to override that default.

Many documents contain data in a single language. Providing a means of indicating the intended language audience, perhaps in a header, can reduce overall document size and complexity. However, the ability to override specific string values remains important, as it is always possible that some strings will not be available in the document's default language, or that a given string's base direction will not be consistent with the default direction of other localizable text values in the document as a whole.
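A minimal sketch of such a model, with invented field names, might look like this: string-level metadata overrides the resource-wide defaults, and the absence of both means the value is unknown:

```python
# Sketch of a data model in which each localizable string can carry its
# own language and base direction, falling back to resource-wide
# defaults. Field names ("lang", "dir", "value") are illustrative, not
# drawn from any particular specification.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resource:
    default_lang: Optional[str] = None   # e.g. "ar"
    default_dir: Optional[str] = None    # "ltr", "rtl", or None (unknown)

@dataclass
class LocalizableString:
    value: str
    lang: Optional[str] = None
    dir: Optional[str] = None

    def effective_lang(self, resource: Resource) -> Optional[str]:
        # String-level metadata overrides the resource-wide default;
        # in the absence of both, the language is unknown (None).
        return self.lang if self.lang is not None else resource.default_lang

    def effective_dir(self, resource: Resource) -> Optional[str]:
        return self.dir if self.dir is not None else resource.default_dir

doc = Resource(default_lang="ar", default_dir="rtl")
title = LocalizableString("HTML و CSS")                  # inherits ar / rtl
note = LocalizableString("plain English", lang="en", dir="ltr")
print(title.effective_lang(doc), note.effective_dir(doc))  # ar ltr
```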

Specify that, in the absence of other information, the default direction and default language are unknown.

Specifications SHOULD be careful to distinguish syntactic content, including user-supplied values, from localizable text.

Specifications MUST NOT treat syntactic content values as "displayable".

Specifications SHOULD NOT specify or require the use of language metadata for fields that cannot contain natural language text.

Document formats on the Web consist of text. In most cases, data values in a given document format are meant to be representative and meaningful, not just arbitrary strings. The fact that a data value consists of, for example, an English keyword does not make the data value a natural language string meant for display as text (that is, the value is not localizable text). Such data values are part of the syntactic content of the document: not only do they not require language and direction metadata, but they should not be associated with such metadata.

For string values and string fields that are not localizable text, specifications SHOULD specify that the field is non-linguistic in nature and recommend the language tag zxx ("No linguistic content") be associated with each string value.

For string values and string fields that are known to contain localizable text but for which there is no possibility of language metadata from the underlying format, specifications SHOULD specify that the language of the content is unknown and recommend the language tag und ("Undetermined") be associated with each string. Specifications MAY also allow the use of heuristics or the inference of the language from other field values where appropriate.

Some string values depend on or are defined by existing protocols or formats. Often these strings are not associated with or do not provide language or direction metadata. For example, many HTTP headers define their contents as if they were not localizable text, even when, in some cases, they contain natural language text. Consuming specifications sometimes need to take a dependency on strings of this nature or define a format that describes one of these strings. In these cases there will be no language or direction metadata for consumers to associate with the string in the specification's data structure or document format, and any metadata that the specification's data structure or document format provides (when functioning as a producer) will not be serialized through the underlying format.

Specifications SHOULD NOT use the Unicode "language tag" characters (code points U+E0000 to U+E007F) for language identification.

The Unicode "language tag" characters are deprecated for use as language tags and there are many reasons why they are a poor solution to the language metadata problem in document formats and wire protocols. Specification authors are cautioned not to repurpose these characters or try to build new mechanisms for transmitting language information based on them.

Specifications SHOULD recommend the use of language indexing when localizable strings can be supplied in multiple languages for the same value.

Producers sometimes need to supply localized values for a given content item or data record. Sometimes this is done by language negotiation between the producer and consumer. Localization then takes place in the producer using the negotiated language to select the content returned.

Other times localization of a content item is done by having the producer return multiple language representations for the item and letting the consumer choose the value to display. This latter process is called language indexing. For more information about language indexing, see Localization Considerations in [[STRING-META]].
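The consumer side of language indexing can be sketched as a lookup over parallel values keyed by language tag. The fallback logic below is deliberately naive (exact match, then primary-subtag match, then any value); BCP 47 Lookup defines a fuller procedure:

```python
# Sketch of consumer-side selection from a language-indexed value.
# Naive fallback for illustration only.
def pick_localized(values: dict, requested: str) -> str:
    if requested in values:
        return values[requested]            # exact match
    primary = requested.split("-")[0]
    for tag, value in values.items():
        if tag.split("-")[0] == primary:    # primary-subtag match
            return value
    return next(iter(values.values()))      # arbitrary last resort

title = {"en": "The Tale of Genji", "ja": "源氏物語"}
print(pick_localized(title, "ja-JP"))  # 源氏物語
print(pick_localized(title, "de"))     # falls back to the English value
```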

Language information in JSON-LD

[[JSON-LD]] provides several mechanisms for satisfying some of the best practices found in this section:

For documents that use [[JSON-LD]], use of [[JSON-LD]] @context and the built-in @language attribute is RECOMMENDED as a document level default.

Specifications SHOULD use the i18n Namespace feature for RDF literals, as defined in [[JSON-LD]] 1.1.

Where the i18n Namespace is not available or is inappropriate to use, specifications SHOULD require [[JSON-LD]] plain string literals for natural language values to provide string-specific language information.
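As a sketch (the property names are invented for illustration), a [[JSON-LD]] document-level default set via @context, with a per-string override in a value object, might look like:

```json
{
  "@context": { "@language": "ja" },
  "title": "ウェブの国際化",
  "subtitle": { "@value": "Internationalizing the Web", "@language": "en" }
}
```

In [[JSON-LD]] 1.1, value objects can also carry @direction to record the base direction of an individual string.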

Detecting & matching language

See related review comments.

Reference BCP 47 for language tag matching.

In addition to defining language tags (in RFC 5646) BCP 47 also contains an RFC on the topic of matching language tags to a [=language range=]. Just as it is most appropriate to refer to the stable identifier BCP 47 for the definition of language tags, it is best to refer to BCP 47 when referencing matching schemes found therein.

Unicode's [[CLDR]] project defines additional algorithms, rules and processes for matching language tags when used as [=locale=] identifiers.
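As an illustration, the "basic filtering" scheme defined in BCP 47 (RFC 4647) matches a tag against a language range when they are equal, or when the range is a prefix of the tag followed by a hyphen:

```python
# Sketch of "basic filtering" from BCP 47 (RFC 4647): a range matches a
# tag if they are equal (case-insensitively), or if the range is a
# prefix of the tag followed by "-". The wildcard "*" matches all tags.
def basic_filter(language_range: str, tags):
    r = language_range.lower()
    if r == "*":
        return list(tags)
    return [
        t for t in tags
        if t.lower() == r or t.lower().startswith(r + "-")
    ]

tags = ["de", "de-CH", "de-DE-1996", "fr", "de-x-private"]
print(basic_filter("de", tags))
# → ['de', 'de-CH', 'de-DE-1996', 'de-x-private']
print(basic_filter("de-CH", tags))
# → ['de-CH']
```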

Text direction

It is important to establish direction for text written or mixed with right-to-left scripts. Characters in these scripts are stored in memory in the order they are typed and pronounced – called the logical order. The Unicode Bidirectional Algorithm (UBA) provides a lot of support for automatically rendering a sequence of characters stored in logical order so that they are visually ordered as expected. Unfortunately, the UBA alone is not sufficient to correctly render bidirectional text, and additional information has to be provided about the default directional context to apply for a given sequence of characters.

Basic requirements

The basic requirements are as follows.

It must be possible to indicate [=base direction=] for each individual paragraph-level item of natural language text that will be read by someone.

A special case of the above applies to [=natural language=] string values in data structures and document formats:

For any string field containing [=natural language=] text, it MUST be possible to determine the language and [=string direction=] of that specific string. Such determination SHOULD use metadata at the string or document level and SHOULD NOT depend on heuristics.

It must be possible to indicate base direction changes for embedded runs of inline bidirectional text for all localizable text.

Annotating right-to-left text must require the minimum amount of effort for people who work natively with right-to-left scripts.

Requiring a speaker of Arabic, Divehi, Hebrew, Persian, Urdu, etc. to add markup or control characters to every paragraph or small data item they write is far too much to be manageable. Typically, the format should establish a default direction and require the user to intervene only when exceptions have to be dealt with.

Background information

In this section we try to set out some key concepts associated with text direction, so that it will be easier to understand the recommendations that follow.

Important definitions

In order to correctly display text written in a 'right-to-left' script or left-to-right text containing bidirectional elements, it is important to establish the base direction that will be used to dictate the order in which elements of the text will be displayed.

If you are not familiar with what the Unicode Bidirectional Algorithm (UBA) does and doesn't do, and why the base direction is so important, read Unicode Bidirectional Algorithm basics.

In this section, the word paragraph indicates a run of text followed by a hard line-break in plain text, but may signify different things in other situations. In CSV it equates to 'cell', so a single line of comma-separated items is actually a set of comma-separated paragraphs.  In HTML it equates to the lowest level of block element, which is often a p element, but may be things such as div, li, etc., if they only contain text and/or inline elements. In JSON, it often equates to a quoted string value, but if a string value uses markup then paragraphs are associated with block elements, and if the string value is multiple lines of plain text then each line is a paragraph.

The term metadata is used here to mean information which could be an annotation or property associated with the data, or could be markup in scenarios that allow that, or could be a higher-level protocol, etc.

Ways base direction can be set for paragraphs

There are a number of possible ways of setting the base direction.

  1. The base direction of a paragraph may be set by an application or a user applying metadata to the paragraph. Typical values for base direction may include ltr, rtl or auto.
    • The metadata may specifically indicate that heuristics should be used. Then you would expect to consider the actual characters used in order to determine the base direction. (This is what happens if you set dir=auto on an HTML element.)
    • The application may expect metadata, but there may be no such information provided. In this case you would usually expect there to be a default direction specified, and the base direction for a cell would be set to that default. The default is usually LTR. (This is what happens if you have no dir attributes in your HTML file.)
    • Where a format contains many paragraphs or chunks of information, and the language of text in all those chunks is the same, it is sometimes useful to allow a default base direction to be set for and inherited by all. This is what happens when you set the dir attribute on the html tag in HTML. Another example would be a subtitling file containing many cues, all written in Arabic; it would be best to allow the author to say at the start of the file that the default is RTL for all cue text. There should always be a way to override the direction information for a specific paragraph where needed.
  2. If the application expects no metadata to be available it should use heuristics to determine the base direction for each paragraph/cell. A typical solution, and one described by UAX 9 Unicode Bidirectional Algorithm, is to look for the first-strong character in the paragraph/cell. (This is likely to apply if you are looking at plain text that is not expected to be associated with metadata. It only happens with HTML if the direction is set to auto, since HTML specifies a default direction.)
    • Not all paragraphs using the first-strong method will have the correct base direction applied. In some cases a paragraph in Arabic, Hebrew, etc. may start with strong LTR characters. There must be a way to deal with this.
    • Where a syntactic unit contains multiple lines of plain text (for example, a multiline cue text in a subtitling file), the first-strong heuristic needs to be applied to each line separately.
    • There may be special rules that involve ignoring some sequence of characters or type of markup at the start of the paragraph before identifying the first strong character.
    • In some cases there are no strong characters in a paragraph, and the base direction can be critically important for the data to be understood correctly, eg. telephone numbers or MAC addresses. There needs to be a way to resort to an appropriate default for these cases.
  3. Whether or not any metadata is specified, if a paragraph contains a string that starts with one of the Unicode bidi control characters RLI, LRI, FSI, LRE, RLE, LRO, or RLO and ends with PDF/PDI, these characters will determine the base direction for the contained string. These characters, when placed in the content, explicitly override any previously set direction by creating an inline range and assigning a base direction to it.
    • The effect of such characters does not extend past paragraph boundaries, but the range ought to be explicitly ended using the PDF/PDI control character, especially if a paragraph end is not easily detectable by the application.
    • Because isolation is needed for bidirectional text to work properly, the Unicode Standard says that the isolating control codes RLI, LRI and FSI should be used rather than LRE or RLE. Unfortunately, those characters are still not widely supported.
    • For structural components in markup, above the paragraph level, it is not possible to use the Unicode bidi control characters to define direction for paragraphs, since these are inline controls only, and the effect is terminated by a paragraph end.
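The first-strong heuristic mentioned in the list above can be sketched in a few lines. This Python example (the function name and the fallback default are illustrative, not part of any specification) uses the Unicode bidirectional classes to guess a paragraph's base direction:

```python
import unicodedata

def first_strong_direction(text, default="ltr"):
    """Guess a paragraph's base direction from its first strong character."""
    for ch in text:
        bidi_class = unicodedata.bidirectional(ch)
        if bidi_class == "L":            # strong left-to-right
            return "ltr"
        if bidi_class in ("R", "AL"):    # strong right-to-left (Hebrew, Arabic, ...)
            return "rtl"
    # No strong character at all (eg. "123-456"): fall back to a default
    return default
```

As noted above, this guess is wrong for an RTL paragraph that happens to begin with a strong LTR character, such as a Latin-script brand name, so a format needs a way to override it.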

When capturing text input by a user it is usually necessary to understand the context in which the user was inputting the data to determine the base direction of the input. In HTML, for example, this may be set by the direction inherited from the html tag, or by the user pressing keys to set the base direction for a form field. It is then necessary to find some way of storing the information about base direction or associating it with the string when rendered. Typically, in this situation, any direction changes internal to the string being input are handled by the user and will be captured as part of the string.

Inline changes to base direction

Embedded ranges of text within a single paragraph may need to have a different base direction. For example,

"The title was '!NOITASILANOITANRETNI'."

where the span within the single quotes is in Hebrew/Arabic/Divehi, etc., and needs to have a [=RTL=] base direction, instead of the [=LTR=] base direction of the surrounding paragraph, in order to place the exclamation mark correctly.

If markup is available to the content author, it is likely to be easier and safer to use markup to indicate such inline ranges (see below). In HTML you would usually use an inline element with a dir attribute to establish the base direction for such runs of text. If you can't mark up the text, such as in HTML's title element, or any environment that handles only plain text content, you have to resort to Unicode's paired control characters to establish the base direction for such an internal range.

Furthermore, inline ranges where the base direction is changed should be [=bidi isolated=] from surrounding text, so that the [=Unicode Bidirectional Algorithm=] doesn't produce incorrect results ("[=spillover=]") due to interference across boundaries.

This means that if a content author is using Unicode control codes they should use the isolating controls RLI/LRI/FSI…PDI rather than the embedding controls RLE/LRE…PDF.
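As an illustration, applying the isolating controls programmatically is straightforward. In this Python sketch (the helper name is invented for illustration), an inline run is wrapped in the appropriate isolate so that its direction cannot spill over into the surrounding text:

```python
LRI = "\u2066"  # LEFT-TO-RIGHT ISOLATE
RLI = "\u2067"  # RIGHT-TO-LEFT ISOLATE
FSI = "\u2068"  # FIRST STRONG ISOLATE (first-strong detection inside the range)
PDI = "\u2069"  # POP DIRECTIONAL ISOLATE (ends the range)

def isolate(text, direction="auto"):
    """Wrap an inline run in Unicode isolating controls."""
    start = {"ltr": LRI, "rtl": RLI, "auto": FSI}[direction]
    return start + text + PDI

# Embed an RTL title (here a Hebrew word) in an LTR sentence without spillover:
sentence = "The title was '" + isolate("\u05e9\u05dc\u05d5\u05dd", "rtl") + "'."
```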

Problems with control characters

Reasons to avoid relying on control characters to set direction include the following:

  1. They are invisible in most editors and are therefore difficult to work with, and can easily lead to orphans and overlapping ranges. They can be particularly difficult to manage when editing bidirectional inline text because it's hard to position the cursor in the correct place. If you ask someone who writes in a right-to-left script, you are likely to find that they dislike using control codes.
  2. Users often don't have the necessary characters available on their keyboard, or have difficulty inputting them.
  3. It is sometimes necessary to choose which to use based on context or the type of the data, and this means that a content author typically needs to select the control codes – specifying control codes in this way for all paragraphs is time-consuming and error-prone.
  4. Processors that extract parts of the data, add to it, or reuse it in combination with other text may incorrectly handle the control codes.
  5. Search and comparison algorithms should ignore these characters, but typically don't.

The last two items above may also hold for markup, but implementers often support included markup better than included control codes.

Don't expect users to add control codes at the start and end of every paragraph. That's far too much work.

Strong directional formatting characters: RLM, LRM, and ALM

A word about the Unicode characters U+200F RIGHT-TO-LEFT MARK (RLM), U+200E LEFT-TO-RIGHT MARK (LRM), and U+061C ARABIC LETTER MARK (ALM) is warranted at this point.

The first point to be clear about is that these three characters do not establish the base direction for a range of text. They are simply invisible characters with strong directional properties.

Recalling an earlier example, this means that you cannot use RLM, for example, to make the text W3C appear to the left of the Hebrew text. Only using metadata or paired control characters results in the correct display.

Of course, if you are detecting base direction using first-strong heuristics (such as dir="auto" in HTML), then inserting an RLM, ALM, or LRM can be useful for influencing the base direction detected where the text in question begins with something that would otherwise give the wrong result.

Remember that if metadata is used to set the base direction, the strong directional formatting character is ignored, unless the metadata specifically says that first-strong heuristics should be used.

Finally, a note about the use of U+061C ARABIC LETTER MARK (ALM). This character is used to influence the display of sequences of numbers in Arabic script text in cases where no Arabic letters occur before the number.
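The effect of RLM on first-strong detection can be demonstrated directly. In this Python sketch (the detection function is a stand-in for whatever heuristic a consumer actually applies), prefixing U+200F RIGHT-TO-LEFT MARK flips the guessed direction of an RTL string that happens to start with an LTR term:

```python
import unicodedata

RLM = "\u200F"  # invisible, but has strong bidirectional class R

def first_strong(text):
    """Stand-in for a consumer's first-strong base direction heuristic."""
    for ch in text:
        b = unicodedata.bidirectional(ch)
        if b == "L":
            return "ltr"
        if b in ("R", "AL"):
            return "rtl"
    return "ltr"

hebrew = "\u05e9\u05dc\u05d5\u05dd"    # a Hebrew word
s = "W3C " + hebrew                    # RTL text that starts with "W3C"
assert first_strong(s) == "ltr"        # the heuristic guesses wrong
assert first_strong(RLM + s) == "rtl"  # RLM steers the guess, stays invisible
```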

Base direction and language

Do not assume that direction can be determined from language information.

The following are all reasons you cannot use language tags to provide information about base direction:

  1. you can't produce the auto value with language tags.
  2. some languages are written with both RTL and LTR scripts.
  3. the only reliable part of the language tag that would indicate the base direction is the script subtag, but BCP47 recommends that you suppress the use of the script subtag for languages that don't usually need it, such as Hebrew (Suppress-Script: Hebr). Languages, such as Persian, that are usually written in an RTL script may be written in transcribed form, and it's not possible to guarantee that the necessary script subtag would be present to carry the directional information. In summary, you won't be able to rely on people supplying script subtags as part of the language information in order to influence direction.
  4. the incidence of use of language tags and base direction markers often don't coincide.
  5. they are not semantically equivalent.

Base direction values

See related review comments.

Values for the default base direction should include left-to-right, right-to-left, and auto.

The auto value allows automatic detection of the base direction for a piece of text. For example, the auto value of dir in HTML looks for the first strong directional character in the text, while also ignoring certain items of markup, to guess the base direction of the text. Note that automatic detection algorithms are far from perfect. First-strong detection is unable to correctly identify text that is really right-to-left but that begins with a strong LTR character. Algorithms that attempt to judge the base direction based on the contents of the text are also problematic. The best scenario is one where the base direction is known and declared.

Handling direction in markup

This section is about defining approaches to bidi handling that work with resources that organize content using markup. Some of the recommendations are different from those for handling strings on the Web (see [[[#bidi_strings]]]).

See related review comments.

Setting the default base direction

The spec should indicate how to define a default base direction for the resource as a whole, ie. set the overall base direction.

The default base direction, in the absence of other information, should be auto.

Establishing the base direction for paragraphs

The content author must be able to indicate parts of the text where the base direction changes. At the block level, this should be achieved using attributes or metadata, and should not require the content author to use Unicode control characters to control direction.

Relying on Unicode control characters to establish direction for every block is not feasible because line breaks terminate the effect of such control characters. It also makes the data much less stable, and unnecessarily difficult to manage if control characters have to appear at every point where they would be needed.

It must be possible to also set the direction for content fragments to auto. This means that the base direction will be determined by examining the content itself.

A typical approach here would be to set the direction based on the first strong directional character outside of any markup, but this is not the only possible method. The algorithm used to determine directionality when direction is set to auto should match that expected by the receiver.

The first-strong algorithm looks for the first character in the paragraph with a strong directional property according to the Unicode definitions. It then sets the base direction of the paragraph according to the direction of that character.

Note that the first-strong algorithm may incorrectly guess the direction of the paragraph when the first character is not typical of the rest of the paragraph, such as when an RTL paragraph or line starts with an LTR brand name or technical term.

For additional information about algorithms for detecting direction, see Estimation algorithms in the document where this was discussed with reference to HTML.

If the overall base direction is set to auto for plain text, the direction of content paragraphs should be determined on a paragraph by paragraph basis.

To indicate the sides of a block of text relative to the start and end of its contained lines, use 'block-start' and 'block-end', rather than 'top' and 'bottom'.

To indicate the start/end of a line you should use 'start' and 'end', or 'inline-start' and 'inline-end', rather than 'left' and 'right'.

Provide dedicated attributes for control of base direction and bidirectional overrides; do not rely on the user applying style properties to arbitrary markup to achieve bidi control.

For example, HTML has a dir attribute that is capable of managing base direction without assistance from CSS styling. XML formats should define dedicated markup to represent directional information, even if they need CSS to achieve the required display, since the text may be used in other ways.

Style sheets such as CSS may not always be used with the data, or carried with the data when it is syndicated, etc. Directional information is fundamentally important to correct display of the data, and should be associated more closely and more permanently with the markup or data.

Handling base direction for strings

The information in this section is pulled from Strings on the Web: Language and Direction Metadata. That document is still being written, so these guidelines are likely to change at any time.

See related review comments.

Provide metadata constructs that can be used to indicate the base direction of any natural language string.

Specify that consumers of strings should use heuristics, preferably based on the Unicode Standard first-strong algorithm, to detect the base direction of a string except where metadata is provided.

Where possible, define a field to indicate the default direction for all strings in a given resource or document.

Do NOT assume that creating a document-level default without the ability to change direction for any string is sufficient.

If metadata is not available due to legacy implementations and cannot otherwise be provided, specifications MAY allow a [=string direction=] to be interpolated from available language metadata.

Specifications MUST NOT require the production or use of paired bidi controls.

Setting base direction for inline or substring text

'Inline text' here has a readily understandable meaning in markup. It also applies to strings (eg. in JSON, CSV, or other plain text formats), meaning runs of characters which don't include all the characters in the string.

It must be possible to indicate spans of inline text where the base direction changes. If markup is available, this is the preferred method. Otherwise your specification must require that Unicode control characters are recognized by the receiving application, and correctly implemented.

It must be possible to also set the direction for a span of inline text to auto, which means that the base direction will be determined by examining the content itself. A typical approach here would be to set the direction based on the first strong directional character outside of any markup.

The first-strong algorithm looks for the first character in the paragraph with a strong directional property according to the Unicode definitions. It then sets the [=base direction=] of the paragraph according to the direction of that character.

Note that the first-strong algorithm may incorrectly guess the direction of the paragraph when the first character is not typical of the rest of the paragraph, such as when an [=RTL=] paragraph or line starts with an [=LTR=] brand name or technical term.

For additional information about algorithms for detecting direction, see Estimation algorithms in the document where this was discussed with reference to HTML.

If users use Unicode bidirectional control characters, the isolating RLI/LRI/FSI with PDI characters must be supported by the application and recommended (rather than RLE/LRE with PDF) by the spec.

Use of RLM/LRM should be appropriate, and expectations of what those controls can and cannot do should be clear in the spec.

The Unicode bidirectional control characters U+200F RIGHT-TO-LEFT MARK (RLM) and U+200E LEFT-TO-RIGHT MARK (LRM) are not sufficient on their own to manage bidirectional text. They cannot produce a different base direction for embedded text. For that you need to be able to indicate the start and end of the range of the embedded text. This is best done by markup, if available, or failing that using the other Unicode bidirectional controls mentioned just above.

For markup, provide dedicated attributes for control of base direction and bidirectional overrides; do not rely on the user applying style properties to arbitrary markup to achieve bidi control.

For markup, allow bidi attributes on all inline elements in markup that contain text.

For markup, provide attributes that allow the user to (a) create an isolated or embedded base direction or (b) override the bidirectional algorithm altogether. Such attributes should allow the user to set the direction to LTR, RTL, or Auto in either of these two scenarios.

Detecting & matching direction (TBD)

See related review comments.

Characters

The term character is often used to mean different things in different contexts: it can variously refer to the visual, logical, or byte-level representation of a given piece of text. This makes the term too imprecise to use when specifying algorithms, protocols, or document formats. Understanding how characters are defined and encoded in computing systems, along with the associated terminology used to make such specification unambiguous, is thus a necessary prerequisite to discussing the processing of string data.

The visual manifestation of a "character"—the shape most people mean when they say "character"—is what we call a user-perceived character. These visual building blocks are usually perceived to be a single unit of the visible text.

At their simplest, user-perceived characters are a single shape that can be tied one-to-one to the underlying computing representation. But a user-perceived character can be formed, in some scripts, from more than one character. And a given logical character can take many different shapes due to such influences as font selection, style, or the surrounding context (such as adjacent characters). In some cases, a single user-perceived character might be formed from a long sequence of logical characters. And some logical characters (so-called "combining marks") are always used in conjunction with another character.

When user-perceived characters are represented visibly (on screen or in print), they are represented by individual rendering units. This visual unit is called a [=grapheme=] (the word [=glyph=] is also used). Graphemes are the visual units found in fonts and rendering software.

Graphemes are encoded into computer systems using "logical characters". A character set is a set of logical characters: a specific collection of characters that can be used together to encode text. The most important character set is the Universal Character Set, also known as [[Unicode]]. This character set includes all of the characters used to encode text: historical or extinct writing systems as well as modern usage, private use, typesetting symbols, and other items such as emoji. Other character sets are defined subsets of Unicode. In Unicode, a 'character' is a single abstract logical unit of text. Each character in Unicode is assigned a unique integer number between 0x0000 and 0x10FFFF, which is called its code point. The term code point therefore unambiguously refers to a single logical character and its integer representation.

Specifications SHOULD explicitly define the term 'character' to mean a Unicode code point.

The relationship between code points and graphemes can be complex. In most cases, a code point sequence that forms a single grapheme should be treated as a single textual unit. For example, when cursoring across text, an entire grapheme should select together. It shouldn't be possible to cursor into the "middle" of a grapheme or delete only a part of user-perceived character. Because the relationship is not one-to-one between code points and graphemes and because the relationship can be somewhat complex, [[Unicode]] defines a specific type of grapheme: the extended grapheme cluster which most closely matches the mapping of the underlying logical character sequence to a user-perceived character. When referring to 'graphemes' in this document, we mean extended grapheme clusters (unless otherwise called out).

Another example of the complex relationship between code points and graphemes is certain emoji. The emoji character for "family" has a code point in Unicode: 👪 (U+1F46A FAMILY). It can also be formed by using a sequence of code points: U+1F468 U+200D U+1F469 U+200D U+1F466. Altering or adding other emoji characters can alter the composition of the family. For example, the sequence 👨‍👩‍👧‍👧 (U+1F468 U+200D U+1F469 U+200D U+1F467 U+200D U+1F467) results in a composed emoji character for "family: man, woman, girl, girl" on systems that support this kind of composition. Many common emoji can only be formed using sequences of code points, but should be treated as a single user-perceived character when displaying or processing the text. You wouldn't want to put a line-break in the middle of the family!
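This distinction can be observed directly. In Python, for example, iterating a string yields code points, so the ZWJ family sequence just described is five code points even though it renders as one user-perceived character:

```python
# man + ZWJ + woman + ZWJ + boy: one user-perceived "family" character
family = "\U0001F468\u200D\U0001F469\u200D\U0001F466"
code_points = [f"U+{ord(ch):04X}" for ch in family]
# code_points == ['U+1F468', 'U+200D', 'U+1F469', 'U+200D', 'U+1F466']
assert len(family) == 5   # five code points, one grapheme
```

Note that the Python standard library has no extended-grapheme-cluster segmentation; counting graphemes (rather than code points) would require a library implementing Unicode text segmentation, such as PyICU.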

Unicode code points are just abstract integer values: they are not the values actually present in the memory of the computer or serialized on the wire. When processing text, computers use an array of fixed-size integer units. One such common unit is the byte (or octet, since bytes have 8 bits per unit). There are also 16-bit, 32-bit, or other size units. In many programming languages, the unit is called a char, which suggests that strings are made of "characters". We use the term code unit to refer unambiguously to the programming and serialized representation of characters. For example, in C, a char is generally an 8-bit byte: each char is an 8-bit code unit. In Java or JavaScript, a char is a 16-bit value.
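The difference between code points and code units is easy to see by encoding a supplementary-plane character. This Python sketch counts the units in each common encoding form:

```python
cat = "\U0001F63D"                    # one code point (U+1F63D)
assert len(cat) == 1                  # Python strings count code points
assert len(cat.encode("utf-8")) == 4  # four 8-bit code units in UTF-8
assert len(cat.encode("utf-16-le")) // 2 == 2  # two 16-bit units: a surrogate pair
assert len(cat.encode("utf-32-le")) // 4 == 1  # one 32-bit unit in UTF-32
```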

A set of rules for converting code points to or from code units is called a character encoding form (or just "character encoding" for short).

Choosing a definition of 'character'

See related review comments.

The term character is used differently in a variety of contexts and often leads to confusion when used outside of these contexts. In the context of the digital representations of text, a character can be defined as a small logical unit of text. Text is then defined as sequences of characters. While such an informal definition is sufficient to create or capture a common understanding in many cases, it is also sufficiently open to create misunderstandings as soon as details start to matter. In order to write effective specifications, protocol implementations, and software for end users, it is very important to understand that these misunderstandings can occur.

This section examines some of these contexts, meanings and confusions.

See also

[[[#char_string]]].

Specifications SHOULD use specific terms, when available, instead of the general term 'character'.

Specific terms could include [=code point=], [=grapheme cluster=], typographic character unit, [=code unit=], and [=glyph=].

When specifications use the term 'character' the specifications MUST define which meaning they intend, and SHOULD explicitly define the term 'character' to mean a Unicode code point.

The developers of specifications, and the developers of software based on those specifications, are likely to be more familiar with usages of the term 'character' they have experienced and less familiar with the wide variety of usages in an international context. Furthermore, within a computing context, characters are often confused with related concepts, resulting in incomplete or inappropriate specifications and software.

Specifications, software and content MUST NOT require or depend on a one-to-one relationship between characters and units of physical storage.

Computer storage and communication rely on units of physical storage and information interchange, such as bits and bytes (8-bit units, also called octets). A frequent error in specifications and implementations is the equating of characters with units of physical storage. The mapping between characters and such units of storage is actually quite complex.

Specifications, software and content MUST NOT require or depend on a one-to-one correspondence between characters and the sounds of a language.

In some scripts, characters have a close relationship to phonemes (a phoneme is a minimally distinct sound in the context of a particular spoken language), while in others they are closely related to meanings. Even when characters (loosely) correspond to phonemes, this relationship may not be simple, and there is rarely a one-to-one correspondence between character and phoneme.

The following are examples of mismatches between the term character and units of sound:

  • In the English sentence, "They were too close to the door to close it." the same character 's' is used to represent both /s/ and /z/ phonemes.
  • In the English language, the single phoneme /k/ is written with different characters: 'c' in "cool" and 'k' in "keel".
  • In many scripts a single character may represent a sequence of phonemes, such as the syllabic characters of Japanese hiragana.
  • In many writing systems a sequence of characters may represent a single phoneme, for example 'th' and 'ng' in "thing".

Specifications, software and content MUST NOT require or depend on a one-to-one mapping between characters and units of displayed text.

Visual rendering introduces the notion of a glyph. A glyph is defined by ISO/IEC 9541-1 as "a recognizable abstract graphic symbol which is independent of a specific design". There is not a one-to-one correspondence between characters and glyphs.

A set of glyphs makes up a font. Glyphs can be construed as the basic units of organization of the visual rendering of text, just as characters are the basic unit of organization of encoded text.

See Examples of Characters, Keystrokes and Glyphs for examples of the complexities of character to glyph mapping.

Specifications and software MUST NOT require nor depend on a single keystroke resulting in a single character, nor that a single character be input with a single keystroke (even with modifiers), nor that keyboards are the same all over the world.

In keyboard input, it is not always the case that keystrokes and input characters correspond one-to-one. A limited number of keys can fit on a keyboard. Some keyboards will generate multiple characters from a single keypress. In other cases ('dead keys') a key will generate no characters, but affect the results of subsequent keypresses. Many writing systems have far too many characters to fit on a keyboard and must rely on more complex input methods, which transform keystroke sequences into character sequences. Other languages may make it necessary to input some characters with special modifier keys.

See Examples of Characters, Keystrokes and Glyphs for examples of non-trivial input.

Defining a Reference Processing Model

See also

[[[#char_ranges]]].

Textual data objects defined by protocol or format specifications MUST be in a single character encoding.

All specifications that involve processing of text MUST specify the processing of text according to the Reference Processing Model described by the rest of the recommendations in this list.

Specifications MUST define text in terms of Unicode characters, not bytes or glyphs.

For their textual data objects specifications MAY allow use of any character encoding which can be transcoded to a Unicode encoding form.

Specifications MAY choose to disallow or deprecate some character encodings and to make others mandatory. Independent of the actual character encoding, the specified behavior MUST be the same as if the processing happened as follows:

  1. The character encoding of any textual data object received by the application implementing the specification MUST be determined, and the data object MUST be interpreted as a sequence of Unicode characters. This MUST be equivalent to transcoding the data object to some Unicode encoding form, adjusting any character encoding label if necessary, and receiving it in that Unicode encoding form.
  2. All processing MUST take place on this sequence of Unicode characters.
  3. If text is output by the application, the sequence of Unicode characters MUST be encoded using a character encoding chosen among those allowed by the specification.

If a specification is such that multiple textual data objects are involved (such as an XML document referring to external parsed entities), it MAY choose to allow these data objects to be in different character encodings. In all cases, the Reference Processing Model MUST be applied to all textual data objects.
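The Reference Processing Model amounts to "decode at the boundary, process in Unicode, encode on output". A minimal Python sketch (the function name is invented and the processing step is a placeholder):

```python
def handle(data: bytes, declared_encoding: str) -> bytes:
    # Interpret the received data object as a sequence of Unicode characters
    text = data.decode(declared_encoding)
    # All processing takes place on the Unicode character sequence
    processed = text.upper()           # stand-in for the real processing
    # Output is encoded in an encoding allowed by the specification
    return processed.encode("utf-8")

result = handle("café".encode("iso-8859-1"), "iso-8859-1")
```

Because every input is normalized to Unicode before processing, the same processing code works regardless of which encoding the data object arrived in.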

Including and excluding character ranges

See related review comments.

See also

[[[#char_pua]]].

Specifications SHOULD NOT arbitrarily exclude code points from the full range of Unicode code points from U+0000 to U+10FFFF inclusive.

Specifications MUST NOT allow code points above U+10FFFF.

Specifications SHOULD NOT allow the use of codepoints reserved by Unicode for internal use.

Specifications MUST NOT allow the use of unpaired surrogate code points.

A "surrogate code point" refers here to the use of character values in the range U+D800 through U+DFFF inclusive. These code points are reserved to allow the UTF-16 character encoding to address supplementary characters. Surrogates are always used in pairs and only appear when the UTF-16 encoding is being used. A single surrogate code point is referred to as an "unpaired surrogate" and should never be used.

Specifications SHOULD exclude compatibility characters in the syntactic elements (markup, delimiters, identifiers) of the formats they define.

Specifications SHOULD allow the full range of Unicode for user-defined values.

Using the Private Use Area

See also

[[[#char_ranges]]].

Specifications MUST NOT require the use of private use area characters with particular assignments.

Specifications MUST NOT require the use of mechanisms for defining agreements of private use code points.

Specifications and implementations SHOULD NOT disallow the use of private use code points by private agreement.

Specifications MAY define markup to allow the transmission of symbols not in Unicode or to identify specific variants of Unicode characters.

Specifications SHOULD allow the inclusion of or reference to pictures and graphics where appropriate, to eliminate the need to (mis)use character-oriented mechanisms for pictures or graphics.

Choosing character encodings

See related review comments.

Specifications MUST either specify a unique character encoding, or provide character encoding identification mechanisms such that the encoding of text can be reliably identified.

When designing a new protocol, format or API, specifications SHOULD require a unique character encoding.

When basing a protocol, format, or API on a protocol, format, or API that already has rules for character encoding, specifications SHOULD use rather than change these rules.

When a unique character encoding is required, the character encoding MUST be UTF-8 or UTF-16.

The above guideline needs further consideration: UTF-16 and UTF-32 are not recommended these days. UTF-8 is the recommended encoding.

Specifications SHOULD avoid using the terms 'character set' and 'charset' to refer to a character encoding, except when the latter is used to refer to the MIME charset parameter or its IANA-registered values. The term 'character encoding', or in specific cases the terms 'character encoding form' or 'character encoding scheme', are RECOMMENDED.

If the unique encoding approach is not taken, specifications SHOULD require the use of the IANA charset registry names, and in particular the names identified in the registry as 'MIME preferred names', to designate character encodings in protocols, data formats and APIs.

The above guideline needs further consideration: the list of character encodings recommended for Web specifications is listed in the Encoding specification.

Character encodings that are not in the IANA registry SHOULD NOT be used, except by private agreement.

If an unregistered character encoding is used, the convention of using 'x-' at the beginning of the name MUST be followed.

If the unique encoding approach is not chosen, specifications MUST designate at least one of the UTF-8 and UTF-16 encoding forms of Unicode as admissible character encodings and SHOULD choose at least one of UTF-8 or UTF-16 as required encoding forms (encoding forms that MUST be supported by implementations of the specification).

Specifications that require a default encoding MUST define either UTF-8 or UTF-16 as the default, or both if they define suitable means of distinguishing them.

Identifying character encodings

Specifications MUST NOT propose the use of heuristics to determine the encoding of data.

Specifications MUST define conflict-resolution mechanisms (e.g. priorities) for cases where there is multiple or conflicting information about character encoding.

Designing character escapes

See related review comments.

Specifications should provide a mechanism for escaping characters, particularly those which are invisible or ambiguous.

It is generally recommended that character escapes be provided so that sequences that are difficult to enter or edit can be introduced using a plain text editor. Escape sequences are particularly useful for invisible or ambiguous Unicode characters, including zero-width spaces, soft hyphens, various bidi controls, the Mongolian vowel separator, etc.

For advice on use of escapes in markup, but which is mostly generalisable to other formats, see Using character escapes in markup and CSS.

Specifications SHOULD NOT invent a new escaping mechanism if an appropriate one already exists.

Here are some examples of common escaping mechanisms found on the Web or in common programming languages. The example character here is 😽U+1F63D KISSING CAT FACE WITH CLOSED EYES.

Found in                                Type                  Example        Description
HTML, XML                               Hex NCR               &#x1F63D;      Hexadecimal encoding of the Unicode code point
HTML, XML                               Decimal NCR           &#128573;      Decimal encoding of the Unicode code point
JavaScript, Ruby, Rust, [[UTS18]]       \u delimited          \u{1F63D}      Hexadecimal encoding of the Unicode code point
Perl                                    \x delimited          \x{1F63D}      Hexadecimal encoding of the Unicode code point; uses x instead of the more common u
Java, JavaScript, JSON, C, C++, Python  \u UTF-16 code units  \uD83D\uDE3D   Fixed-width hexadecimal encoding of UTF-16 code units; supplementary characters are encoded as a surrogate pair
C, C++, Python                          \U UTF-32 code units  \U0001f63d     Fixed-width hexadecimal encoding of UTF-32 code units; most often used together with \u escapes (which are more efficient for the more common BMP characters), for example \u00c0 \U0001f63d \u12fe
URLs                                    URL encode            %F0%9F%98%BD   Hexadecimal encoding of UTF-8 bytes; each byte requires three characters; each code point requires from 1 to 4 bytes
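For illustration, the relationships among these escape forms can be checked mechanically. The following Python sketch uses the same example character; `urllib.parse.quote` performs the URL (percent-)encoding shown in the last row:

```python
import urllib.parse

cat = "\U0001F63D"  # U+1F63D KISSING CAT FACE WITH CLOSED EYES

# Hex and decimal NCRs encode the same code point number:
assert ord(cat) == 0x1F63D == 128573

# The \u escapes used by Java/JSON encode the UTF-16 surrogate pair:
units = cat.encode("utf-16-be")
pair = (int.from_bytes(units[:2], "big"), int.from_bytes(units[2:], "big"))
assert pair == (0xD83D, 0xDE3D)

# URL encoding percent-escapes each UTF-8 byte:
assert urllib.parse.quote(cat) == "%F0%9F%98%BD"
```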

When choosing an escaping mechanism, note that hexadecimal is generally preferred to decimal encodings, due to the common use of hexadecimal in the Unicode Standard and its references.

The number of different ways to escape a character SHOULD be minimized (ideally to one).

Escape syntax SHOULD require either explicit end delimiters or a fixed number of characters in each character escape. Escape syntaxes where the end is determined by any character outside the set of characters admissible in the character escape itself SHOULD be avoided.

Whenever specifications define character escapes that allow the representation of characters using a number, the number MUST represent the Unicode code point of the character and SHOULD be in hexadecimal notation.

Escaped characters SHOULD be acceptable wherever their unescaped forms are; this does not preclude that syntax-significant characters, when escaped, lose their significance in the syntax. In particular, if a character is acceptable in identifiers and comments, then its escaped form should also be acceptable.

Storing text

Protocols, data formats and APIs MUST store, interchange or process text data in logical order.

Independent of whether some implementation uses logical selection or visual selection, characters selected MUST be kept in logical order in storage.

Specifications of protocols and APIs that involve selection of ranges SHOULD provide for discontiguous logical selections, at least to the extent necessary to support implementation of visual selection on screen on top of those protocols and APIs.

Defining 'string'

See also

[[[#char_indexing]]].

[[[#char_def]]].

Notwithstanding the note just above, I18N's best practices appear to be exactly opposite those in [[DESIGN-PRINCIPLES]] at the moment. The details turn out to be the same, but we need to resolve differences in guidance and wording. The issue design-principles#454 tracks this.

Unless you have a reason not to, use a string definition consistent with {{USVString}}.

Use a string definition consistent with {{DOMString}} if your specification does not process the internal value of strings and is not required to check for unpaired surrogate code points, or if your specification pertains to the [[DOM]], defines a JavaScript API or data format, or defines strings as opaque values that are not processed.

A string is a sequence of characters. Because [[UNICODE]] is fundamental to understanding and working with text, including text that uses legacy character encodings, the basic definition of a string depends on Unicode and its concept of an encoded character. Specifically:

A string is a well-formed sequence of zero or more Unicode Scalar Values.

Because there are multiple ways of working with strings, different terminology has evolved to support the needs of different specifications. Be sure to understand your specification's needs and use the most appropriate and precise terminology. On the Web, there are three types of strings:

One difference between these string types is how surrogate code points are handled. Note the difference between a code point (a value in the Unicode codespace; a code point that is a Unicode Scalar Value represents a character) and a code unit (a unit of encoding in a character encoding form).

The UTF-16 character encoding form uses 16-bit code units. Characters whose scalar values require more than 16 bits are encoded using a pair of surrogate code units: a "high surrogate" (in the range U+D800-U+DBFF) followed by a "low surrogate" (in the range U+DC00-U+DFFF). Unicode permanently reserves the code points in these ranges for this purpose, so that there is no confusion between UTF-16 code units and normal text.
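The arithmetic relating a surrogate pair to its scalar value can be sketched as follows, using U+1F63D (the example character from the escapes section) for concreteness:

```python
# Recombine a UTF-16 surrogate pair into the scalar value it encodes.
high, low = 0xD83D, 0xDE3D  # high (lead) and low (trail) surrogates
scalar = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)
assert scalar == 0x1F63D
```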

In a {{USVString}}, isolated surrogate code points are invalid, and implementations are required to replace any found in a string with the Unicode replacement character (U+FFFD REPLACEMENT CHARACTER). For strings whose most common algorithms operate on scalar values (such as percent-encoding), or for operations which can't handle surrogates in input (such as APIs that pass strings through to native platform APIs), {{USVString}} should be used.
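This replacement behavior can be simulated in Python, whose `surrogatepass` error handler is a language-specific mechanism used here only to materialize a lone surrogate for the demonstration:

```python
lone = "\uD800"  # an unpaired surrogate code point: not a scalar value

# Round-tripping through UTF-16 with "replace" mirrors USVString conversion:
usv = lone.encode("utf-16-le", errors="surrogatepass") \
          .decode("utf-16-le", errors="replace")
assert usv == "\uFFFD"  # U+FFFD REPLACEMENT CHARACTER
```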

In a {{DOMString}}, unpaired surrogate code units can appear in a string. Most string operations don’t need to interpret the code units inside of strings. Specifying {{DOMString}} means that implementations are not required to validate the contents of the string, making this the ideal string type for most data structures, formats, or APIs. The [[DOM]] and JavaScript strings use {{DOMString}} as their string type and the [[INFRA]] standard defines the term 'string' to mean a {{DOMString}}:

A string is a sequence of unsigned 16-bit integers, also known as code units.

[[INFRA]]'s use of the term code unit refers specifically to the UTF-16 character encoding's code units, rather than the more general definition of a code unit that can refer to different size values, such as bytes, in any character encoding form.

A {{ByteString}} depends on the character encoding form used to encode characters into bytes. Legacy character encodings do not have a concept of "surrogates", so there is generally no way to encode a surrogate code point. Valid UTF-8 does not permit surrogate code points: these are replaced by U+FFFD REPLACEMENT CHARACTER when encoding or decoding text in UTF-8. When converting UTF-16 to UTF-8, any surrogate pairs are transformed into the proper UTF-8 byte sequence encoding the specific scalar value.

Specifications SHOULD NOT add or define support for legacy character encodings unless there is a specific reason to do so.

Specifications SHOULD NOT define a string as a {{ByteString}} or as a sequence of bytes ('byte string'). For binary data or sequences of bytes, use {{Uint8Array}} instead.

The type {{ByteString}} defines strings as sequences of bytes (octets). Interpretation of byte strings thus requires the specification of a character encoding form. UTF-8 is the preferred encoding for wire and document formats [[ENCODING]], but there is generally no reason to specify strings in terms of the underlying byte values.

See for additional best practices.

Whitespace characters

See related review comments.

See also

[[[#markup_identifiers]]].

Whitespace characters are characters that represent horizontal or vertical space in typography. Their visual effects vary: some have no visible effect, while others represent larger, smaller, or variable amounts of space on the page.

Specifications that use the term "whitespace" SHOULD explicitly define what the term means.

Most specifications SHOULD define whitespace to mean characters with the Unicode White_Space property.

Specifications that define whitespace for use in vocabularies that are restricted to ASCII or to formats that are whitespace delimited (examples include HTML or CSS) SHOULD specify ASCII whitespace as part of their grammar.

If a specification defines "whitespace" differently from ASCII or Unicode whitespace, the specific code points MUST be listed.

Some specifications, such as ECMAScript, have provided their own definitions of whitespace, which differ from the above to meet their own specific requirements.
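As a rough illustration of how these definitions differ, the following Python sketch transcribes the Unicode White_Space set (assumed from a recent version of the Unicode Character Database; verify against the current data files before relying on it) alongside HTML's ASCII whitespace:

```python
# Code points with the Unicode White_Space property (transcribed assumption).
UNICODE_WHITE_SPACE = {
    0x0009, 0x000A, 0x000B, 0x000C, 0x000D, 0x0020, 0x0085, 0x00A0,
    0x1680, *range(0x2000, 0x200B), 0x2028, 0x2029, 0x202F, 0x205F, 0x3000,
}

# "ASCII whitespace" as defined by HTML/Infra.
ASCII_WHITESPACE = {0x0009, 0x000A, 0x000C, 0x000D, 0x0020}

assert ASCII_WHITESPACE < UNICODE_WHITE_SPACE  # a proper subset
assert 0x00A0 not in ASCII_WHITESPACE          # e.g. NBSP is excluded
```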

The following table shows how whitespace characters are defined in various specifications.

Column key: WS = white_space property; PWS = pattern_white_space property; HTML = ASCII whitespace (HTML); CSS = CSS whitespace; ES = ECMAScript; XML = XML.

                                            WS  PWS HTML CSS ES  XML
HTAB    U+0009 (horizontal tab)             ✓   ✓   ✓    ✓   ✓   ✓
LF      U+000A (line feed)                  ✓   ✓   ✓    ✓       ✓
VTAB    U+000B (vertical tab)               ✓   ✓            ✓
FF      U+000C (form feed)                  ✓   ✓   ✓        ✓
CR      U+000D (carriage return)            ✓   ✓   ✓            ✓
SP      U+0020 SPACE                        ✓   ✓   ✓    ✓   ✓   ✓
NEL     U+0085 (next line)                  ✓   ✓
NBSP    U+00A0 NO-BREAK SPACE               ✓                ✓
Ogham space  U+1680 OGHAM SPACE MARK        ✓                ✓
NQSP    U+2000 EN QUAD                      ✓                ✓
MQSP    U+2001 EM QUAD                      ✓                ✓
ENSP    U+2002 EN SPACE                     ✓                ✓
EMSP    U+2003 EM SPACE                     ✓                ✓
3/M SP  U+2004 THREE-PER-EM SPACE           ✓                ✓
4/M SP  U+2005 FOUR-PER-EM SPACE            ✓                ✓
6/M SP  U+2006 SIX-PER-EM SPACE             ✓                ✓
FSP     U+2007 FIGURE SPACE                 ✓                ✓
PSP     U+2008 PUNCTUATION SPACE            ✓                ✓
THSP    U+2009 THIN SPACE                   ✓                ✓
HSP     U+200A HAIR SPACE                   ✓                ✓
LRM     U+200E LEFT-TO-RIGHT MARK               ✓
RLM     U+200F RIGHT-TO-LEFT MARK               ✓
LSEP    U+2028 LINE SEPARATOR               ✓   ✓
PSEP    U+2029 PARAGRAPH SEPARATOR          ✓   ✓
NNBSP   U+202F NARROW NO-BREAK SPACE        ✓                ✓
MMSP    U+205F MEDIUM MATHEMATICAL SPACE    ✓                ✓
IDSP    U+3000 IDEOGRAPHIC SPACE            ✓                ✓
ZWNBSP  U+FEFF ZERO WIDTH NO-BREAK SPACE                     ✓

Some specifications use the same definition as one of the columns above and are not listed in the table. For example, WebDriver uses the white_space property and WebGPU Shading Language uses the pattern_white_space property.

Referring to Unicode characters

See related review comments.

Use U+XXXX syntax to represent Unicode code points in a specification.

The U+XXXX format is well understood when referring to Unicode code points in a specification. These are space separated when appearing in a sequence. No additional decoration is needed. Note that a code point may require four, five, or six hexadecimal digits; when fewer than four digits are needed, the number is zero-filled to four digits.
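A minimal formatting helper illustrating the zero-fill rule (the function name is invented for this sketch):

```python
def u_notation(cp: int) -> str:
    # Zero-fill to at least four hex digits; five- and six-digit code
    # points use as many digits as they need.
    return f"U+{cp:04X}"

assert u_notation(0x0041) == "U+0041"    # four digits, zero-filled
assert u_notation(0x300) == "U+0300"
assert u_notation(0x1F63D) == "U+1F63D"  # five digits, no padding
```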

Use the Unicode character name to describe specific code points.

Unicode assigns unique, immutable names to each assigned Unicode code point. Using these names in your specification when referring to specific characters (along with the code point in U+XXXX notation) will help make your specification unambiguous.

Use of the character naming template is RECOMMENDED.

For most characters, the template looks like this:

<span class="codepoint" translate="no"><bdi lang="??">&#xXXXX;</bdi><code class="uname">U+XXXX UNICODE_CHARACTER_NAME_ALL_IN_CAPS</code></span>

The bdi element is used to ensure that example characters that are right-to-left do not interfere with the layout of the page. Do not include line breaks or a space between the closing bdi and the following code element; spacing and presentation are controlled by styling.

The lang attribute should be filled in appropriately to get the correct font selection for a given context. Examples in East Asian languages (such as Chinese, Japanese, or Korean) or in the Arabic script can sometimes require greater care in choosing a language tag. Rarely, for certain languages, it might be necessary to adjust the style of the bdi element with a font-family and/or font-size in your own stylesheet.

For invisible characters (such as control characters), combining characters, or for whitespace, use an image instead of the character; or you may also omit the character and its surrounding bdi element.

<span class="codepoint" translate="no"><img alt="..." src="..."><code class="uname">U+XXXX UNICODE_CHARACTER_NAME_ALL_IN_CAPS</code></span>

Short sequences of characters should list the character names, separated by +.

There are cases where including the character name and additional markup is overly pedantic and detracts from usability, but be cautious about being so informal as to impair meaning. In particular, long sequences will sometimes just list the code points, although the character names should be retained where possible for clarity. An example can be found in this document in the discussion of the composed "family" emoji: 👨‍👩‍👧‍👧U+1F468 U+200D U+1F469 U+200D U+1F467 U+200D U+1F467

Referencing the Unicode Standard

See related review comments.

Since specifications in general need both a definition for their characters and the semantics associated with these characters, specifications SHOULD include a reference to the Unicode Standard, whether or not they include a reference to ISO/IEC 10646.

A generic reference to the Unicode Standard MUST be made if it is desired that characters allocated after a specification is published are usable with that specification. A specific reference to the Unicode Standard MAY be included to ensure that functionality depending on a particular version is available and will not change over time.

All generic references to the Unicode Standard MUST refer to the latest version of the Unicode Standard available at the date of publication of the containing specification.

All generic references to ISO/IEC 10646 MUST refer to the latest version of ISO/IEC 10646 available at the date of publication of the containing specification.

Text-processing

Choosing text units for segmentation, indexing, etc.

See related review comments.

See also

[[[#char_string]]].

[[[#char_truncation]]].

There are many situations where a software process needs to access a substring or to point within a string and does so by the use of indices, i.e. numeric "positions" within a string. Where such indices are exchanged between components of the Web, there is a need for an agreed-upon definition of string indexing in order to ensure consistent behavior. The two main questions that arise are: "What is the unit of counting?" and "Do we start counting at 0 or 1?".

The character string is RECOMMENDED as a basis for string indexing.

Grapheme clusters MAY be used as a basis for string indexing in applications where user interaction is the primary concern.

Specifications that define indexing in terms of grapheme clusters MUST either: (a) define grapheme clusters in terms of extended grapheme clusters as defined in Unicode Standard Annex #29, Unicode Text Segmentation (UTR #29), or (b) define specifically how tailoring is applied to the indexing operation.

The use of byte strings for indexing is NOT RECOMMENDED.

A UTF-16 code unit string is NOT RECOMMENDED as a basis for string indexing, even if this results in a significant improvement in the efficiency of internal operations when compared to the use of character string.

A counter-example is the use of UTF-16 in DOM Level 1. The use of UTF-16 code units is discouraged because it leaves open the possibility of an index occurring between the two surrogates of a surrogate pair, which would cause significant problems (see [[[#char_truncation]]]).
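The difference between the candidate counting units can be seen by measuring the same string several ways; this Python sketch counts code points, UTF-16 code units, and UTF-8 bytes:

```python
s = "a\U0001F63Db"  # U+0061, U+1F63D (a supplementary character), U+0062

assert len(s) == 3                           # Unicode code points
assert len(s.encode("utf-16-le")) // 2 == 4  # UTF-16 code units (surrogate pair)
assert len(s.encode("utf-8")) == 6           # bytes in UTF-8: 1 + 4 + 1
```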

Specifications that need a way to identify substrings or point within a string SHOULD consider ways other than string indexing to perform this operation.

Specifications SHOULD understand and process single characters as substrings, and treat indices as boundary positions between counting units, regardless of the choice of counting units.

Specifications of APIs SHOULD NOT specify single characters or single 'units of encoding' as argument or return types.

When the positions between the units are counted for string indexing, starting with an index of 0 for the position at the start of the string is the RECOMMENDED solution, with the last index then being equal to the number of counting units in the string.

Matching string identity for identifiers and syntactic content

See related review comments.

See also

[[[#text_n11n]]].

[[[#text_case]]].

String identity matching for identifiers and syntactic content should involve the following steps: (a) Ensure the strings to be compared constitute a sequence of Unicode code points (b) Expand all character escapes and includes (c) Perform any appropriate case-folding and Unicode normalization step (d) Perform any additional matching tailoring specific to the specification, and (e) Compare the resulting sequences of code points for identity.

The default recommendation for matching strings in identifiers and syntactic content is to do no normalization (i.e. no case folding or Unicode Normalization) of content.
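To see why canonically equivalent sequences fail a raw code-point comparison, consider the two common Unicode representations of "café":

```python
import unicodedata

nfc = "caf\u00E9"   # é as one precomposed code point (U+00E9)
nfd = "cafe\u0301"  # e followed by U+0301 COMBINING ACUTE ACCENT

assert nfc != nfd                                # identity matching fails
assert unicodedata.normalize("NFC", nfd) == nfc  # equivalent under NFC
```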

'ASCII case fold' and 'Unicode canonical case fold' approaches should only be used in special circumstances.

A 'Unicode compatibility case fold' approach should not be used.

Specifications of vocabularies MUST define the boundaries between syntactic content and character data as well as entity boundaries (if the language has any include mechanism).

Working with Unicode Normalization

See related review comments.

Specifications SHOULD NOT specify a Unicode normalization form for encoding, storage, or interchange of a given vocabulary.

Implementations MUST NOT alter the normalization form of textual data being exchanged, read, parsed, or processed except when required to do so as a side-effect of text transformation such as transcoding the content to a Unicode character encoding, case folding, or other user-initiated change, as consumers or the content itself might depend on the de-normalized representation.

Specifications SHOULD NOT specify compatibility normalization forms (NFKC, NFKD).

Specifications MUST document or provide a health-warning if canonically equivalent but disjoint Unicode character sequences represent a security issue.

Where operations can produce denormalized output from normalized text input, specifications MUST define whether the resulting output is required to be normalized or not. Specifications MAY state that performing normalization is optional for some operations; in this case the default SHOULD be that normalization is performed, and an explicit option SHOULD be used to switch normalization off.

Specifications that require normalization MUST NOT make the implementation of normalization optional.

Normalization-sensitive operations MUST NOT be performed unless the implementation has first either confirmed through inspection that the text is in normalized form or it has re-normalized the text itself. Private agreements MAY be created within private systems which are not subject to these rules, but any externally observable results MUST be the same as if the rules had been obeyed.

A normalizing text-processing component which modifies text and performs normalization-sensitive operations MUST behave as if normalization took place after each modification, so that any subsequent normalization-sensitive operations always behave as if they were dealing with normalized text.

Specifying Unicode Normalization

Specifications that perform comparison or matching of string values SHOULD specify the appropriate note or warning regarding Unicode normalization.

The use or adoption of Unicode Normalization in a specification is usually part of defining how matching takes place in a given format or protocol. To help specification authors and implementers understand some of the complexity involved, the Internationalization Working Group has developed a document describing the considerations for the matching and comparison of strings: Character Model for the World Wide Web: String Matching [[CHARMOD-NORM]].

One of the choices specifications need to make is whether (or not) to require Unicode Normalization as part of matching various "values" defined as part of the specification's vocabulary. Values are commonly part of a document format or protocol's syntax, and include such things as: attribute names or values, element names or values, IDs, and so forth. Specifications that follow the recommendation to not employ normalization as part of matching should include the following Note as a reminder to content authors.

Example note. Necessarily this version is non-specific about what constitutes "values": specifications may wish to be more specific.

This specification does not permit Unicode normalization of values for the purposes of comparison. Values that are visually and semantically identical but use different Unicode character sequences will not match. Content authors are advised to use the same encoding sequence consistently or to avoid potentially troublesome characters when choosing values. For more information, see [[CHARMOD-NORM]].

Specifications that choose to require normalization as part of string matching should include the following warning:

Example warning. Necessarily this version is non-specific about what constitutes "values": specifications may wish to be more specific.

This specification applies Unicode normalization during the matching of values. This can have an effect on the appearance and meaning of the affected text. For more information, see [[CHARMOD-NORM]].

Contact the I18N WG for alternatives or assistance if the above do not meet your needs or you're not sure about usage.

Case folding

See related review comments.

Specifications and implementations that define string matching as part of the definition of a format, protocol, or formal language (which might include operations such as parsing, matching, tokenizing, etc.) MUST define the criteria and matching forms used. These MUST be one of: (a) case-sensitive (b) Unicode case-insensitive using Unicode full case-folding (c) ASCII case-insensitive.

Case-sensitive matching is RECOMMENDED for matching syntactic content, including user-defined values.

Specifications that define case-insensitive matching in vocabularies that include more than the Basic Latin (ASCII) range of Unicode MUST specify Unicode full casefold matching.

Specifications that define case-insensitive matching in vocabularies limited to the Basic Latin (ASCII) subset of Unicode MAY specify ASCII case-insensitive matching.
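The difference between Unicode full case folding and simple lowercasing can be sketched with Python's built-ins (`str.casefold` implements Unicode full case folding):

```python
# Full case folding can change the length of a string: ß folds to "ss".
assert "Straße".casefold() == "strasse"

# Simple lowercasing does not, so the two operations can disagree;
# ASCII case-insensitive matching would treat these as different strings.
assert "Straße".lower() == "straße"
```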

If language-sensitive case-sensitive matching is specified, Unicode case mappings SHOULD be tailored according to language and the source of the language used for each tailoring MUST be specified.

Specifications that define case-insensitive matching in vocabularies SHOULD NOT specify language-sensitive case-insensitive matching.

Truncating or limiting the length of strings

See related review comments.

Some specifications, formats, or protocols or their implementations need to specify limits for the size of a given data structure or text field. This could be due to many reasons, such as limits on processing, memory, data structure size, and so forth. When selecting or specifying limits on the length of a given string, specifications or implementations need to ensure that they do not cause corruption in the text.

Specifications SHOULD NOT limit the size of data fields unless there is a specific practical or technical limitation.

There are many reasons why a length limit might be needed in a specification or format. Generally length limits correspond to underlying limits in the implementation, such as the use of fixed-size fields in a database or data store, the desire to fit into practical boundaries such as packet size, or some other implementation detail related to storage allocation or efficiency.

When truncating strings, it's necessary to decide what units to use when counting the size of the string. In many cases this is beyond the control of the specification, since the truncation is occurring for some predetermined reason. However, when the choice is available, some general guidelines can be applied.

If the limitation is related to the number of display positions, the grapheme count usually corresponds most closely to the expected limit. Note that proportional width fonts, combining marks, complex scripts, and many other factors complicate counting "screen positions". In Web pages, for example, the CSS text-overflow property provides visual truncation without disturbing the content of the text. Attempts to estimate the size of a given piece of text based on the number of Unicode code points or even the number of grapheme clusters is mostly futile.

Otherwise most limits are expressed in terms of code points in Unicode or code units (such as bytes) in a specific character encoding. Code points provide the best user experience, since all Unicode code points are treated identically: if text is truncated after 40 code points, all languages and scripts get the same number of code points to work with. By contrast, when the size limit is expressed in code units such as bytes in UTF-8, users who write in a language that mostly uses ASCII letters get many more characters (code points) for a given size limit than users whose language is mostly made up of characters that take 2, 3, or 4 bytes per code point.
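A small illustration of the disparity, assuming a hypothetical 30-byte UTF-8 limit:

```python
latin = "a" * 10      # ASCII letters: 1 byte each in UTF-8
deva = "\u0915" * 10  # U+0915 DEVANAGARI LETTER KA: 3 bytes each in UTF-8

assert len(latin) == len(deva) == 10     # identical code point counts
assert len(latin.encode("utf-8")) == 10  # fits a 30-byte limit three times over
assert len(deva.encode("utf-8")) == 30   # exactly at the limit
```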

Specifications that limit the length of a string MUST specify which type of unit (extended grapheme clusters, Unicode code points, or code units) the length limit uses.

Specifications that limit the length of a string SHOULD specify the length in terms of Unicode code points.

If a specification sets a length limit in code units (such as bytes), it MUST specify that truncation can only occur on code point boundaries.

Note that this best practice applies equally to specifications based on UTF-16, which uses 16-bit code units, not just to multibyte encodings such as UTF-8.
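One way to honor a byte limit without splitting a code point is to truncate the encoded bytes and then discard any trailing partial sequence; this Python sketch (the helper name is invented) relies on the fact that slicing valid UTF-8 can leave a malformed sequence only at the very end:

```python
def truncate_utf8(text: str, max_bytes: int) -> str:
    """Truncate text so its UTF-8 form fits max_bytes, on a code point boundary."""
    data = text.encode("utf-8")[:max_bytes]
    # "ignore" drops only the trailing partial sequence left by the slice.
    return data.decode("utf-8", errors="ignore")

assert truncate_utf8("caf\u00E9", 4) == "caf"  # é needs 2 bytes and is not split
```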

Specifications or APIs that interact with the [[DOM]] need to contend with the fact that character data, including operations such as length, substringData, insertData, deleteData, and so forth, is specified using UTF-16 code units, not Unicode code points. This can lead to inappropriate mid-character (code point) truncation. Specifications that reference the DOM should specify that string operations not occur inside code points and, where appropriate, avoid starting or ending inside grapheme clusters. Specifications should also include a health warning for implementers and users.

Example warning. Modify this health warning as appropriate for your specification:

Arbitrary index values in the DOM may not fall on character or grapheme boundaries. Implementations and users should avoid incorrectly starting or ending operations in the middle of a user-perceived character sequence.

Specifications that limit the length of a string SHOULD require truncation on grapheme boundaries, as truncation in the midst of a grapheme or combining character sequence can alter the meaning of the string.
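An example of how truncation on code point boundaries alone can still cut a combining character sequence:

```python
s = "cafe\u0301"  # "café" written as 5 code points, with a combining accent

# Truncating to 4 code points lands on a code point boundary, but it
# severs the accent from its base, changing "café" to "cafe".
assert s[:4] == "cafe"
```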

If a specification specifies a length limit, it SHOULD specify that any string that is truncated include an indicator, such as ellipses, that the string has been altered.

When specifying a length limitation in code units (such as bytes), specifications SHOULD set the limit in a way that accommodates users whose language requires multibyte code unit sequences.

If a specification specifies a length limit in code units (such as bytes), it MUST specify the character encoding used in measuring the limit; such a limit SHOULD NOT specify a legacy character encoding.

If a specification permits or requires truncation of a field, the character encoding is important in knowing what the limit means. If the limit is in bytes and legacy character encodings are permitted, note that conversion of Unicode data to a non-Unicode encoding can also result in data loss (since most legacy character encodings encode only a subset of Unicode).

Concatenation of strings

See related review comments.

Specifications SHOULD NOT require the concatenation of string values to form [=natural language=] or displayable string values.

Creation of [=natural language=] text values by concatenating multiple strings together is an internationalization anti-pattern. Languages vary greatly in word order, count, grammatical gender or case, punctuation, and many other requirements. As a result, avoid requiring or suggesting that implementations generate human-readable messages from sub-strings.

When a specification requires an implementation to create or generate text which will be displayed to users, the specification SHOULD provide implementers with guidance on how to avoid potential problems related to text direction.

Specifications for APIs, protocols, or document formats sometimes require an implementation to create or provide a field containing a display name or description. When such a string is assembled from separate parts, it can result in problems with presentation or understanding due to the way that the Unicode Bidirectional Algorithm [[UAX9]] processes the assembled string. In such cases, the specification should guide implementers about how to create values that will display properly.

See also

[[[#inline_changes]]]

[[[#bidi_inline]]]

Working with file and path names

Some specifications need to define how file names or file paths are constructed by various implementations. One challenge is building definitions that work consistently when used on the different file systems used by different operating systems. This section contains general guidance when defining restrictions on file names or file paths. It is based on requirements developed in [[EPUB-33]], as well as implementation experience.

Specify the UTF-8 [[Unicode]] encoding for the storage and processing of file names and file paths.

File names SHOULD be restricted to 255 bytes in length.

This restriction is related to limitations found in certain file systems, originally MS-DOS, but also certain Unix file systems (as well as packaging schemes such as PKZIP that depend on these file systems or subsumed their limitations), in which a specific "path element" (including directory names) is limited to 255 bytes.

Path names SHOULD be restricted to 65535 bytes in length.

This restriction is related to limitations found in file systems such as FAT32 or NTFS, which restrict the path length to 32,760 code units in the UTF-16 character encoding. Each UTF-16 code unit occupies 16 bits (2 bytes), making the equivalent limit approximately 65,535 bytes. Note that a path name limited to 65,535 bytes in UTF-8 can still exceed the path length limits on these file systems, since UTF-8 is a variable-width encoding: such a path can contain more than 32,760 UTF-16 code units.

File name and path name definitions MUST NOT use the following Unicode code points.

These characters are known to cause interoperability problems with various file systems. Specifications and implementations should use an abundance of caution in their file naming when interoperability of content is key. The list of restricted characters is intended to help avoid some known problem areas, but it does not ensure that all other Unicode characters are supported.

  • " U+0022 QUOTATION MARK
  • * U+002A ASTERISK
  • / U+002F SOLIDUS
  • : U+003A COLON
  • < U+003C LESS-THAN SIGN
  • > U+003E GREATER-THAN SIGN
  • \ U+005C REVERSE SOLIDUS
  • | U+007C VERTICAL LINE
  • U+007F DELETE
  • Codepoints in the following ranges:
    • C0 Controls U+0000...U+001F
    • C1 Controls U+0080...U+009F
    • Private Use U+E000...U+F8FF
    • Specials U+FFF0...U+FFFF
    • Supplementary Private Use U+F0000...U+FFFFF
    • Supplementary Private Use U+100000...U+10FFFF
  • . U+002E FULL STOP as the last character (Note that this includes the file names . and .., which have special meaning to many file systems)
  • All Unicode non-character code points, specifically:
    • The 32 contiguous characters in the Basic Multilingual Plane (U+FDD0 … U+FDEF)
    • The last two code points of the Basic Multilingual Plane (U+FFFE and U+FFFF)
    • The last two code points at the end of the Supplementary Planes (U+1FFFE, U+1FFFF … U+EFFFE, U+EFFFF)
  • All Unicode deprecated characters (search for "Deprecated" in the Unicode Character Database file PropList.txt).
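A sketch of how an implementation might screen file names against these restrictions. The check below transcribes only part of the list above and is illustrative, not normative:

```python
# Partial, illustrative check of a file name against the restricted
# code points listed above.
RESTRICTED = set('"*/:<>\\|\x7f')

def has_restricted(name: str) -> bool:
    for ch in name:
        cp = ord(ch)
        if ch in RESTRICTED:
            return True
        if cp <= 0x1F or 0x80 <= cp <= 0x9F:      # C0 and C1 controls
            return True
        if 0xE000 <= cp <= 0xF8FF:                # Private Use Area
            return True
        if 0xFFF0 <= cp <= 0xFFFF:                # Specials
            return True
        if cp >= 0xF0000:                         # Supplementary Private Use
            return True
        if cp & 0xFFFE == 0xFFFE or 0xFDD0 <= cp <= 0xFDEF:  # noncharacters
            return True
    return name.endswith(".")                     # trailing FULL STOP
```

The bitmask test `cp & 0xFFFE == 0xFFFE` catches the last two code points of every plane in a single comparison.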

Specifying sort and search functionality

See related review comments.

Applications often need to organize sets of information or content. Frequently this involves sorting the content. Many non-textual data types, such as numbers or dates, can be easily sorted using the internal data representation. When it comes to textual information, however, the nature of character encodings and user expectations regarding "alphabetical" order brings some additional complexity.

One key choice is whether the sorting of textual data will be strictly internal or whether the results will be shown to users.

Program Internal Sorting

Specifications or implementations that require a program-internal, fast, and deterministic sorting of text which is not intended for human viewing or interaction SHOULD specify that strings are sorted according to their definition of string. For scalar value strings (such as USVString or many XML processes), specify ascending code point order. For string types based on UTF-16 (such as DOMString or in many JavaScript APIs), specify ascending code unit order.

There are two potential internal sorting sequences: ordering by Unicode [=code point=] or ordering by UTF-16 [=code unit=]. For either type of ordering, the resulting list will not match any particular alphabetic or lexicographical order.

Sorting by [=code point=] makes sense when strings are stored and processed as a sequence of code points, such as in a USVString. Sorting by [=code unit=] makes sense when strings are stored and processed using the underlying encoding, such as in a DOMString.

Neither of these sort orders applies any type of normalization to the strings being compared. This means that some apparently equivalent strings compare as different. See String Matching [[CHARMOD-NORM]] for more information.
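The difference between the two orders becomes visible with a character outside the Basic Multilingual Plane. This Python sketch compares code point order (native string comparison in Python) with UTF-16 code unit order, obtained here by comparing the UTF-16-BE encoded bytes:

```python
a = "\uff61"       # U+FF61 HALFWIDTH IDEOGRAPHIC FULL STOP
b = "\U00010000"   # U+10000 LINEAR B SYLLABLE B008 A

# Code point order: U+FF61 < U+10000, so a sorts first.
by_code_point = sorted([a, b])

# Code unit order: in UTF-16, U+10000 is the surrogate pair D800 DC00,
# and D800 < FF61, so the supplementary character sorts first.
by_code_unit = sorted([a, b], key=lambda s: s.encode("utf-16-be"))

assert by_code_point == [a, b]
assert by_code_unit == [b, a]
```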

Human-visible Sorting

Specifications or applications that need to deal with sorting natural language text for display to users face some additional complexity. Unicode defines a default collation (sorting) order as part of the Unicode Collation Algorithm [[UTS10]], which is then tailored to meet the needs of specific languages, [=locales=], and cultures.

When sorting text for presentation to users, the sort order SHOULD be tailored according to the most appropriate [=locale=] for the specific user in that application; thus the presentation order may differ from user to user.

Languages and cultures vary in how they sort text or use their alphabet or writing system to organize textual data. For example, German language speakers treat the letter ü U+00FC LATIN SMALL LETTER U WITH DIAERESIS as sorting similar to the letter u (there are actually two German sorting sequences, which differ slightly in the exact handling of this letter), while Danish language speakers treat the same letter as a separate letter of the alphabet and sort it after the letter "y".
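As a deliberately simplified sketch of the German dictionary sort just mentioned, the key function below treats ü like u at the primary level. Real implementations use the Unicode Collation Algorithm [[UTS10]] with CLDR locale tailorings (for example via ICU) rather than ad-hoc character substitution:

```python
# Toy tailoring: German dictionary sort (DIN 5007-1 style), in which
# umlauted vowels sort with their base letters. Illustrative only.
def german_dictionary_key(s: str) -> str:
    return s.lower().replace("ä", "a").replace("ö", "o").replace("ü", "u")

words = ["Zebra", "über", "Ufer"]
print(sorted(words, key=german_dictionary_key))
# ['über', 'Ufer', 'Zebra'] — "über" sorts as if written "uber"
```

A Danish tailoring of the same data would instead place "über" after words beginning with "y", which is why the sort key must come from the user's locale rather than being fixed by the specification.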

Determining which locale to use for a sorted list can depend on a number of factors. For example, an application might sort a list of values according to the localization of the page in which the data appears. In other cases it might make more sense to sort according to the runtime locale of the user-agent or according to some parameter passed in an API. The important thing to recognize is that this order might be different for different users or on different systems.

Resource identifiers

See related review comments.

The situation with regards to specifying support of non-ASCII characters in resource identifiers is complicated because there are at least three specifications (URI [[RFC3986]], IRI [[RFC3987]], and [[URL]]) that define resource identifiers and their serialization. The WhatWG [[URL]] specification is an attempt to address this complexity by documenting the actual practice of browsers and other user agents. The stated goal of the URL specification is to obsolete both RFCs.

In general, document formats on the Web use resource identifiers that encode non-ASCII characters as plain text, that is, as "IRIs". Protocols (such as, but not limited to, HTTP [[RFC9110]]) use resource identifiers that encode non-ASCII characters as a sequence of bytes using percent encoding, that is, as "URIs". Because [[RFC3986]] does not specify any particular character encoding for converting characters to bytes, percent encoding escapes are prone to misinterpretation. To help combat this, many modern protocols and specifications expect resource identifiers to use the UTF-8 character encoding, exactly as specified by IRI, when encoding characters into the subset of ASCII supported in wire formats and protocols.

Specifications that define resource identifiers MUST permit the use of non-ASCII characters.

Document formats or protocols need to support resource identifiers that contain non-ASCII characters because in many cases the names or identifiers for a given resource are generated from user input. Users generally are not restricted and should not be restricted in their ability to use their own language for these values.

Specifications on the Web that define a document format, data structure, or API SHOULD reference [[URL]] when specifying resource identifiers. For cases unsupported by the [[URL]] specification, IRI [[RFC3987]] MAY be specified instead.

Specifications that define protocols MAY reference URI [[RFC3986]] when specifying resource identifiers for use in wire formats but MUST include the additional requirement that UTF-8 MUST be used for the interpretation of percent encoded values into characters.

According to the definition in [[RFC3986]], URI references are restricted to a subset of ASCII and non-ASCII characters cannot be used directly. The percent encoding is provided to escape arbitrary byte values. However, percent encoding by itself is of limited value because many different legacy character encodings might be used to interpret a given sequence of bytes into characters (or to encode a given sequence of characters into bytes). Internationalized Resource Identifiers (IRIs) [[RFC3987]] solve the problems with encoding and interpreting non-ASCII characters in resource identifiers with a uniform approach based on the UTF-8 encoding of [[Unicode]].
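For example, Python's urllib.parse module percent-encodes using UTF-8 by default, matching this expectation:

```python
# Percent-encoding of non-ASCII characters based on UTF-8, as modern
# specifications require; urllib.parse uses UTF-8 unless told otherwise.
from urllib.parse import quote, unquote

iri_path = "/wiki/Тбилиси"
uri_path = quote(iri_path)      # percent-encode the UTF-8 bytes; '/' stays safe
assert uri_path == "/wiki/%D0%A2%D0%B1%D0%B8%D0%BB%D0%B8%D1%81%D0%B8"
assert unquote(uri_path) == iri_path   # round-trips losslessly
```

A consumer that instead interpreted those escaped bytes using a legacy encoding such as windows-1251 would recover different characters, which is exactly the misinterpretation the UTF-8 requirement avoids.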

A specification MAY impose its own limitations on which characters are permitted in a resource identifier, but these should be focused on characters that conflict with the syntax of resource identifiers, the transport format, or with other elements defined by the specification itself.

While generally not recommended, if additional restrictions are contemplated, review [[UAX31]] and [[CHARMOD-NORM]] for additional guidance.

Specifications that define new syntax for URIs or contained within URIs MUST specify that characters outside the ASCII repertoire are percent encoded using the UTF-8 character encoding.

Document formats, markup & syntax

Specifications that deal with formal languages, document formats, protocols, or APIs often need to define markup, syntax, or application internal identifiers. The best practices in this section cover the different needs when defining these.

Specifications that are defining a markup language or a syntax based on a given markup language are concerned with defining elements, attributes, and their values. For example, an [[XML]] DTD defines elements and attributes that are valid in a specific document type.

Specifications that are defining a given document format, protocol, or API are usually concerned with defining identifiers for reserved keywords, field names, or permitted values. Many of these are application internal identifiers, whose names and values are completely defined by the specification. In some cases the specification will permit some or all of these to be a user-supplied value which can be filled in or named by users.

Defining elements and attributes in markup

See related review comments.

Do not define attribute values that will contain user readable content. Use elements for such content.

If you do define attribute values containing user readable content, provide a means to indicate directional and language information for that text separately from the text contained in the element.

Provide a way for authors to annotate arbitrary inline content using a span-like element or construct.

Handling plain text in markup

See related review comments.

Avoid natural language text in elements or attribute values that only allow for plain text.

Avoid defining attribute values whose content will be natural language text.

Provide a span-like element that can be used for any text content to apply information needed for internationalization.

Internationalization information may include language and base direction metadata, inline changes of language, bidirectional text behavioural changes, translate flags, etc.

Defining identifiers

See related review comments.

A common feature of document formats is the definition of various identifiers. This includes reserved keywords as well as user-defined values. To foster interoperability, implementations need to be able to match identifier values reliably and consistently. For a detailed look at this problem, see Character Model: String Matching [[CHARMOD-NORM]].

Specifications that define application internal identifiers (which are never shown to users and are always used for matching or processing within an application or protocol) should limit the content to a printable subset of ASCII. ASCII case-insensitive matching is recommended.

Sometimes specifications need to define a set of identifiers that content authors interact with or which are meaningful to various types of end-users. Restricting the set of allowable characters to ASCII impedes usability, particularly for speakers of languages that do not use the Latin script or that use characters outside of the ASCII range.

When identifiers are visible or potentially visible to users, specifications should allow the use of non-ASCII Unicode characters, in order to ensure that users in all languages can use the resulting document format or protocol with equal access. Case sensitivity (i.e. no case folding) is recommended.

If application internal identifiers are not restricted to ASCII, specifications should define the characters that are allowed to start and be part of a valid identifier.

One key issue when defining an identifier namespace or set of identifiers in a new specification is the handling of combining marks and certain other characters (such as joiners or bidi controls) when parsing the document format: special focus needs to be paid to how the identifier can be "tokenized" (separated from the surrounding text). One means of doing this is to restrict the range of characters allowed to start an identifier to ensure that normal text processing doesn't interfere with matching the identifier later.

Unicode Identifier and Pattern Syntax [[UAX31]] provides one model, used notably in programming languages such as Java or JavaScript. HTML and CSS also provide character range definitions for custom identifiers, such as this EBNF [[XML]] production:

PCENChar ::=
    "-" | "." | [0-9] | "_" | [a-zA-Z] | #xB7 | [#xC0-#xD6] | [#xD8-#xF6] | [#xF8-#x37D] | 
    [#x37F-#x1FFF] | [#x200C-#x200D] | [#x203F-#x2040] | [#x2070-#x218F] | [#x2C00-#x2FEF] |
    [#x3001-#xD7FF] | [#xF900-#xFDCF] | [#xFDF0-#xFFFD] | [#x10000-#xEFFFF]
	

HTML and CSS processing is defined such that Unicode character properties (such as whether a given character is a combining mark) are not considered when parsing identifiers and tokens. This allows identifiers to start with a combining character and still be processed reliably, but a plain text editor might not handle the value identically.
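By contrast, Python's own identifier syntax follows the [[UAX31]] model, so its built-in str.isidentifier() method can illustrate the start-character restriction:

```python
# UAX31-style identifier rules: a combining mark is a valid "continue"
# character but not a valid "start" character.
assert "café".isidentifier()           # é (U+00E9) is allowed anywhere
assert "e\u0301tat".isidentifier()     # U+0301 COMBINING ACUTE ACCENT, mid-identifier
assert not "\u0301abc".isidentifier()  # ...but it cannot start an identifier
```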

Specifications should exercise care when defining identifiers with regards to the handling of whitespace. Note that there are Unicode horizontal whitespace characters other than the ASCII characters SP U+0020 SPACE and HTAB U+0009 CHARACTER TABULATION (tab).

Specifications should not allow surrogate code points (U+D800 to U+DFFF) or non-character code points in identifiers.

Specifications should not allow the C0 (U+0000 to U+001F) and C1 (U+0080 to U+009F) control characters in identifiers.

Identifiers should be case-sensitive when non-ASCII characters are allowed and case insensitive when only ASCII characters are allowed.

Application internal identifier fields or values must be wrapped with a localizable display value when displayed to end-users.

Choose locale-neutral and culturally-neutral names for fields and values.

When defining identifiers, including field names and values, choose names that are as culturally-neutral as possible. For example, prefer postalCode to the (USA-specific) ZIPCode or prefer givenName/familyName to the more-culturally linked firstName/lastName.

Defining application-internal data values

Some specifications need to define the values for a given field in a document format or protocol. When the data values are associated with a specific type, such as numbers or dates, the format of the field is usually defined using some well-known schema, such as [[XMLSCHEMA11-2]] or [[JSON-SCHEMA]].

Specifications that define non-localizable string data values intended to be machine-readable should use values that are not readily confused with natural language text.

Many protocols, document formats, or data structures define enumerated values for internal use. These values are not meant to be visible to humans directly. Sometimes it is helpful if these values are given descriptive names (often in English) to aid users working with the specification, protocol, or API or who might need to debug a given document or interaction. When assigning these values in a specification, the names chosen should appear to be "code-like" so that users do not assume that the value can be displayed as if it were natural language text.

There are several styles that different groups have adopted to make application-internal values look "code-like". Choose the one best suited to your specification. These include:

  • SNAKE_CASE. Snake case uses ASCII letters and digits, all in uppercase, with words separated by underscores (_U+005F LOW LINE).
  • PascalCase or camelCase. These use ASCII letters and digits, with each "word" inside the identifier being capitalized.
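The two styles above can be checked mechanically. The regular expressions below are hypothetical validators for a specification's own conventions, not patterns defined by any standard:

```python
# Illustrative validators for "code-like" enumerated values.
import re

SNAKE_CASE = re.compile(r"[A-Z][A-Z0-9]*(?:_[A-Z0-9]+)*\Z")
CAMEL_CASE = re.compile(r"[a-z][A-Za-z0-9]*\Z")

assert SNAKE_CASE.match("ITEM_NOT_FOUND")
assert CAMEL_CASE.match("itemNotFound")
assert not SNAKE_CASE.match("Item not found")  # reads as natural language text
```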

Fields whose content is intended for consumption by humans must always be treated as natural language string values. It must be possible to find the language and base direction metadata for every such field.

Fields that contain human-readable strings, particularly those of a descriptive nature, must be assumed to be natural language strings. This is true even if the user viewing the string is expected to be a software developer. It must be possible to determine the language tag and string direction for each such field in a document or data structure.

Common names for fields of this type include name, description, title, message, or occasionally value. One test: if, as a specification author or user, you would be uncomfortable rendering the content of the field as SNAKE_CASE_SHOUTED, the field is probably better treated as natural language text.

Fields intended for consumption by humans should be localizable.

This can take various forms. For example, a specification or protocol might allow for language negotiation and only return the best matching localized strings. Or a given resource might contain multiple languages that the consumer can choose between.

Field names and other enumerated values should be wrapped with localizable display names.

Field names and enumerated values are not natural language text, even if the names appear to be plain text and might be understood by users. These fields and values should not have language or direction metadata associated with them and, where necessary, implementers should be guided by the specification to provide appropriate localized wrapping.

Typographic support

Text decoration

See related review comments.

Text decoration such as underline and overline should allow lines to skip ink.

It should be possible to specify the distance of overlines and underlines from the text.

Skipping ink for text decoration such as underlines may not be appropriate for some scripts, such as Arabic, which prefers to move the underline further away from the baseline instead.

Vertical text

See related review comments.

It should be possible to render text vertically for languages such as Japanese, Chinese, Korean, Mongolian, etc.

Vertical text must support line progression from LTR (eg. Mongolian) and RTL (eg. Japanese).

By default, text decoration, ruby, and the like in vertical text where lines are stacked from left to right (eg. Mongolian) should appear on the same side as for CJK vertical text. Placement should not rely on the before and after line locations.

Vertical writing modes that are equivalent to the vertical- values in CSS (only) should use [[UTR50]] to apply default text orientation of characters. (This does not apply to writing modes that are equivalent to sideways- in CSS.)

Writing modes should provide values like sideways-lr and sideways-rl in CSS to allow for vertical rotation of lines of horizontal script text. UTR50 is not applicable for these cases.

By default, glyphs of scripts that are normally horizontal should run along a line in vertical text such that the top of the character is toward the right side of the vertical line, but there should also be a mechanism to allow them to progress down the line in upright orientation. Such a mechanism should use grapheme clusters as a minimum text unit, but where necessary allow syllabic clusters to be treated as a unit when they involve more than one grapheme cluster.

Upright Arabic text in vertical lines should use isolated letter forms and the order of text should read top to bottom.

It should be possible for some sequences of characters (particularly digits) to run horizontally within vertical lines (tate chu yoko).

RTL/bidi text

See related review comments.

Specifications that enable sloping of letterforms SHOULD provide for characters to slope either to the right or to the left according to the needs of the specific language.

Setting box positioning coordinates when text direction varies

Box positioning coordinates must take into account whether the text is horizontal or vertical.

It is typical, when localizing a user interface or web page, to create mirror-images for the RTL and LTR versions. For example, it is likely that a box that appears near the left side of a window containing English content would appear near the right side of the window if the content is Arabic or Hebrew. This change should preferably be automatic, based on the base direction of the current context, unless there is a strong reason for using absolute geometry. One way to achieve this is to use keywords such as start and end, rather than left and right, to indicate position.

Logical properties (TBD)

See related review comments.

Cursive text

See related review comments.

Overlaps should not be exposed when transparency is applied to the joined letters in cursive text, such as for Arabic, Mongolian, and N'Ko.

When adding a text stroke or shadow, joined letters should not be separated from their neighbors in cursive script text.

Ruby text annotations

See related review comments.

'Ruby' style annotations alongside base text should be supported for Chinese, Japanese, Korean and Mongolian text, in both horizontal and vertical writing modes.

Ruby implementations should support zhuyin fuhao (bopomofo) ruby for Traditional Chinese.

Ruby implementations should support a tabular content model (such that ruby contents can be arranged in a sequence approximating to rb rb rt rt).

Ruby implementations should make it possible to use an explicit element for ruby bases, like the rb element in HTML.

Ruby implementations should allow annotations to appear on either or both sides of the base text.

Ruby markup in HTML is designed specifically for Chinese, Japanese, Korean, and Mongolian requirements, and should not be used as a general glossing mechanism.

Font management (TBD)

See related review comments.

Miscellaneous

See related review comments.

Line heights must allow for characters that are taller than English.

Box sizes must allow for text expansion in translation.

Line wrapping should take into account the special rules needed for non-Latin scripts.

Various non-Latin writing systems don't simply wrap text on inter-word spaces. They have additional rules that must be respected. For example, Chinese and Japanese text can wrap after most characters rather than only between words, whereas Thai, Lao, and Khmer are written without spaces between words and require dictionary-based analysis to find line break opportunities.

See the CSS Text Level 3 specification for additional background. (This tutorial provides additional examples, if needed.)

Avoid specifying presentational tags, such as b for bold, and i for italic.

It is best to avoid presentational markup such as b, i, or u, since it isn't interoperable across writing systems and may cause unnecessary problems for localisation. In addition, some scripts have native approaches to things such as emphasis that do not involve, and can be very different from, bolding, italicisation, etc.

In the HTML case, there was a legacy issue, but unless there is one for your specification, the recommendation is that styling be used instead to determine the presentation of the text, and that any markup or tagging should allow for general semantic approaches.

For an explanation of the issues surrounding b and i tags, see Using <b> and <i> elements.

Locales, date and time values, and locally affected formats

Working with locale-affected values

Software systems that support languages and cultural preferences are said to be internationalized. An internationalized system uses APIs to provide language or culturally specific processing, based on user preferences. These user preferences are usually referred to as a locale. For more information on general internationalization terminology, see Language Tags and Locale Identifiers [[LTLI]]

When defining data formats, use locale-neutral serialization forms.

Data values that are machine-readable and not specific to any particular language or culture are more durable and less open to misinterpretation than values that use one of the many different cultural representations. Things like dates, currencies, and numbers might look similar but have different meanings in different locales. For example, a date represented as the string 4/7 can be read as the 7th of April or the 4th of July depending on the user's preference. Similarly, €2,000 is either two thousand Euros or an over-precise representation of two Euros. By using a locale-neutral format, systems avoid the need to establish specific interchange rules that vary according to the language or location of the user. When the data is already in a locale-specific format, making the locale and language explicit by providing locale parameters (usually in the form of a language tag) allows users to determine how to work with the data or perhaps enable automated translation services.
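For dates, the locale-neutral wire form is the ISO 8601 year-month-day order, which avoids the 4/7 ambiguity described above. For example, in Python:

```python
# ISO 8601 serialization is locale-neutral and round-trips without
# knowing the producer's or consumer's locale.
from datetime import date

d = date(2024, 7, 4)
assert d.isoformat() == "2024-07-04"          # unambiguous wire form
assert date.fromisoformat("2024-07-04") == d  # lossless round trip
```

The consumer can then render the value as "4 July 2024", "7/4/2024", or "2024年7月4日" according to the user's locale, without any risk of misreading the interchange format.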

Most common data serialization formats are locale-neutral. For example, [[XMLSCHEMA11-2]] types such as xsd:integer and xsd:date are intended for locale-neutral data interchange. Using locale-neutral representations allows the data values to be processed accurately without complex parsing or misinterpretation and also allows the data to be presented in the format most comfortable for the consumer of the data in any locale. For example, rather than storing "€2000,00" as a string, it is strongly preferred to exchange a data structure such as:

"price": {
    "value": 2000.00,
    "currency": "EUR"
}
…

Working with time

See related review comments.

When defining calendar and date systems, be sure to allow for dates prior to the common era, or at least define handling of dates outside the most common range.

When defining time or date data types, ensure that the time zone or relationship to UTC is always defined.

Provide a health warning for conversion of time or date data types that are "floating" to/from incremental types, referring as necessary to the Time Zones WG Note.

Allow for leap seconds in date and time data types.

These occur occasionally when the number of seconds in a minute is allowed to range from 0 to 60 (ie. there are sixty-ONE seconds in that minute).

Use consistent terminology when discussing date and time values. Use 'floating' time for time zone independent values.

Keep separate the definition of time zone from time zone offset.

Use IANA time zone IDs to identify time zones. Do not use offsets or LTO as a proxy for time zone.
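This guideline can be illustrated with Python's zoneinfo module (which requires IANA time zone data to be available, via the system or the tzdata package): the same zone ID maps to different offsets at different times of year, so an offset cannot stand in for the zone.

```python
# An IANA time zone ID is not interchangeable with a fixed offset:
# America/New_York observes UTC-5 in winter and UTC-4 in summer.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
winter = datetime(2024, 1, 15, 12, 0, tzinfo=tz).utcoffset()
summer = datetime(2024, 7, 15, 12, 0, tzinfo=tz).utcoffset()
assert winter == timedelta(hours=-5)   # standard time
assert summer == timedelta(hours=-4)   # daylight saving time
```

A value stored only as "-05:00" would silently be wrong for half the year, and would also lose the zone's historical and future transition rules.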

Use a separate field to identify time zone.

When defining rules for a "week", allow for culturally specific rules to be applied.

For example, the weekend is not always Saturday/Sunday; the first day of week is not always Sunday [or Monday or...], etc.

When defining rules for week number of year, allow for culturally specific rules to be applied.

When non-Gregorian calendars are permitted, note that the "month" field can go to 13 (Undecimber).

Working with personal names

See related review comments.

Developers who create applications that use personal names (in web forms, databases, ontologies, and so forth) are often unaware of how different names can be in other countries. They build their forms or databases in a way that assumes too much on the part of foreign users. This section provides guidelines for working with personal names from around the world.

Field length & composition

Check whether you really need to store or access given name(s) and family name(s) separately.

Names around the world differ greatly in composition and the order of components (see Personal names around the world). This can create difficulties if, for example, you try to split a person's name into smaller parts for storage in a database and then later attempt to retrieve them, especially if some reconstruction is needed. Difficulties include understanding which part of a person's name belongs in which database field (especially when there are more or fewer parts than fields in the database), and dealing with the ordering of name parts when retrieving someone's name from the database for actual use.

If designing a form or database that will accept names from people with a variety of backgrounds, you should ask yourself whether you really need to have separate fields for things like given name and family name. This will depend on what you need to do with the data, but obviously it will be simpler, where it is possible, to just use the full name as the user provides it.

Avoid placing limits on the length of names, or if you do, make allowance for long strings.

Bear in mind that names in some cultures can be quite a lot longer than your own. Make fields long enough to enter long names. Also do not assume that a name will have more than one letter.

In particular, avoid counting length in bytes (see [[[#char_string]]]) – do not assume that a four-character Japanese name in UTF-8 will fit in four bytes; you are likely to actually need 12.
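For example, using a hypothetical four-character Japanese name:

```python
# Character count versus UTF-8 byte count: each of these kanji
# occupies three bytes in UTF-8.
name = "田中太郎"
assert len(name) == 4                    # four characters
assert len(name.encode("utf-8")) == 12   # twelve bytes in UTF-8
```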

Guidelines for segmenting names

The guidelines in this section apply where a decision has been made that it is necessary to split up a person's name for storage or presentation.

Try to avoid using the labels 'first name' and 'last name'. Consider an alternative such as 'given name(s)' and 'family name(s)'.

Use of the terms 'first' and 'last' can be confusing for people who normally write their family name followed by given names. Although it may seem acceptable to use 'first' and 'last' for forms aimed at users in the United States, the forms may eventually be used by people with different cultural backgrounds, both within and potentially outside of the USA.

Bear in mind, also, that in some cultures this is still problematic, such as for Icelanders, who don't actually have family names, but have a given name and a patronymic (see Given name and patronymic). However, short of highly localized customization, this is probably the best we can do for a generic solution.

Consider whether it would make sense to have one or more extra fields, in addition to the full name field, where users can provide part(s) of their name that you need to use for a specific purpose.

Allow for users to be asked separately how they would like to be addressed when someone contacts them.

For example, in some cases you may want to identify parts of a name so that you can sort a list of names alphabetically, or address them when contacting them, etc.

This extra field would also be useful for finding the appropriate name from a long list of name components, and for handling nicknames (which, for example, are commonly used to refer to people in Thailand).

Sometimes you may opt for separate fields because you want to be able to use part of the name to address the person directly, or refer to them. For example, when a social media app refers to "David's contacts". Or perhaps it's because you want to send them emails with their name at the top. Note that not only may you have problems due to name syntax here, but you also have to account for varying expectations around the world with regards to formality (not everyone is happy for a stranger to call them by their given name). It may be better to ask separately, when setting up a profile for example, how that person would like you to address them.

If parts of a person's name are captured separately, ensure that the separate items can capture all relevant information.

For example, don't assume that the order they provide names in will be 'given name' followed by 'family name', or that it will be possible in a name that is composed of multiple words to even identify which part fits into which of those categories and which parts relate to something completely different, such as a father's name, a village name, a clan name, etc.

Be careful about assumptions built into algorithms that pull out the parts of a name automatically.

For example, the v-card and h-card approach of implied “n” optimization could have difficulties with, say, Chinese names. The input form should be as clear as possible when telling people how to specify their name, so that you capture the data you think you need.

Don't assume that a single letter name is an initial.

People do have names that are one letter long. These people can have problems if the form validator refuses to accept their name and demands that they supply their name in full. If you want to encourage people not to use initials, perhaps you should make that a warning message, rather than block the form submission.

Don't require that people supply a family name.

In cultures such as parts of Southern India, Malaysia, and Indonesia, a large number of people have names that consist of a given name only, with no patronym. If you require family names, you may create significant problems in these cultures, as users enter garbage data like "." or "Mr." in the family name field just to escape the form.
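The advice above about single-letter names and optional family names can be sketched as a lenient validator that warns rather than blocks. This is illustrative only; the names `validate_name` and `NameCheck` are hypothetical, not part of any standard API.

```python
# Hypothetical lenient name validation: the family name is optional, and a
# single-letter given name produces a warning rather than a hard error.

from dataclasses import dataclass, field
from typing import List

@dataclass
class NameCheck:
    ok: bool                          # False only for genuinely unusable input
    warnings: List[str] = field(default_factory=list)

def validate_name(given: str, family: str = "") -> NameCheck:
    given = given.strip()
    family = family.strip()
    if not given and not family:
        return NameCheck(ok=False, warnings=["Please enter a name."])
    warnings = []
    if len(given) == 1:
        # Single-letter names are real; warn, but never block submission.
        warnings.append("Did you mean to enter an initial only?")
    # Deliberately no check that family is non-empty:
    # many people have no family name at all.
    return NameCheck(ok=True, warnings=warnings)
```

A form built this way lets a user named "O" or a user with no family name proceed, while still nudging people who may have typed an initial by mistake.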

Allowable characters

Allow people to use punctuation such as hyphens, apostrophes, etc. in names, and take into account possible alternative code points for those characters.

This ensures that names are correctly handled for people such as Dina Asher-Smith and Christopher O'Connell. Note that the apostrophe may appear as ' U+0027 APOSTROPHE, as ʼ U+02BC MODIFIER LETTER APOSTROPHE, or perhaps even as ’ U+2019 RIGHT SINGLE QUOTATION MARK. A hyphen may be represented using - U+002D HYPHEN-MINUS, ‐ U+2010 HYPHEN or, in Japan, ゠ U+30A0 KATAKANA-HIRAGANA DOUBLE HYPHEN.
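One way to take these alternative code points into account is to fold variants only when comparing or searching names, while storing the user's original input unchanged. A minimal sketch (the function name is illustrative):

```python
# Map apostrophe and hyphen variants to one representative, for
# comparison/search purposes only -- never overwrite the stored name.

APOSTROPHES = {"\u0027", "\u02BC", "\u2019"}   # ', ʼ, ’
HYPHENS = {"\u002D", "\u2010", "\u30A0"}       # -, ‐, ゠

def fold_name_punctuation(name: str) -> str:
    """Return a folded form of `name` suitable for matching."""
    out = []
    for ch in name:
        if ch in APOSTROPHES:
            out.append("\u0027")   # fold to U+0027 APOSTROPHE
        elif ch in HYPHENS:
            out.append("\u002D")   # fold to U+002D HYPHEN-MINUS
        else:
            out.append(ch)
    return "".join(out)
```

With this approach a search for "O'Connell" also finds "O’Connell", without destroying the user's preferred representation.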

Don't require names to be entered all in upper case.

Don't normalize the casing in names.

Some names (such as 'McNamara') contain capital letters that are not the first letter; others (such as 'van der Waals') include words that are not capitalized. Forms should preserve the case the user enters and not coerce such names to always or only use capital letters at the start of each word.
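A short demonstration of why naive case normalization corrupts such names (the normalizer shown is a typical flawed implementation, not a recommendation):

```python
# A common (flawed) "normalize the casing" approach: capitalize the first
# letter of each word. It corrupts real names, which is why forms should
# preserve exactly what the user typed.

def naive_title_case(name: str) -> str:
    return " ".join(w[:1].upper() + w[1:].lower() for w in name.split(" "))

# "McNamara" loses its internal capital; "van der Waals" gains unwanted ones.
```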

Allow the user to enter a name with spaces.

This allows correct capture of a family name containing a space, such as that of Gabriel García Márquez (family name: García Márquez), or a given name containing a space, such as José María (as in José María Olazábal, whose family name is Olazábal).

Other considerations

Don't assume that members of the same family will share the same family name.

There is a growing trend in the West for individuals to keep their own name after marriage, and in other cultures, such as China, retaining one's own family name is the normal approach. In still other countries the wife may or may not take the husband's name.

When dealing with Hispanic names it may be that only the children in the family have the same family names, but they may be different from each of the parents. Manuel Pérez Quiñones derived his apellidos (Pérez Quiñones) because his father's apellidos were Pérez Rodríguez and his mother's apellidos were Quiñones Alamo. In time, he courted a girl with the apellidos Padilla Falto. When they married, her apellidos became Padilla de Pérez. Their children were called Pérez Padilla, and so on.

It may be better for a form to ask for 'Previous name' rather than 'Maiden name' or 'née'.

You should also not assume that name adoption goes only from husband to wife: sometimes men take their wife's name on marriage. A neutral label such as 'Previous name' accommodates all of these cases.

You may need to store the name in both Latin and native scripts, in which case you will need to ask the user to submit their name in both native script and Latin-only form, as separate items.

The need for multiple fields will depend to some extent on what you are collecting people’s names for, and how you intend to use them.

  • Are you collecting the person’s name just to have an identifier in your system? If so, it may not matter whether the name is stored in ASCII-only or native script.
  • Or will they be called by name on a welcome page or in correspondence? If you will correspond using their name on pages written in their language, it would seem sensible to have the name in the native script.
  • Is it important for people in the organization that handles queries to be able to recognize and use the person’s name? If so, you may need a Latin transcription.
  • Will their name be displayed or searchable (for example Flickr optionally shows people’s names as well as their user name on their profile page)? Or will you want to send them correspondence in their own language, but track them in your back-office in a language such as English? If so, it may be necessary to store the name in both Latin and native scripts, in which case you probably need to ask the user to submit their name in both native script and Latin-only form, using separate fields.

Provide a field for a transcription of the name, where necessary.

For example, Japanese users may need to provide a transcription in a Japanese syllabic script instead of, or in addition to, the ideographic form. Such a field is used for sorting Japanese names, and also allows someone looking at the name to check how it is pronounced.

Don't block unusual or unexpected names when trying to enforce real name usage.

It isn't hard to find examples of people who have been blocked from using a service because their name doesn't conform to the developers' expectations. If you plan to enforce real name usage, provide a mechanism for people to validate their actual names when a name is rare or has an unexpected structure.

Using personal names in examples

In standards and standards related documents containing examples that include names of persons, use a variety of names to reflect a global audience. Avoid a bias of names specific to certain regions.

Many specifications provide examples, such as user stories or use cases, that use personal names as a means of enhancing the narrative. Some groups even have practices, such as security specifications using the names "Alice" and "Bob", to provide a certain level of consistency. Inclusiveness should be an important goal when building systems and services, hence the suggestion to use globally diverse names in forming examples. This helps ensure that we represent the worldwide community of users with our technologies, and makes the specification appear more relevant to the global user.

Try to choose names that represent people from different regions around the world, rather than just a handful of names with European origins. Note that choosing names that include non-ASCII characters can help remind implementers that Unicode support and other internationalization concerns apply to their users.

No collection of names can be completely agnostic in dealing with cultural and gender-related issues. To assist specification writers in creating more inclusive examples, this document provides a collection of names drawn from across many cultures. These names are organized approximately into regions, usually indicating country or language. Notice that even within these regions there are quite diverse influences and practices for the handling of personal names. The names are also divided by their cultural gender association to assist specification authors in writing examples, although many names are not specific to any particular gender.

Inserting personal names from other cultures into English-language examples is also affected by the very different ways that names are used culturally around the world. For example, some cultures expect the use of a patronym/matronym in addition to the given name; or some cultures prefer more formal names (e.g. "Herr Dürer" vs. the informal "Albrecht").

Chinese people almost never use their given name without also including their family name. When writing examples in Chinese, one might see something like 路人甲 (meaning "Person A", using the Han "Heavenly Stem" ordinals; cf. Ready-made Counter Styles) rather than an exemplar name. When example names are used, they include both the family and given name. Bear in mind that in Chinese the family name comes first, before the given name.

In Japanese, there are complex choices related to levels of formality. A person might be addressed by their given name in very informal situations (Hiroshi), but usually will be addressed with a family name that includes (unless one is being rude) a title or suffix, such as -san or -sama (e.g. Tanaka-san). Other suffixes or titles are also used, such as senpai or sensei (for senior or very esteemed individuals) or shi (when one is unfamiliar with the person). Thus an example in English that could say Suppose Hiroshi wants to set up a... would probably be more culturally appropriate if it read Suppose Tanaka-san wants to set up a...

Example names

The following table was compiled by the Internationalization Working Group. Contributions and suggestions for additions or corrections are welcome.

The purpose of this collection of names is to assist specification authors who are generally writing for an English-speaking audience. The collection consists primarily of given names and, where necessary, is transliterated into the Latin script. The names are also rendered informally ("Alice" rather than "Ms. Jones"), even though this is not how names would be used in many of these cultures. When translating specifications, adjustments should be made which are appropriate for the target audience.

When names are taken from non-Latin-script languages or cultures, the non-Latin representation is also provided as a reminder that names are in no way limited to the Latin script or for cases where you want to include a non-Latin script example.


Name Native Gender Region and Notes Language
Akamu m Oceania; Polynesia; Hawaiian name haw
Alinta f Oceania; Australian indigenous name nys
Amélie f Europe; France fr
An f East Asia; Japan ja
Aoi 葵; 蒼; 碧 f, m East Asia; Japan ja
Aroha f Oceania; Maori mi
Asahi 朝陽 m East Asia; Japan ja
Atlahua m Latin America; Nahuatl name nah
Åsa f Europe; Sweden sv
Beata f Europe; Multiple countries it, de, pl, sv, etc.
Chanda चंदा f South Asia; originally from Sanskrit sa
Chirapathi சிரபதி f South Asia; Tamil ta
Citlali f Latin America; Nahuatl nah
Coen m Europe; Netherlands; also Oceania (Australian indigenous) or Hebrew name nl, he, nys
Daisho 大翔 m East Asia; Japan ja
Dara f West Asia; Europe; Türkiye tr
Eva Е́ва f Europe; Russia ru
Faheem فهيم m West Asia; Arabic ar
Fátima فَاطِمَة f West Asia; Arabic; also used in several European cultures in the Latin script ar
Genet ገነት f Africa; Ethiopia am
Haruto 陽翔 m East Asia; Japan ja
Haukea f Oceania; Polynesia; Hawaiian name haw
Himari 陽葵 f East Asia; Japan ja
Hina 陽菜 f East Asia; Japan ja
Hīnano m Oceania; Polynesia; Tahitian ty
Hua 李华 m East Asia; China zh-Hans
Iakopo m Oceania; Samoa sm
Ilango இளங்கோ m South Asia; Tamil ta
Irepani m Latin America; Purepecha language tsz
Işık f West Asia; Europe; Türkiye tr
Işıtan m West Asia; Europe; Türkiye tr
Itsuki m East Asia; Japan ja
Jarra, Jarrah, Cerrah جراح m West Asia; Arabic ar, tr
Jean-François m Europe; French fr
João m Latin America; Brazil pt-BR
Júlía f Europe; Iceland is
Kai f, m Oceania; Australia; appears in many languages and is a good general example aus, sm
Khaliun f, m East Asia; Mongolia mn
Kylie f Oceania; Australian indigenous name aus
Lani f Oceania; Philippines fil
Lei 李雷 m East Asia; China zh-Hans
Livia f Europe, Latin America es
Lowanna f Oceania; Australian indigenous aus
Lucas m Latin America es
Maevarau m Oceania; Samoa sm
Mahmut m West Asia; Europe; Türkiye tr
Martina f Latin America es
Mei 芽依 (ja); (zh) f East Asia; China; Japan ja, zh
Minato m East Asia; Japan ja
Mio f East Asia; Japan ja
Miriam מרים f West Asia; Hebrew he
Müge f West Asia; Europe; Türkiye tr
Muhammad محمد m West Asia; Arabic; Many variants and languages. ar
Ngatemi f Oceania; Indonesia id, ms
Thị Anh f South-East Asia; Vietnam vi-VN
Văn Hoa m South-East Asia; Vietnam vi-VN
Onosaʻi f Oceania; Samoa sm
Potira f Latin America; Brazil; indigenous name gn
Qiàn f East Asia; China zh-Hans
Rattiya รัตติยา f South-East Asia; Thailand th
Ren m East Asia; Japan ja
Rin f East Asia; Japan ja
Ritthichai ฤทธิชัย m South-East Asia; Thailand th
Santiago m Latin America es
Senthil செந்தில் m South Asia; Tamil ta
Sione m Oceania; Tonga to
Slobodan Слободан m Europe; Serbian sr
Sofia f Europe; Latin America es
Tahnee f Oceania; Australian indigenous aus
Tamizhachi தமிழச்சி f South Asia; Tamil ta
Temuera m Oceania; Polynesia sm
Tuulikki f Europe; Finland fi
Uriel אוּרִיאֵל m West Asia; Hebrew he
Vasa m Oceania; Samoa; Europe; diminutive form of Vasilije/Василије sm, hr, sr
Vassilios Βασίλειος m Europe; Greek el
Voula Βούλα f Europe; Greek el
Wafaa وفاء f West Asia; Arabic ar
Wissam وسام m West Asia; Arabic ar
Xiaoxia 晓霞 f East Asia; China zh-Hans
Xóchitl f Latin America; Nahuatl nah
Yevdokia Евдокия f Europe; Russia ru
Yevgeny Евгений m Europe; Russia ru
Zafirah زفره f West Asia; Arabic ar

Working with numbers

When parsing user input of numeric values, allow for digit shaping (non-ASCII digits).

When formatting numeric values for display, allow for culturally sensitive display, including the use of non-ASCII digits (digit shaping).
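As a sketch of both points: in Python, `int()` already accepts any Unicode decimal digits (General Category Nd), and `str.translate` can render ASCII digits using another digit shape. The Arabic-Indic digits used below are U+0660..U+0669; the function names are illustrative.

```python
# Parsing: accept non-ASCII decimal digits.
# Formatting: emit digits in a culturally appropriate shape.

ARABIC_INDIC = str.maketrans("0123456789", "٠١٢٣٤٥٦٧٨٩")

def parse_number(text: str) -> int:
    # int() accepts "123", "١٢٣" (Arabic-Indic), "१२३" (Devanagari), etc.
    return int(text.strip())

def format_arabic_indic(n: int) -> str:
    # Replace each ASCII digit with its Arabic-Indic counterpart.
    return str(n).translate(ARABIC_INDIC)
```

A real implementation would normally delegate digit handling to a locale-aware library (such as ICU or CLDR-based tooling) rather than hand-rolled tables, but the principle is the same: do not assume digits are limited to 0-9.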

When defining a feature that automatically labels items incrementally for display to the user (such as when creating a numbered list), allow for localized presentation of the labels as well as for various counting/listing systems or styles.

Examples of this can be found in CSS Counter Styles [[css-counter-styles-3]] and especially the accompanying Ready-made Counter Styles [[predefined-counter-styles]].
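As a minimal illustration of style-dependent list labels, the sketch below models two of the ready-made counter styles: plain decimal, and the positional cjk-decimal style (in which 12 is rendered digit-by-digit as 一二). The function name is illustrative; a real implementation would support many more styles, as the documents above describe.

```python
# Localized list labels: the same ordinal can have very different
# presentations depending on the counter style in use.

CJK_DECIMAL = str.maketrans("0123456789", "〇一二三四五六七八九")

def list_label(n: int, style: str = "decimal") -> str:
    if style == "cjk-decimal":
        # Positional system: each decimal digit maps to an ideographic digit.
        return str(n).translate(CJK_DECIMAL)
    return str(n)
```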

Designing forms

See related review comments.

When defining email field validation, allow for EAI (smtputf8) names.
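To illustrate the problem: a common ASCII-only pattern rejects valid internationalized (EAI/smtputf8) addresses. The sketch below contrasts such a pattern with a minimal structural check that tolerates non-ASCII; both are illustrative only, since robust address validation is far more involved (and is often best done by sending a confirmation message).

```python
import re

# A typical ASCII-only pattern: it wrongly rejects EAI addresses
# such as 用户@example.org.
ASCII_ONLY = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def looks_like_email(addr: str) -> bool:
    """Minimal structural check that does not exclude non-ASCII:
    exactly one '@', with a non-empty local part and domain."""
    local, sep, domain = addr.partition("@")
    return bool(sep) and bool(local) and bool(domain) and "@" not in domain
```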

User input (TBD)

See related review comments.

Creating examples (TBD)

See related review comments.

Localization

Localization [[LTLI]] enables users to employ software in the language and locale of their choice. Specifications for protocols and document formats need to consider how to provide the language and formatting that the end-user expects.

Natural language data values need language and base direction in order to ensure proper presentation, even if localized messages are not provided. This includes any error messages or other internal messages that are human readable in an API or protocol. See also [[STRING-META]].

APIs and protocols SHOULD include language and string direction metadata for all natural language messages and data fields.
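One possible JSON shape for a natural language value carrying this metadata is sketched below. The field names (`value`, `lang`, `dir`) are illustrative, not a normative format; see [[STRING-META]] for the design considerations.

```python
import json

# A natural language string with language and base-direction metadata.
message = {
    "value": "مرحبا بالعالم",   # the text itself
    "lang": "ar",               # BCP 47 language tag
    "dir": "rtl",               # base direction: "ltr", "rtl", or "auto"
}

payload = json.dumps(message, ensure_ascii=False)
```

A consumer receiving such a payload can then apply the correct bidirectional layout and language-specific rendering (fonts, hyphenation, voice selection) without guessing.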

All natural language fields or messages, including error messages, defined by a given API or protocol SHOULD be localized into the preferred locale of the user or, if that language is not available, supplied with a suitable fallback or default.

Specifications for APIs or protocols SHOULD define how the user's locale is determined (this is sometimes called language negotiation).
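A minimal negotiation sketch: pick the first of the user's preferred languages that the service supports, falling back first to the base language and then to a default. Real negotiation (e.g. HTTP Accept-Language with quality values, or BCP 47 lookup as in RFC 4647) is more involved; the function name here is illustrative.

```python
def negotiate_locale(preferred, available, default="en"):
    """Return the best available language tag for the user's
    preference list, or `default` if nothing matches."""
    available = {tag.lower() for tag in available}
    for tag in preferred:
        tag = tag.lower()
        if tag in available:
            return tag
        base = tag.split("-")[0]      # e.g. "fr-CA" falls back to "fr"
        if base in available:
            return base
    return default
```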

Specifications MAY define a specific default language for messages or errors in an API or protocol.

Specifications do not need to require that messages be returned in all possible or all available locales. It is sufficient to make it possible to localize the end-user's customer experience. Implementations can choose which languages or locales to support.

Working with error and exception messages

Protocols, APIs, and document formats sometimes provide a field to pass a human-readable error or exception message from a service to the caller in the form of a string. In general, and as indicated above, any natural language text conveying human-readable messages or human-readable content needs to be associated with language and direction metadata. Where this metadata is missing, the processing or display of the text might be compromised.

Often the intention of the specification author in providing an error or exception message is to convey debugging information to a software developer. Specification authors sometimes assume that error or exception messages are not seen by end users; that software developers will prefer these messages to be unlocalized or appear in a specific language (usually English); or that there are other "practical reasons" why localization of error messages can turn out to be a barrier. For example, there are anecdotes about developers finding it easier to search the Web with the (usually obscure) text of an error because the message itself is insufficiently good at explaining the problem. Searching for this text might produce a result in the developer's preferred language that is more helpful.

Error messages are messages, and they are intended for humans, not machines. In many cases the error message carries all of the additional information about what went wrong and, in some cases, the caller is obliged to show the message to the actual end user because there is no other way to communicate how to fix the problem ("Your credit card has expired"; "The value 10484977 is too large" [oops, forgot the decimal point]; etc.). Localization of these types of messages is actually a good thing and may even be obligatory in some applications.

APIs and protocols SHOULD provide language independent identifiers for errors.

For example, HTTP result codes, such as the familiar 404, help users communicate which error they received or look up a translation.
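An error payload combining such a stable identifier with an optional localized message might look like the sketch below (the field names and the `card_expired` code are purely illustrative):

```python
# Language-independent error identifier plus an optional localized message.
error = {
    "code": "card_expired",          # stable, language-independent identifier
    "message": {                     # optional; localized for the negotiated locale
        "value": "Your credit card has expired.",
        "lang": "en",
        "dir": "ltr",
    },
}

# Callers branch on `code` programmatically, and show `message`
# to end users when it is present.
```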

Natural language error message fields, when provided, SHOULD be optional and SHOULD include language and direction metadata.

Natural language error message fields, when provided, SHOULD match the user interface language negotiated for the request when possible.

Revision Log

The following summarises substantive changes since the previous publication. The material is still subject to significant flux as it develops, but that should not deter you from using the document: what it contains so far is useful, and any shortfalls can be reported or discussed.

  1. Links were added below section headings that point to lists of review comments related to that section. These comments provide details useful to developers and reviewers.
  2. The checklist generator tool was moved to the top of the document.
  3. The table of contents now reports 3 levels of heading.
  4. Lists of links to documents that provide useful background or an overview of a section have been moved to the start of that section. So also have 'see also' links.
  5. Each advisement now carries its own set of links. This makes the links more relevant and more easily noticeable. It also makes it easier to list multiple links, and because the links indicate the target document title, readers do not have to follow the link to know whether they have already read the document pointed to.
  6. Links associated with an advisement are of two types: 'explanations & examples' typically points to a location in another document from which this advisement was lifted, and surrounds it there with explanatory text; 'more' links provide further reading in other documents.
  7. Self-links for each advisement have been changed so that they match the standard style used for headings (§ to the side of the text). This also significantly reduces the complexity of authoring the markup.
  8. Added content in the section on locales, along with text about working with file and path names and working with error messages.
  9. Generally, the markup in the document source has been greatly simplified, making it easier and quicker to maintain the document.

See the github commit log for more details.

Acknowledgements

Thanks to Addison Phillips for help reviewing old reviews for recommendations.

Other people who contributed through reviews or issues include Steve Atkin, Andrew Cunningham, Martin Dürst, Asmus Freytag, John Klensin, Tomer Mahlin, Chaals McCathieNevile, Florian Rivoal, Najib Tounsi. Some material about locale-neutral representation was adapted from [[DWBP]].