This document builds upon Character Model for the World Wide Web 1.0: Fundamentals [[!CHARMOD]] to provide authors of specifications, software developers, and content developers with a common reference on string identity matching on the World Wide Web, and thereby to increase interoperability.

This version of the document represents a significant change from earlier editions. Much of the content has changed and the recommendations are significantly altered. This is reflected in the change of the document's name from its earlier title, "Character Model: Normalization".

Sending comments on this document

If you wish to make comments regarding this document, please raise them as GitHub issues. When reviewing the document, please refer to the latest editor's copy. Only send comments by email if you are unable to raise issues on GitHub (see links below). All comments are welcome.

To make it easier to track comments, please raise separate issues or emails for each comment, and point to the section you are commenting on using a URL.


Goals and Scope

The goal of the Character Model for the World Wide Web is to facilitate use of the Web by all people, regardless of their language, script, writing system, and cultural conventions, in accordance with the W3C goal of universal access. One basic prerequisite to achieve this goal is to be able to transmit and process the characters used around the world in a well-defined and well-understood way.

This document builds on Character Model for the World Wide Web: Fundamentals [[!CHARMOD]]. Understanding the concepts in that document is important to being able to understand and apply this document successfully.

This part of the Character Model for the World Wide Web covers string matching—the process by which a specification or implementation defines whether two string values are the same or different from one another. It describes the ways in which texts that are semantically equivalent can be encoded differently and the impact this has on matching operations important to formal languages (such as those used in the formats and protocols that make up the Web).

The main target audience of this specification is W3C specification developers. This specification, or parts of it, can be referenced from other W3C specifications, and it defines conformance criteria for W3C specifications as well as for other specifications.

Other audiences of this specification include software developers, content developers, and authors of specifications outside the W3C. Software developers and content developers implement and use W3C specifications. This specification defines some conformance criteria for implementations (software) and content that implement and use W3C specifications. It also helps software developers and content developers to understand the character-related provisions in W3C specifications.

The character model described in this specification provides authors of specifications, software developers, and content developers with a common reference for consistent, interoperable text manipulation on the World Wide Web. Working together, these three groups can build a globally accessible Web.

Structure of this Document

This document defines one of the basic building blocks for the Web related to this problem by defining rules and processes for String Identity Matching in document formats. These rules are designed for the identifiers and structural markup (syntactic content) used in document formats, to ensure consistent processing of each. They are targeted at specification writers, although this section will also be useful to implementers.

This document is divided into two main sections.

The first section lays out the problems involved in string matching and the effects of Unicode and case folding on these problems, and outlines the normalization mechanisms that might be used to address them.

The second section provides requirements and recommendations for string identity matching for use in formal languages, such as many of the document formats defined in W3C Specifications. It is primarily concerned with making the Web functional and providing document authors with consistent results.


This section provides some historical background on the topics addressed in this specification.

At the core of the character model is the Universal Character Set (UCS), defined jointly by the Unicode Standard [[!Unicode]] and ISO/IEC 10646 [[!ISO10646]]. In this document, Unicode is used as a synonym for the Universal Character Set. A successful character model allows Web documents authored in the world's writing systems, scripts, and languages (and on different platforms) to be exchanged, read, and searched by the Web's users around the world.

The first few chapters of the Unicode Standard [[!Unicode]] provide useful background reading.

For information about the requirements that informed the development of important parts of this specification, see Requirements for String Identity Matching and String Indexing [[CHARREQ]].

Terminology and Notation

This section contains terminology and notation specific to this document.

The Web is built on text-based formats and protocols. In order to describe string matching or searching effectively, it is necessary to establish terminology that allows us to talk about the different kinds of text within a given format or protocol, as the requirements and details vary significantly.

Unicode code points are denoted as U+hhhh, where hhhh is a sequence of at least four and at most six hexadecimal digits. For example, the character EURO SIGN has the code point U+20AC.
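As an illustration, this notation can be produced in a few lines of code (the helper name below is our own, not part of any standard):

```python
def code_point_notation(ch: str) -> str:
    # Format a code point as U+hhhh: uppercase hexadecimal,
    # padded to a minimum of four digits (up to six for supplementary planes).
    return f"U+{ord(ch):04X}"

print(code_point_notation("\u20ac"))      # U+20AC (EURO SIGN)
print(code_point_notation("\U0001f600"))  # U+1F600 (a supplementary-plane character)
```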

Some characters that are used in the various examples might not appear as intended unless you have the appropriate font. Care has been taken to ensure that the examples nevertheless remain understandable.

A legacy character encoding is a character encoding not based on the Unicode character set.

A transcoder is a process that converts text between two character encodings. Most commonly in this document it refers to a process that converts from a legacy character encoding to a Unicode encoding form, such as UTF-8.

Syntactic content is any text in a document format or protocol that belongs to the structure of the format or protocol. This definition can include values that are not typically thought of as "markup", such as the name of a field in an HTTP header, as well as all of the characters that form the structure of a format or protocol. For example, < and > (as well as the element name and various attributes they surround) are part of the syntactic content in an HTML document.

Syntactic content usually is defined by a specification or specifications and includes both the defined, reserved keywords for the given protocol or format as well as string tokens and identifiers that are defined by document authors to form the structure of the document (rather than the "content" of the document).

Natural language content refers to the language-bearing content in a document and not to any of the surrounding or embedded syntactic content that form part of the document structure. You can think of it as the actual "content" of the document or the "message" in a given protocol. Note that syntactic content can contain natural language content, such as when an [[HTML]] img element has an alt attribute containing a description of the image.

A resource, in the context of this document, is a given document, file, or protocol "message" which includes both the natural language content as well as the syntactic content such as identifiers surrounding or containing it. For example, in an HTML document that also has some CSS and a few script tags with embedded JavaScript, the entire HTML document, considered as a file, is a resource. This term is intentionally similar to the term 'resource' as used in [[RFC3986]], although here the term is applied loosely.

A user value is unreserved syntactic content in a vocabulary that is assigned by users, as distinct from reserved keywords in a given format or protocol. For example, CSS class names are part of the syntax of a CSS style sheet. They are not reserved keywords, predefined by any CSS specification. They are subject to the syntactic rules of CSS. And they may (or may not) consist of natural language tokens.

A vocabulary provides the list of reserved names as well as the set of rules and specifications controlling how user values (such as identifiers) can be assigned in a format or protocol. This can include restrictions on range, order, or type of characters that can appear in different places. For example, HTML defines the names of its elements and attributes, as well as enumerated attribute values, which defines the "vocabulary" of HTML syntactic content. Another example would be ECMAScript, which restricts the range of characters that can appear at the start or in the body of an identifier or variable name. It applies different rules for other cases, such as to the values of string literals.

A grapheme is a sequence of one or more Unicode characters in a visual representation of some text that a typical user would perceive as a single unit (character). Graphemes are important for a number of text operations, such as sorting or text selection, so it is necessary to be able to compute the boundaries between each user-perceived character. Unicode defines the default mechanism for computing graphemes in Unicode Standard Annex #29: Text Segmentation [[!UAX29]] and calls this approximation a grapheme cluster. Two types of default grapheme cluster are defined. Unless otherwise noted, grapheme cluster in this document refers to an extended default grapheme cluster. (A discussion of grapheme clusters is also given in Section 2 of the Unicode Standard [[!Unicode]]; see near the end of Section 2.11 in version 8.0.)

Because different natural languages have different needs, grapheme clusters can also sometimes require tailoring. For example, a Slovak user might wish to treat the default pair of grapheme clusters "ch" as a single grapheme cluster. Note that the interaction between the language of string content and the end-user's preferences might be complex.
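As a rough illustration of the idea (not an implementation of the UAX #29 rules, which handle many additional cases such as Hangul jamo and emoji sequences), a combining mark can be treated as extending the preceding base character:

```python
import unicodedata

def naive_graphemes(text: str):
    """Very rough grapheme approximation: attach combining marks (nonzero
    canonical combining class) to the preceding base character. Real
    segmentation must follow UAX #29."""
    clusters = []
    for ch in text:
        if clusters and unicodedata.combining(ch):
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return clusters

# "q" + COMBINING DOT BELOW + COMBINING DOT ABOVE:
# three code points, one user-perceived character.
print(len(naive_graphemes("q\u0323\u0307")))  # 1
```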

Terminology Examples

This section illustrates some of the terminology defined above.

<html lang="en" dir="ltr">

  <meta charset="UTF-8">
  <img src="shakespeare.jpg" alt="William Shakespeare" id="shakespeare_image">

  <p>What&#x2019;s in a name? That which we call a rose by any other name would smell as sweet.</p>

  • Everything inside the black rectangle (that is, in this HTML file) is part of the resource.
  • Syntactic content is shown in a monospaced font.
  • Natural language content is shown in a bold blue font with a gray background.
  • User values are shown in italics.
  • Vocabulary is shown with red underlining.
  • All of the text above (all text in a text file) makes up a resource. It's possible that a given resource will contain no natural language content at all (consider an HTML document consisting of four empty div elements styled to be orange rectangles). It's also possible that a resource will contain no syntactic content and consist solely of natural language content: for example, a plain text file with a soliloquy from Hamlet in it. Notice too that the HTML entity &#x2019; appears in the natural language content and belongs to both the natural language content and the syntactic content in this resource.

This specification places conformance criteria on specifications, on software (implementations) and on Web content. To aid the reader, all conformance criteria are preceded by [X] where X is one of S for specifications, I for software implementations, and C for Web content. These markers indicate the relevance of the conformance criteria and allow the reader to quickly locate relevant conformance criteria by searching through this document.

Specifications conform to this document if they:

  1. do not violate any conformance criteria preceded by [S] where the imperative is MUST or MUST NOT,

  2. document the reason for any deviation from criteria where the imperative is SHOULD, SHOULD NOT, or RECOMMENDED,

  3. make it a conformance requirement for implementations to conform to this document,

  4. make it a conformance requirement for content to conform to this document.

Software conforms to this document if it does not violate any conformance criteria preceded by [I].

Content conforms to this document if it does not violate any conformance criteria preceded by [C].

NOTE: Requirements placed on specifications might indirectly cause requirements to be placed on implementations or content that claim to conform to those specifications.

Where this specification contains a procedural description, it is to be understood as a way to specify the desired external behavior. Implementations can use other means of achieving the same results, as long as observable behavior is not affected.

The String Matching Problem

The Web is primarily made up of document formats and protocols based on character data. These formats or protocols can be viewed as a set of text files (resources) that include some form of structural markup or syntactic content. Processing such syntactic content or document data requires string-based operations such as matching (including regular expressions), indexing, searching, sorting, and so forth.

Users, particularly implementers, sometimes have naïve expectations regarding the matching or non-matching of similar strings or of the efficacy of different transformations they might apply to text, particularly to syntactic content, but including many types of text processing on the Web.

Because fundamentally the Web is sensitive to the different ways in which text might be represented in a document, failing to consider the different ways in which the same text can be represented can confuse users or cause unexpected or frustrating results. In the sections below, this document examines the different types of text variation that affect both user perception of text on the Web and the string processing on which the Web relies.

Case Folding

Some scripts and writing systems make a distinction between UPPER, lower, and Title case characters. Most scripts, such as the Brahmic scripts of India, the Arabic script, and the scripts used to write Chinese, Japanese, or Korean do not have a case distinction, but some important ones do. Examples of such scripts include the Latin script used in the majority of this document, as well as scripts such as Greek, Armenian, and Cyrillic.

Some document formats or protocols seek to aid interoperability or provide an aid to content authors by ignoring case variations in the vocabulary they define or in user-defined values permitted by the format or protocol. For example, this occurs when matching element names between an HTML document and its associated style sheet. Consider this HTML fragment:
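A minimal sketch of such a fragment and its style sheet (the selector and content shown here are illustrative):

```html
<style>
  SPAN { color: red; }
</style>
<p>Styled with an uppercase selector: <span>this text</span>.</p>
```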

The SPAN selector in the style sheet matches the span element in the document, even though the style sheet uses uppercase and the HTML markup uses lowercase.

Case folding is the process by which two texts that differ only in case are made identical.

Case folding might, at first, appear simple. However, there are variations that need to be considered when treating the full range of Unicode across diverse languages. For more information, [[!Unicode]] Chapter 5 (in v8.0, Section 5.18) discusses case mappings in detail.

Unicode defines the default case fold mapping for each Unicode code point. Since most scripts do not provide a case distinction, most Unicode code points do not require a case fold mapping. Of those characters that have a case fold mapping, the majority have a simple, straightforward mapping to a single matching (generally lowercase) code point. Unicode calls these the common case fold mappings, as they are shared by both the full and simple case fold mappings described below.

In addition to the common case folding mappings, a few characters have a case fold mapping that would normally map one Unicode character to more than one during case folding. These are called the full case fold mappings. Together with the common case fold mappings, these provide the default case fold mapping for all of Unicode. This case fold mapping is referred to in this document as Unicode C+F.

Because some applications cannot allocate additional storage when performing a case fold operation, Unicode also provides a simple case fold mapping, in which characters that would normally map to more than one code point are mapped to a single code point for comparison purposes instead. Unlike the full mapping, this mapping can alter the content (and potentially the meaning) of the text. This simple case fold mapping, referred to in this document as Unicode C+S, is not appropriate for the Web.

Note that case folding removes information from a string which cannot be recovered later. For example, two "s" letters in case-folded German text do not necessarily represent ß in the unfolded original.

Another aspect of case folding is that it can be language sensitive. Unicode defines default case mappings for each encoded character, but these are only defaults and are not appropriate in all cases. Some languages need case folding to be tailored to meet specific linguistic needs. One common example of this is the Turkic languages written in the Latin script.
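For illustration, Python's str.casefold() implements the Unicode full (C+F) default case folding, which is language-independent:

```python
# Full (C+F) default case folding: ß folds to "ss", growing the string,
# which simple lowercasing does not do.
print("Straße".casefold())  # strasse
print("Straße".lower())     # straße

# The default folding is not tailored for Turkic languages: U+0130
# (LATIN CAPITAL LETTER I WITH DOT ABOVE) folds to "i" + U+0307
# COMBINING DOT ABOVE, not to the plain "i" a Turkish user would expect.
print("\u0130".casefold() == "i\u0307")  # True
```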

Sometimes case can vary in a way that is not semantically meaningful or is not fully under the user's control. This is particularly true when searching a document, but may sometimes also apply when defining rules for matching user- or content-generated values, such as identifiers. In these situations, case-insensitive matching might be desirable instead.

When defining a vocabulary, one important consideration is whether the values are restricted to the ASCII subset of Unicode or if the vocabulary permits the use of characters (such as accents on Latin letters or a broad range of Unicode including non-Latin scripts) that potentially have more complex case folding requirements. To address these different requirements, there are four types of casefold matching defined by this document for the purposes of string identity matching in document formats or protocols:

Case sensitive matching: code points are compared directly with no case folding.

ASCII case-insensitive matching compares a sequence of code points as if all ASCII code points in the range U+0041 to U+005A (A to Z) were mapped to the corresponding code points in the range U+0061 to U+007A (a to z). When a vocabulary is itself constrained to ASCII, ASCII case-insensitive matching can be required.

Unicode case-insensitive matching compares a sequence of code points as if the language-independent Unicode C+F default case folding mentioned above had been applied to both input sequences.

Language-sensitive case-insensitive matching is useful in the rare case where a document format or protocol contains information about the language of the syntactic content and where language-sensitive case folding might sensibly be applied. In these cases, tailoring of the Unicode case fold mappings above to match the expectations of that language SHOULD be specified and applied. Such tailored case fold mappings are defined in the Common Locale Data Repository [[UAX35]] project of the Unicode Consortium.
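A sketch of the difference between the first three types of matching (the function names are ours):

```python
def ascii_case_insensitive_equal(a: str, b: str) -> bool:
    # Map only U+0041..U+005A to U+0061..U+007A; leave all other code points alone.
    def fold(s: str) -> str:
        return "".join(chr(ord(c) + 0x20) if "A" <= c <= "Z" else c for c in s)
    return fold(a) == fold(b)

def unicode_case_insensitive_equal(a: str, b: str) -> bool:
    # str.casefold() applies the Unicode C+F default case folding.
    return a.casefold() == b.casefold()

print("STRASSE" == "straße")                               # False: case-sensitive
print(ascii_case_insensitive_equal("STRASSE", "straße"))   # False: ß is untouched
print(unicode_case_insensitive_equal("STRASSE", "straße")) # True: ß folds to "ss"
```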

For advice on how to handle case folding, see the recommendations later in this document.

Unicode Normalization

A different kind of variation can occur in Unicode text: sometimes several different Unicode code point sequences can represent the same logical character. When searching or matching text by comparing code points, variations in encoding could cause text values otherwise expected to match not to match.

Consider the character Ǻ. One way to encode this character is as U+01FA LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE. Here are some of the different character sequences that an HTML document could use to represent this character:
  • Ǻ: the single precomposed code point U+01FA
  • A (U+0041) followed by U+030A COMBINING RING ABOVE and U+0301 COMBINING ACUTE ACCENT
  • Å (U+00C5 LATIN CAPITAL LETTER A WITH RING ABOVE) followed by U+0301
  • Å (U+212B ANGSTROM SIGN) followed by U+0301
  • Ａ (U+FF21 FULLWIDTH LATIN CAPITAL LETTER A) followed by U+030A and U+0301

Each of the above sequences conveys the same apparent meaning as Ǻ (U+01FA LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE), but each one is encoded slightly differently. More variations are possible, but are omitted for brevity.

Because applications need to find the semantic equivalence in texts that use different code point sequences, Unicode defines a means of making two semantically equivalent texts identical: the Unicode Normalization Forms [[!UAX15]].

Resources are often susceptible to the effects of these variations because their specifications and implementations on the Web do not require Unicode Normalization of the text, nor do they take into consideration the string matching algorithms used when processing the syntactic content and natural language content later. For this reason, content developers need to ensure that they have provided a consistent representation in order to avoid problems later.

However, it can be difficult for users to assure that a given resource or set of resources uses a consistent textual representation because the differences are usually not visible when viewed as text. Tools and implementations thus need to consider the difficulties experienced by users when visually or logically equivalent strings that "ought to" match (in the user's mind) are considered to be distinct values. Providing a means for users to see these differences and/or normalize them as appropriate makes it possible for end users to avoid failures that spring from invisible differences in their source documents. For example, the W3C Validator warns when an HTML document is not fully in Unicode Normalization Form C.

Canonical vs. Compatibility Equivalence

Unicode defines two types of equivalence between characters: canonical equivalence and compatibility equivalence.

Canonical equivalence is a fundamental equivalence between Unicode characters or sequences of Unicode characters that represent the same abstract character. When correctly displayed, these should always have the same visual appearance and behavior. Generally speaking, two canonically equivalent Unicode texts should be considered to be identical as text. Unicode defines a process called canonical decomposition that removes these primary distinctions between two texts.

Examples of canonical equivalence defined by Unicode include:

  • Ç vs. Ç: Precomposed versus combining sequences. Some characters can be composed from a base character followed by one or more combining characters. The same characters are sometimes also encoded as a distinct "precomposed" character. In this example, the character Ç (U+00C7) is canonically equivalent to the base character C (U+0043) followed by the combining cedilla (U+0327). Such equivalence can extend to characters with multiple combining marks.
  • q̣̇ vs. q̣̇: Order of combining marks. When a base character is modified by multiple combining marks, a difference in the order of the combining marks might not represent a distinct character. Here the sequences q̣̇ (U+0071 U+0323 U+0307) and q̣̇ (U+0071 U+0307 U+0323) are equivalent, even though the combining marks are in a different order. Note that this example is chosen carefully: the dot-above and dot-below characters are on opposite "sides" of the base character. The order of combining diacritics on the same side has a positional meaning.
  • Ω vs. Ω: Singleton mappings. These result from the need to separately encode otherwise equivalent characters to support legacy character encodings. In this example, the ohm sign Ω (U+2126) is canonically equivalent (and identical in appearance) to the Greek letter omega Ω (U+03A9).
  • 가 vs. ᄀ + ᅡ: Hangul. The Hangul script is used to write the Korean language. This script is constructed logically, with each syllable being a roughly square grapheme formed from specific sub-parts that represent consonants and vowels. These specific sub-parts, called jamo, are encoded in Unicode. So too are the precomposed syllables. Thus the syllable 가 (U+AC00) is canonically equivalent to its constituent jamo characters ᄀ (U+1100) and ᅡ (U+1161).

Compatibility equivalence is a weaker equivalence between Unicode characters or sequences of Unicode characters that represent the same abstract character, but may have a different visual appearance or behavior. Generally the process called compatibility decomposition removes formatting variations, such as superscript, subscript, rotated, circled, and so forth, but other variations also occur. In many cases, characters with compatibility decompositions represent a distinction of a semantic nature; replacing the use of distinct characters with their compatibility decomposition can therefore change the meaning of the text. Texts that are equivalent after compatibility decomposition often were not perceived as being identical beforehand and SHOULD NOT be treated as equivalent by a formal language.

The following table illustrates various kinds of compatibility equivalence in Unicode:

Compatibility Equivalence

  • Font variants: characters that have a specific visual appearance (generally associated with a specialized use, such as in mathematics).
  • Breaking versus non-breaking: variations in breaking or joining behavior, such as the difference between an ordinary space and U+00A0 NO-BREAK SPACE.
  • Presentation forms of Arabic: characters that encode the specific shapes (initial, medial, final, isolated) needed by visual legacy encodings of the Arabic script.
  • Circled: numbers, letters, and other characters in a circled, bulleted, or other presentational form; often used for lists, footnotes, and specialized presentation.
  • Width variation, size, rotated presentation forms: narrow versus wide presentational forms of characters (such as those associated with legacy multibyte encodings), as well as "rotated" presentation forms needed for vertical text.
  • Superscripts/subscripts: superscript or subscript letters, numbers, and symbols, such as ª (U+00AA FEMININE ORDINAL INDICATOR).
  • Squared characters: East Asian (particularly kana) sequences encoded as a presentation form to fit in a single ideographic "cell" of text.
  • Fractions: precomposed vulgar fractions such as ¼ and ½, often encoded for compatibility with font glyph sets.
  • Others: compatibility characters encoded for other reasons, generally for compatibility with legacy character encodings. Many of these characters are simply a sequence of characters encoded as a single presentational unit, such as dž (U+01C6).
It is important to note that the characters illustrated above are actual Unicode code points, not just presentational variations due to context or style. Each character was encoded into Unicode for compatibility with various legacy character encodings. They should not be confused with the normal kinds of presentational processing applied to their non-compatibility counterparts.

For example, most Arabic-script text uses the characters in the Arabic script block of Unicode (starting at U+0600). The actual glyphs used to display the text are selected using fonts and text processing logic based on the position inside a word (initial, medial, final, or isolated), in a process called "shaping". The four presentation forms of the Arabic letter NOON, by contrast, are compatibility characters in the Arabic Presentation Forms-B block (U+FE70 to U+FEFF), each of which represents a specific "positional" shape, and each of which has a compatibility decomposition to the regular Arabic letter U+0646 ARABIC LETTER NOON.

Similarly, the variations in half-width and full-width forms and rotated characters (for use in vertical text) are encoded as separate code points, mainly for compatibility with legacy character encodings. In many cases these variations are associated with the Unicode properties described in East Asian Width [[UAX11]]. See also Unicode Vertical Text Layout [[UTR50]] for a discussion of vertical text presentation forms.

In the case of characters with compatibility decompositions, such as those shown above, the compatibility ("K") Unicode Normalization forms (NFKC and NFKD) convert the text to the "normal" or "expected" Unicode code point. But the existence of these compatibility characters cannot be taken to imply that similar appearance variations produced in the normal course of text layout and presentation are affected by Unicode Normalization. They are not.

Composition vs. Decomposition

These two types of Unicode-defined equivalence are then grouped by another pair of variations: "decomposition" and "composition". In "decomposition", separable logical parts of a visual character are broken out into a sequence of base characters and combining marks and the resulting code points are put into a fixed, canonical order. In "composition", the decomposition is performed and then any combining marks are recombined, if possible, with their base characters. Note that this does not mean that all of the combining marks have been removed from the resulting normalized text.

Roughly speaking, NFC is defined such that each combining character sequence (a base character followed by one or more combining characters) is replaced, as far as possible, by a canonically equivalent precomposed character. Text in a Unicode character encoding form (such as UTF-8 or UTF-16) is said to be in NFC if it doesn't contain any combining sequence that could be replaced with a precomposed character and if any remaining combining sequence is in canonical order.
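As a quick sketch using Python's unicodedata module (which implements the Unicode Normalization Forms):

```python
import unicodedata

s = "\u00c7"  # Ç LATIN CAPITAL LETTER C WITH CEDILLA (precomposed)

# NFD: decompose into base character + combining mark, in canonical order.
nfd = unicodedata.normalize("NFD", s)
print([f"U+{ord(c):04X}" for c in nfd])  # ['U+0043', 'U+0327']

# NFC: recompose combining sequences into precomposed characters where possible.
nfc = unicodedata.normalize("NFC", nfd)
print([f"U+{ord(c):04X}" for c in nfc])  # ['U+00C7']
```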

Unicode Normalization Forms

There are four Unicode Normalization Forms. Each form is named using a letter code: the letter 'C' stands for Composition; the letter 'D' for Decomposition; and the letter 'K' stands for Compatibility decomposition. Having converted a resource to a sequence of Unicode characters and unescaped any escape sequences, we can finally "normalize" the Unicode texts given in the example above. Here are the resulting sequences in each Unicode Normalization form for the U+01FA example given earlier:

Original Code Points     | NFC                  | NFD                  | NFKC   | NFKD
U+01FA                   | U+01FA               | U+0041 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301
U+0041 U+030A U+0301     | U+01FA               | U+0041 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301
U+00C5 U+0301            | U+01FA               | U+0041 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301
U+212B U+0301            | U+01FA               | U+0041 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301
U+FF21 U+030A U+0301     | U+FF21 U+030A U+0301 | U+FF21 U+030A U+0301 | U+01FA | U+0041 U+030A U+0301
Comparison of Unicode Normalization Forms
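These results can be checked directly with Python's unicodedata module:

```python
import unicodedata

variants = [
    "\u01fa",              # Ǻ, precomposed
    "A\u030a\u0301",       # A + COMBINING RING ABOVE + COMBINING ACUTE ACCENT
    "\u00c5\u0301",        # Å (U+00C5) + COMBINING ACUTE ACCENT
    "\u212b\u0301",        # ANGSTROM SIGN + COMBINING ACUTE ACCENT
    "\uff21\u030a\u0301",  # FULLWIDTH A + combining marks
]

for v in variants:
    nfc = unicodedata.normalize("NFC", v)
    nfkc = unicodedata.normalize("NFKC", v)
    # NFC folds the first four variants to U+01FA; only NFKC also folds the
    # fullwidth variant, because U+FF21 has only a compatibility decomposition.
    print(nfc == "\u01fa", nfkc == "\u01fa")
```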

Unicode Normalization reduces these (and other potential sequences of escapes representing the same character) to just three possible variations. However, Unicode Normalization doesn't remove all textual distinctions and sometimes the application of Unicode Normalization can remove meaning that is distinctive or meaningful in a given context. For example:

  • Not all compatibility characters have a compatibility decomposition.
  • Some characters that look alike or have similar semantics are actually distinct in Unicode and don't have canonical or compatibility decompositions to link them together. For example, U+3002 IDEOGRAPHIC FULL STOP is used as a period at the end of sentences in languages such as Chinese or Japanese. However, it is not considered equivalent to the ASCII period character U+002E FULL STOP.
  • Some character variations are not handled by the Unicode Normalization Forms. For example, UPPER, Title, and lowercase variations are a separate and distinct textual variation that must be separately handled when comparing text.
  • Compatibility normalization removes meaning. For example, the character sequence 8½ (which includes the character U+00BD VULGAR FRACTION ONE HALF), when normalized using one of the compatibility normalization forms (that is, NFKD or NFKC), becomes the character sequence 81⁄2 (the digit 8, the digit 1, U+2044 FRACTION SLASH, and the digit 2), which is easily misread.
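These limits are easy to demonstrate with Python's unicodedata module:

```python
import unicodedata

# Compatibility normalization changes how the fraction reads:
print(unicodedata.normalize("NFKC", "8\u00bd"))  # 81⁄2 (8, 1, U+2044, 2)

# ...but it does not unify distinct characters that merely share a function:
# U+3002 IDEOGRAPHIC FULL STOP is untouched, and is not folded to U+002E.
print(unicodedata.normalize("NFKC", "\u3002") == "\u3002")  # True
```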

Identical-Appearing Characters and the Limitations of Normalization

Many users are surprised to find that two identical-looking strings—including those that have had a specific Unicode normalization form applied—might not in fact use the same underlying Unicode code points. This includes strings that have had the more-destructive NFKC and NFKD compatibility normalization forms applied to them. Even when strings, tokens, or identifiers appear visually to be the same, they can be encoded differently.

The Unicode canonical normalization forms are concerned with folding multiple separate ways of encoding the same logical character or grapheme cluster into the same code point sequence. But two logically distinct characters or grapheme clusters can still look the same or very similar. When the graphemes for two characters look identical (or very similar), they are called homographs. When two logical characters look similar, or can look similar in certain presentations (a category that includes homographs), they are said to be confusable.

One example is the letters U+03A1 (Ρ), U+0420 (Р), and U+0050 (P). These letters look identical in most fonts (that is, they are homographs), but they are encoded separately as part of the alphabets used in the Greek, Cyrillic, and Latin scripts respectively. Unicode Normalization will not fold these characters together.

Identical or identical-seeming appearances can occur even within a single script.

Characters that are identical or confusable in appearance can present spoofing and other security risks. This can be true within a single script or for similar characters in separate scripts. For further discussion and examples of homoglyphs and confusability, one useful reference is [[UTS39]].

In addition to identical or similar-appearing characters, the opposite problem also exists: Unicode Normalization, even in the NFKC and NFKD compatibility forms, does not bring together characters that have the same intrinsic meaning or function but which vary in appearance or usage. For example, U+002E (.) and U+3002 (。) both function as sentence-ending punctuation, but the distinction is not removed by normalization because the characters have distinct identities.

Character Escapes

Most document formats or protocols provide an escaping mechanism to permit the inclusion of characters that are otherwise difficult to input, process, or encode. These escaping mechanisms provide an additional equivalent means of representing characters inside a given resource. They also allow for the encoding of Unicode characters not represented in the character encoding scheme used by the document.

See also, Section 4.6 of [[!CHARMOD]].

For example, U+20AC EURO SIGN can also be encoded in HTML as the hexadecimal entity &#x20ac; or as the decimal entity &#8364;. In a JavaScript or JSON file, it can appear as \u20ac, while in a CSS stylesheet it can appear as \20ac. All of these representations encode the same literal character value: €.
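As an illustration (using Python only as a neutral harness; the escape syntaxes themselves belong to HTML, JavaScript/JSON, and CSS), all of these representations decode to the identical code point:

```python
import html
import json

# Several escape mechanisms, all denoting U+20AC EURO SIGN.
literal = "\u20ac"                           # JavaScript/JSON-style escape
from_hex_entity = html.unescape("&#x20ac;")  # HTML hexadecimal entity
from_dec_entity = html.unescape("&#8364;")   # HTML decimal entity
from_json = json.loads('"\\u20ac"')          # JSON string escape

assert literal == from_hex_entity == from_dec_entity == from_json == "\u20ac"
```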

Character escapes are normally interpreted before a document is processed and strings within the format or protocol are matched. Returning to an example we used above:

You would expect that text to display like the following: Hello world!

In order for this to work, the user-agent (browser) had to match two strings representing the class name héllo, even though the CSS and HTML each used a different escaping mechanism. The above fragment demonstrates one way that text can vary and still be considered "the same" according to a specification: the class name h\e9llo matched the class name in the HTML mark-up h&#xe9;llo (and would also match the literal value héllo using the code point U+00E9).

Invisible Unicode Characters

Unicode provides a number of special-purpose characters that help document authors control the appearance or performance of text. Because many of these characters are invisible or do not have keyboard equivalents, users are not always aware of their presence or absence. As a result, these characters can interfere with string matching when they are part of the encoded character sequence but the expected matching text does not also include them. Some examples of these characters include:

The Unicode control characters U+200D Zero Width Joiner (also known as ZWJ) and U+200C Zero Width Non-Joiner (also known as ZWNJ). While these characters can be used to control ligature formation—either preventing the formation of undesirable ligatures or encouraging the formation of desirable ones—their primary use is to control joining behavior and shape selection in complex scripts such as Arabic or various Indic scripts. For example, ZWJ and ZWNJ are used in some Indic scripts to allow authors to control the shape that certain conjuncts take. See the discussion in Chapter 12 of [[!Unicode]].

The ZWJ character is also used in forming certain emoji sequences, which is discussed in more detail below.

Variation selectors (U+FE00 through U+FE0F) are characters used to select an alternate appearance or glyph (see Character Model: Fundamentals [[CHARMOD]]). For example, they are used to select between black-and-white and color emoji. These are also used in predefined ideographic variation sequences (IVS). Many examples are given in the "Standardized Variants" portion of the Unicode Character Database (UCD).

A few scripts also provide a way to encode visual variant selection: a prominent example is the Mongolian script's free variation selectors (U+180B through U+180D).

The character U+034F Combining Grapheme Joiner, whose name is misleading (it does not join graphemes or affect line breaking), is used to separate characters that might otherwise be considered a single grapheme for the purposes of sorting, or to provide a means of maintaining certain textual distinctions when applying Unicode normalization to text.

Whitespace variations can also affect the interpretation and matching of text: for example, the various non-breaking space characters, such as U+00A0 NO-BREAK SPACE (NBSP) and U+202F NARROW NO-BREAK SPACE (NNBSP).

U+200B Zero Width Space is a character used to indicate word boundaries in text where spaces do not otherwise appear. For example, it might be used in a Thai language document to assist with word-breaking.

The U+00AD Soft Hyphen can be used in text to indicate a potential or preferred hyphenation position. It only becomes visible when the text is reflowed to wrap at that position.

The U+2060 WORD JOINER (sometimes called WJ) is a zero-width non-breaking space character. Its purpose is to replace the functionality of the character U+FEFF ZERO WIDTH NO-BREAK SPACE, because that character also serves as the "Byte Order Mark" (used as a Unicode signature in plain text files). Unlike U+200B ZERO WIDTH SPACE, the Word Joiner indicates that no break is permitted at its position: it joins, rather than separates, the surrounding text.

Finally, some scripts, such as Arabic and Hebrew, are written predominantly from right to left. Text written in these scripts can also include character sequences, such as numbers or quotations in another script, that run left to right. This intermixing of text direction is called bidirectional text, or bidi for short. The Unicode Bidirectional Algorithm [[UAX9]] describes how such mixed-direction text is processed for display. For most text, the directional handling can be derived from the text itself. However, there are many cases in which the algorithm needs additional information in order to present text correctly. For more examples, see [[html-bidi]].

One of the ways that Unicode addresses the ambiguity of text direction is a set of invisible control characters that mark the start and end of directional runs. While bidirectional controls can have an effect on the appearance of the text (since they guide the Unicode Bidirectional Algorithm in its presentation of the text), they might have no effect if the text would naturally have fallen into the same directional runs without the controls. Because these controls are, like the characters mentioned above, invisible, they can have an unintentional effect on matching.

In almost all of these cases, users might not be aware of, or cannot be sure whether, a given document or text string includes or omits one of these characters. Because text matching depends on matching the underlying code points, variation in the encoding of the text due to these invisible characters can cause matches that ought to succeed to fail mysteriously (from the point of view of the user).
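A short sketch (in Python, which compares strings code point by code point) shows how a single invisible character defeats a match and survives normalization:

```python
import unicodedata

# Two strings that render identically in many contexts but differ by an
# invisible U+200B ZERO WIDTH SPACE.
plain = "office"
with_zwsp = "of\u200bfice"

assert plain != with_zwsp
# Unicode normalization does not remove the invisible character:
assert unicodedata.normalize("NFC", with_zwsp) != plain
assert unicodedata.normalize("NFKC", with_zwsp) != plain
```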

Emoji Sequences

A newer feature of Unicode is the emoji characters. In [[UTR51]], Unicode describes these as:

Emoji are pictographs (pictorial symbols) that are typically presented in a colorful cartoon form and used inline in text. They represent things such as faces, weather, vehicles and buildings, food and drink, animals and plants, or icons that represent emotions, feelings, or activities.

Emoji can be used with a variety of emoji modifiers, including U+200D Zero Width Joiner (ZWJ), to form more complex emoji.

For example, the family emoji (👪 U+1F46A) can also be formed by using ZWJ between emoji characters in the sequence U+1F468 U+200D U+1F469 U+200D U+1F466. Altering or adding other emoji characters can alter the composition of the family. For example, the sequence 👨‍👩‍👧‍👧 U+1F468 U+200D U+1F469 U+200D U+1F467 U+200D U+1F467 results in a composed emoji character for "family: man, woman, girl, girl" on systems that support this kind of composition. Many common emoji can only be formed using ZWJ sequences. For more information, see [[UTR51]].

Emoji characters can be followed by emoji modifier characters. These modifiers allow for the selection of skin tones for emoji that represent people. These characters are normally invisible modifiers that follow the base emoji that they modify.

An emoji character can also be followed by a variation selector to indicate text presentation (black and white, indicated by U+FE0E VARIATION SELECTOR-15) or emoji presentation (color, indicated by U+FE0F VARIATION SELECTOR-16) of the base emoji.

Each of these mechanisms can be used together, so quite complex sequences of characters can be used to form a single emoji grapheme or image. Even very similar emoji sequences might not use the same exact encoded sequence. Many of the modifiers and combinations mentioned above are generated by the end-user's keyboard (where they are presented as a single emoji "character"), so users may not be aware of the underlying encoding complexity. Emoji sequences are evolving rapidly, so there could be additional developments to either help or hinder matching of emoji in the near future. Currently Unicode normalization does not reorder these sequences or insert or remove any of the modifiers. Users and implementers are therefore cautioned that users who employ emoji characters in namespaces and other matching contexts might encounter unexpected character mismatches.
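For example (a Python sketch; the code point sequence is the "family: man, woman, girl, girl" sequence from the text above), normalization leaves a ZWJ emoji sequence untouched, and the sequence never matches a visually similar single code point:

```python
import unicodedata

# "Family: man, woman, girl, girl" built as a ZWJ sequence of four emoji.
family = "\U0001F468\u200D\U0001F469\u200D\U0001F467\u200D\U0001F467"

assert len(family) == 7  # four emoji code points plus three ZWJs
# Normalization neither reorders the sequence nor removes the ZWJs:
assert unicodedata.normalize("NFC", family) == family
# The single code point U+1F46A FAMILY is a different string entirely:
assert family != "\U0001F46A"
```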

Legacy Character Encodings

Resources can use different character encoding schemes, including legacy character encodings, to serialize document formats on the Web. Each character encoding scheme uses different byte values and sequences to represent a given subset of the Universal Character Set.

Choosing a Unicode character encoding, such as UTF-8, for all documents, formats, and protocols is strongly encouraged, since no additional utility is gained from using a legacy character encoding and the considerations in the rest of this section are completely avoided.

For example, € (U+20AC EURO SIGN) is encoded as the byte sequence 0xE2.82.AC in the UTF-8 character encoding. This same character is encoded as the byte sequence 0x80 in the legacy character encoding windows-1252. (Other legacy character encodings may not provide any byte sequence to encode the character.)
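The byte-level difference can be observed with any transcoding library; a minimal Python sketch:

```python
# One character, different byte sequences depending on the character encoding.
euro = "\u20ac"  # U+20AC EURO SIGN

assert euro.encode("utf-8") == b"\xe2\x82\xac"
assert euro.encode("windows-1252") == b"\x80"

# A legacy encoding that lacks the character cannot encode it at all:
try:
    euro.encode("iso-8859-1")
    raise AssertionError("expected a UnicodeEncodeError")
except UnicodeEncodeError:
    pass
```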

Specifications mainly address these resulting variations by considering each document to be a sequence of Unicode characters after converting from the document's character encoding (be it a legacy character encoding or a Unicode encoding such as UTF-8) and then unescaping any character escapes before proceeding to process the document.

Even within a single legacy character encoding there can be variations in implementation. One famous example is the legacy Japanese encoding Shift_JIS. Different transcoder implementations faced choices about how to map specific byte sequences to Unicode. The byte sequence 0x81.60 (0x2141 in the JIS X 0208 character set) was mapped by some implementations to U+301C WAVE DASH, while others chose U+FF5E FULLWIDTH TILDE. This means that two reasonable, self-consistent transcoders could produce different Unicode character sequences from the same input. The Encoding [[Encoding]] specification exists, in part, to ensure that Web implementations use interoperable and identical mappings. However, there is no guarantee that transcoders consistent with the Encoding specification will be applied to documents found on the Web or used to process data appearing in a particular document format or protocol.
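This divergence is directly observable: Python, for example, happens to ship two Shift_JIS-family transcoders that made opposite choices (an illustration of the transcoder variation described above, not a statement about the Encoding specification):

```python
# The byte sequence 0x81 0x60 (JIS X 0208 0x2141) decodes differently
# depending on which transcoder is used:
assert b"\x81\x60".decode("shift_jis") == "\u301c"  # U+301C WAVE DASH
assert b"\x81\x60".decode("cp932") == "\uff5e"      # U+FF5E FULLWIDTH TILDE
```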

One additional consideration in converting to Unicode is the existence of legacy character encodings of bidirectional scripts (such as Hebrew and Arabic) that use a visual storage order. That is, unlike Unicode and other modern encodings, the characters are stored in memory in the order that they would be printed on the screen from left to right (as with a line printer). When converting these encodings to Unicode, or when comparing text in these encodings, care must be taken to place both the source and target text into logical order. For more information, see Section 3.3.1 of [[!CHARMOD]].

Other Types of Equivalence

The preceding types of character equivalence are all based on character properties assigned by Unicode or on the mapping of legacy character encodings to the Unicode character set. There also exist certain types of "interesting equivalence" that can be useful, particularly in searching text, but that fall outside the equivalences defined by Unicode. For example, Japanese uses two syllabic scripts, hiragana and katakana. A user searching a document might type text in one script but wish to find equivalent text in both scripts. These additional "text normalizations" are sometimes application-, natural-language-, or domain-specific, and specifications and implementations should not overlook them as an additional consideration.

Another similar example is called digit shaping. Some scripts, such as Arabic, have their own digit characters for the numbers 0 through 9. In some Web applications, the familiar ASCII digits are replaced for display purposes with the local digit shapes. In other cases, the text actually contains the Unicode characters for the local digits. Users attempting to search a document might expect that typing one form of digit will find the equivalent digits.
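Because these equivalences lie outside Unicode normalization, an application has to implement them itself; for example (a Python sketch using the Unicode numeric property):

```python
import unicodedata

ascii_five = "5"
arabic_five = "\u0665"  # U+0665 ARABIC-INDIC DIGIT FIVE

# No Unicode normalization form folds local digits to ASCII digits:
assert unicodedata.normalize("NFKC", arabic_five) == arabic_five
# An application-level "digit fold" could instead compare numeric values:
assert unicodedata.digit(arabic_five) == unicodedata.digit(ascii_five) == 5
```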

String Matching of Syntactic Content in Document Formats and Protocols

In the Web environment, where strings can be represented in different character encodings, using different character sequences, and with variations such as case, it is important to establish a consistent process for evaluating string identity.

This chapter defines the implementation and requirements for string matching in syntactic content.

The Matching Algorithm

This section defines the algorithm for matching strings. String identity matching MUST be performed as if the following steps were followed:

  1. Conversion of the strings to be compared to a common Unicode encoding form [[Encoding]].
  2. Expansion of all character escapes and includes.

    The expansion of character escapes and includes is dependent on context, that is, on which syntactic content or programming language is considered to apply when the string matching operation is performed. Consider a search for the string suçon in an XML document containing su&#xE7;on but not suçon. If the search is performed in a plain text editor, the context is plain text (no syntactic content or programming language applies), the &#xE7; character escape is not recognized, hence not expanded and the search fails. If the search is performed in an XML browser, the context is XML, the character escape (defined by XML) is expanded and the search succeeds.

    An intermediate case would be an XML editor that purposefully provides a view of an XML document with entity references left unexpanded. In that case, a search over that pseudo-XML view will deliberately not expand entities: in that particular context, entity references are not considered includes and need not be expanded

  3. Perform one of the following case foldings, as appropriate:
    1. Case sensitive: Go to step 4.
    2. ASCII case folding: map all code points in the range 0x41 to 0x5A (A to Z) to the corresponding code points in the range 0x61 to 0x7A (a to z).
    3. Unicode case folding: map all code points to their Unicode C+F case fold equivalents. Note that this can change the length of the string.
  4. (Open issue) What to do about non-breaking space and other space characters? Is this the full list? What about the Mongolian characters?

  5. Remove all of the following invisible Unicode characters:
    • ZWJ, ZWNJ
    • Variation Selectors (FE00..FE0F)
    • Bidi controls
  6. Test the resulting sequences of code points bit-by-bit for identity.
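The steps above can be sketched as follows (a non-normative illustration in Python: it assumes HTML-style escapes for step 2, uses str.casefold for the Unicode C+F folding of step 3, and omits the unresolved whitespace handling of step 4):

```python
import html
import re

# Step 5's invisible characters: ZWJ/ZWNJ, variation selectors, and the
# bidi controls (LRM/RLM, embeddings/overrides, and isolates).
INVISIBLES = re.compile(
    "[\u200c\u200d\ufe00-\ufe0f\u200e\u200f\u202a-\u202e\u2066-\u2069]"
)

def match(a: str, b: str, case_sensitive: bool = True) -> bool:
    # Step 1 is assumed done: Python str values are already code point sequences.
    a, b = html.unescape(a), html.unescape(b)            # step 2: expand escapes
    if not case_sensitive:
        a, b = a.casefold(), b.casefold()                # step 3: C+F case fold
    a, b = INVISIBLES.sub("", a), INVISIBLES.sub("", b)  # step 5
    return a == b                                        # step 6: identity

assert match("su&#xE7;on", "su\u00e7on")
assert match("H\u00c9LLO", "h\u00e9llo", case_sensitive=False)
assert match("fa\u200dmily", "family")
```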

Converting to a Common Unicode Form

A normalizing transcoder is a transcoder that performs a conversion from a legacy character encoding to Unicode and ensures that the result is in Unicode Normalization Form C. For most legacy character encodings, it is possible to construct a normalizing transcoder (by using any transcoder followed by a normalizer); it is not possible to do so if the legacy character encoding's repertoire contains characters not represented in Unicode.

Previous versions of this document recommended the use of a normalizing transcoder when mapping from a legacy character encoding to Unicode. Normalizing transcoders are expected to produce only character sequences in Unicode Normalization Form C (NFC), although the resulting character sequence might still be partially de-normalized (for example, if it begins with a combining mark).
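Such a transcoder can be sketched in a few lines (a Python illustration; the codec name and input are illustrative):

```python
import unicodedata

def normalizing_transcode(data: bytes, encoding: str) -> str:
    # Any transcoder followed by a normalizer is a normalizing transcoder.
    return unicodedata.normalize("NFC", data.decode(encoding))

# windows-1252 0xE9 decodes to U+00E9; the NFC step guarantees a composed
# result even for transcoders that emit decomposed sequences.
assert normalizing_transcode(b"caf\xe9", "windows-1252") == "caf\u00e9"
```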

It turns out that, while most transcoders used on the Web produce Normalization Form C as their output, several do not. The difference is important if the transcoder is to be round-trip compatible with the source legacy character encoding or consistent with the transcoders used by browsers and other user-agents on the Web. This includes several of the transcoders in [[Encoding]].

[C][I] For content authors, it is RECOMMENDED that content converted from a legacy character encoding be normalized to Unicode Normalization Form C unless the mapping of specific characters interferes with the meaning.

[I] Authoring tools SHOULD provide a means of normalizing resources and warn the user when a given resource is not in Unicode Normalization Form C.

Choice of Normalization Form

Given that content authors and applications can choose among many character sequences when inputting or exchanging text, and that there are several options for the normalization form to be used when providing text in a normalized form, what form is most appropriate for content on the Web?

For use on the Web, it is important not to lose compatibility distinctions, which are often important to the content (see Chapter 5 Characters with Compatibility Mappings in Unicode in XML and other Markup Languages [[UNICODE-XML]] for a discussion). The NFKD and NFKC normalization forms are therefore excluded.

Among the remaining two forms, NFC has the advantage that almost all legacy data (if transcoded trivially, one-to-one, to a Unicode encoding), as well as data created by current software, is already in this form; NFC also has a slight compactness advantage and is a better match to user expectations with respect to the character vs. grapheme issue. This document therefore recommends, when possible, that all content be stored and exchanged in Unicode Normalization Form C (NFC).
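The difference between the two canonical forms, and NFC's compactness advantage, can be seen in a short example:

```python
import unicodedata

# NFC composes to precomposed characters where they exist; NFD decomposes.
decomposed = "A\u030A"  # 'A' followed by U+030A COMBINING RING ABOVE

assert unicodedata.normalize("NFC", decomposed) == "\u00c5"  # one code point
assert unicodedata.normalize("NFD", "\u00c5") == decomposed  # back to two
assert len(unicodedata.normalize("NFC", decomposed)) < len(decomposed)
```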

Requirements for Resources

These requirements pertain to the authoring and creation of documents and are intended as guidelines for resource authors.

[C] Resources SHOULD be produced, stored, and exchanged in Unicode Normalization Form C (NFC).

In order to be processed correctly, a resource must use a consistent sequence of code points to represent text. While content can be in any normalization form or can use a de-normalized (but valid) Unicode character sequence, inconsistency of representation will cause implementations to treat the different sequences as "different". The best way to ensure consistent selection, access, extraction, processing, or display is to always use NFC.

[I] Implementations MUST NOT normalize any resource during processing, storage, or exchange except with explicit permission from the user.

The [[!Encoding]] specification includes a number of transcoders that do not produce Unicode text in a normalized form when converting to Unicode from a legacy character encoding. This is necessary to preserve round-trip behavior and other character distinctions. Indeed, many compatibility characters in Unicode exist solely for round-trip conversion from legacy encodings. Earlier versions of this specification recommended or required that implementations use a normalizing transcoder that produced Unicode Normalization Form C (NFC), but, given that this is at odds with how transcoders are actually implemented, this version no longer includes this requirement. Bear in mind that most transcoders produce NFC output and that even those transcoders that do not produce NFC for all characters mainly produce NFC for the preponderance of characters. In particular, there are no commonly-used transcoders that produce decomposed forms where precomposed forms exist or which produce a different combining character sequence from the normalized sequence.

[C] Authors SHOULD NOT include combining marks without a preceding base character in a resource.

There can be exceptions to this. For example, when making a list of characters (such as a list of [[!Unicode]] characters), an author might want to use combining marks without a corresponding base character. However, use of a combining mark without a base character can cause unintentional display problems or, with naive implementations that combine the combining mark with adjacent syntactic content or other natural language content, processing problems. For example, if you were to use a combining mark, such as U+0301 COMBINING ACUTE ACCENT, at the start of a "class" attribute value in HTML, the class name might not display properly in your editor.

[S] Specifications of text-based formats and protocols MAY specify that all or part of the textual content of that format or protocol is normalized using Unicode Normalization Form C (NFC).

Specifications are generally discouraged from requiring formats or protocols to store or exchange data in a normalized form unless there are specific, clear reasons why the additional requirement is necessary. As many document formats on the Web do not require normalization, content authors might occasionally rely on denormalized character sequences and a normalization step could negatively affect such content.

Requiring NFC requires additional care on the part of the specification developer, as content on the Web generally is not in a known normalization state. Boundary and error conditions for denormalized content need to be carefully considered and well specified in these cases.

Non-Normalizing Specification Requirements

The following requirements pertain to any specification that specifies explicitly that normalization is not to be applied automatically to content (which SHOULD include all new specifications):

[S] Specifications that do not normalize MUST document or provide a health-warning if canonically equivalent but disjoint Unicode character sequences represent a security issue.

[S][I] Specifications and implementations MUST NOT assume that content is in any particular normalization form.

The normalization form or lack of normalization for any given content has to be considered intentional in these cases.

[I] Implementations MUST NOT alter the normalization form of syntactic or natural language content being exchanged, read, parsed, or processed except when required to do so as a side-effect of text transformation such as transcoding the content to a Unicode character encoding, case mapping/folding, or other user-initiated change, as consumers or the content itself might depend on the de-normalized representation.

[S] Specifications MUST specify that string matching takes the form of "code point-by-code point" comparison of the Unicode character sequence, or, if a specific Unicode character encoding is specified, code unit-by-code unit comparison of the sequences.
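Note that the two kinds of comparison count different units; a Python sketch of the distinction:

```python
# A supplementary-plane character is one code point but, in UTF-16, two
# code units (and, in UTF-8, four).
s = "\U0001F600"

assert len(s) == 1                           # code points
assert len(s.encode("utf-16-le")) // 2 == 2  # UTF-16 code units
assert len(s.encode("utf-8")) == 4           # UTF-8 code units (bytes)
```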

Regular expression syntaxes are sometimes useful in defining a format or protocol, since they allow users to specify values that are only partially known or which can vary. However, defining or using regular expression syntaxes or wildcards brings additional considerations when viewed across the range of Unicode encoding variations, particularly where character or grapheme boundaries are concerned.

[S][I] Specifications that define a regular expression syntax MUST provide at least Basic Unicode Level 1 support per [[!UTS18]] and SHOULD provide Extended or Tailored (Levels 2 and 3) support.

Unicode Normalizing Specification Requirements

This section contains requirements for specifications of text-based formats and protocols that define Unicode Normalization as a requirement. New specifications SHOULD NOT require normalization unless special circumstances apply.

[S] Specifications of text-based formats and protocols that, as part of their syntax definition, require that the text be in normalized form MUST define string matching in terms of normalized string comparison and MUST define the normalized form to be NFC.

[S] [I] A normalizing text-processing component which receives suspect text MUST NOT perform any normalization-sensitive operations unless it has first either confirmed through inspection that the text is in normalized form or it has re-normalized the text itself. Private agreements MAY, however, be created within private systems which are not subject to these rules, but any externally observable results MUST be the same as if the rules had been obeyed.

[I] A normalizing text-processing component which modifies text and performs normalization-sensitive operations MUST behave as if normalization took place after each modification, so that any subsequent normalization-sensitive operations always behave as if they were dealing with normalized text.

[S] Specifications of text-based languages and protocols SHOULD define precisely the construct boundaries necessary to obtain a complete definition of full-normalization. These definitions SHOULD include at least the boundaries between syntactic content and character data as well as entity boundaries (if the language has any include mechanism), SHOULD include any other boundary that might create denormalization when instances of the language are processed, but SHOULD NOT include character escapes designed to express arbitrary characters.

[I] Authoring tool implementations for a formal language that does not mandate full-normalization SHOULD either prevent users from creating content with composing characters at the beginning of constructs that may be significant, such as at the beginning of an entity that will be included, immediately after a construct that causes inclusion or immediately after syntactic content, or SHOULD warn users when they do so.

[S] Where operations can produce denormalized output from normalized text input, specifications of API components (functions/methods) that implement these operations MUST define whether normalization is the responsibility of the caller or the callee. Specifications MAY state that performing normalization is optional for some API components; in this case the default SHOULD be that normalization is performed, and an explicit option SHOULD be used to switch normalization off. Specifications SHOULD NOT make the implementation of normalization optional.

[S] Specifications that define a mechanism (for example an API or a defining language) for producing textual data objects SHOULD require that the final output of this mechanism be normalized.

Expanding Character Escapes and Includes

Most document formats and protocols provide a means for encoding characters or including external data, including text, into a resource. This is discussed in detail in Section 4.6 of [[!CHARMOD]] as well as above.

When performing matching, it is important to know when to interpret character escapes so that a match succeeds (or fails) appropriately. Normally, escapes, references, and includes are processed or expanded before matching (or match-sensitive processing) is performed, since these syntaxes exist to allow difficult-to-encode sequences to be placed into a document conveniently while still allowing the characters to behave as if they were directly encoded as a code point sequence in the document in question.

One area where this can be complicated is deciding how syntactic content and natural language content interact. For example, consider the following snippet of HTML:

Although technically the combining mark U+0300 combines with the preceding quote mark, HTML does not consider the character (whether or not it is encoded as an entity) to form part of the HTML syntax.

When performing a matching operation on a resource, the general rule is to expand escapes on the same "level" as the user is interacting with. For example, when considering the above example, a text editor being used to view or create the HTML source would show the escape sequence &#x300; as a string of characters starting with an ampersand. A DOM browser, by contrast, would show the character U+0300 as the value of the attribute id.

When processing the syntax of a document format, escapes should be converted to the character sequence they represent before the processing of the syntax, unless explicitly forbidden by the format's processing rules. This allows resources to include characters of all types into the resource's syntactic structures.

In some cases, pre-processing escapes creates problems. For example, expanding the sequence &lt; before parsing an HTML document would produce document errors.

Handling Case Folding

As described above, one important consideration in string identity matching is whether the comparison is case sensitive or case insensitive.

These requirements pertain to specifications for document formats or programming/scripting languages and their implementations.

[S][I] Specifications and implementations that define string matching as part of the definition of a format, protocol, or formal language (which might include operations such as parsing, matching, tokenizing, etc.) MUST define the criteria and matching forms used. These MUST be one of:

  • Case-sensitive
  • Unicode case-insensitive using Unicode case-folding C+F
  • ASCII case-insensitive

Case-sensitive matching

[S] Case-sensitive matching is RECOMMENDED as the default for new protocols and formats. Specifications SHOULD NOT specify case-insensitive comparison of strings.

Case-sensitive matching is the easiest to implement and introduces the least potential for confusion, since it generally consists of a comparison of the underlying Unicode code point sequence. Because it is not affected by considerations such as language-specific case mappings, it produces the least surprise for document authors that have included words such as the Turkish example above in their syntactic content.

However, cases exist in which case-insensitivity is desirable. Where case-insensitive matching is desired, there are several implementation choices that a formal language needs to consider.

ASCII case-insensitive matching

If the vocabulary of strings to be compared is limited to the Basic Latin (ASCII) subset of Unicode, and case-sensitive matching is not an option, ASCII case-insensitive matching MAY be used.

This requirement applies to formal languages whose keywords are all ASCII and which do not allow user-defined names or identifiers. An example of this is HTML, which defines the use of ASCII case-insensitive comparison for element and attribute names defined by the HTML specification.

ASCII case-insensitive matching MUST only be applied to vocabularies that are restricted to ASCII and do not permit user-defined values that use a broader range of Unicode. Unicode case-insensitive matching MUST be used for all other vocabularies, even if the vocabulary does not allow the full range of Unicode.

A vocabulary is considered to be "ASCII-only" if and only if all tokens and identifiers are defined by the specification directly and these identifiers or tokens use only the Basic Latin subset of Unicode. If user-defined identifiers are permitted, the full range of Unicode characters (limited, as appropriate, for security or interchange concerns, see [[UTR36]]) should be allowed and Unicode case insensitivity used for identity matching.

Note that an ASCII-only vocabulary can exist inside a document format or protocol that allows a larger range of Unicode in identifiers or values.

For example [[CSS-SYNTAX-3]] defines the format of CSS style sheets in a way that allows the full range of Unicode to be used for identifiers and values. However, CSS specifications always define CSS keywords using a subset of the ASCII range. The vocabulary of CSS is thus ASCII-only, even though many style sheets contain identifiers or data values that are not ASCII.
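
As a sketch of this distinction in Python (the `ascii_ci_equal` helper is hypothetical): an ASCII case-insensitive comparison matches ASCII-only keywords regardless of case while leaving non-ASCII user identifiers untouched.

```python
def ascii_ci_equal(a, b):
    # ASCII case-insensitive comparison: fold only A-Z to a-z
    fold = lambda s: ''.join(
        chr(ord(c) + 32) if 'A' <= c <= 'Z' else c for c in s)
    return fold(a) == fold(b)

# An ASCII-only keyword, as in CSS, matches case-insensitively:
assert ascii_ci_equal("INHERIT", "inherit")
# A non-ASCII user-defined identifier is not conflated with a
# differently-cased variant, because Ê does not fold under ASCII rules:
assert not ascii_ci_equal("tête", "TÊTE")
```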

Unicode case-insensitive matching

[S][I] The Unicode C+F case-fold form is RECOMMENDED for case-insensitive matching of vocabularies when case-sensitive matching is not an option. The Unicode C+S (simple) case-fold form MUST NOT be used for string identity matching on the Web.

Unicode case-insensitive matching can take several forms. Unicode defines the "common" (C) case foldings for characters that always have a 1:1 mapping of the character to its case-folded form; this covers the majority of characters that have a case folding. A few characters in Unicode have a 1:many case folding, called the "full" (F) case fold mapping. For compatibility with certain types of implementation, Unicode also defines a "simple" (S) case fold that is always 1:1. The "simple" case-fold mapping is not recommended because it removes information that can be important to forming an identity match.
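
Python's `str.casefold` implements the full (C+F) fold, which makes the difference from a simple (S) fold easy to see:

```python
# The full (F) fold maps U+00DF LATIN SMALL LETTER SHARP S 1:2:
assert "ß".casefold() == "ss"
assert "ẞ".casefold() == "ss"   # the capital sharp s folds the same way

# So under C+F, "Straße" and "STRASSE" are an identity match:
assert "Straße".casefold() == "STRASSE".casefold()

# A simple (S) fold would leave "ß" unchanged, so this match would
# fail; that is the information loss described above.
```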

Language-sensitive case-insensitive matching in document formats and protocols is NOT RECOMMENDED.

This is because language information can be hard to obtain, verify, or manage, and the resulting operations can produce results that frustrate users.
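
A Python sketch of why language-sensitive matching surprises: the `turkish_lower` helper below is a hypothetical, simplified tailoring for the Turkish dotted/dotless i, not a real locale API.

```python
def turkish_lower(s):
    # Hypothetical, simplified Turkish tailoring: I -> dotless ı,
    # İ -> dotted i, then the generic lowercase for everything else
    return s.replace('I', 'ı').replace('İ', 'i').lower()

# Generic (untailored) Unicode case folding matches 'I' with 'i':
assert 'I'.casefold() == 'i'

# Under the Turkish tailoring, 'I' lowercases to 'ı' instead, so a
# keyword spelled "I" would no longer match "i":
assert turkish_lower('I') == 'ı'
assert turkish_lower('I') != 'i'
```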

[C] Identifiers SHOULD use consistent case (upper, lower, mixed case) to facilitate matching, even if case-insensitive matching is supported by the format or implementation.

Language-specific tailoring

The appropriateness of locale- or language-specific tailoring is generally linked to natural language processing operations. Because such tailoring produces potentially different results from the generic case folding rules, it should be avoided in formal languages, where predictability is at a premium.

[S][I] Locale- or language-specific tailoring is NOT RECOMMENDED for specifications and implementations that define string matching as part of the definition of a format, protocol, or formal language.

Handling Unicode Controls and Invisible Markers

Applications that do string matching SHOULD ignore Unicode formatting controls, such as variation selectors, grapheme or word joiners, and other non-semantic controls.
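
One possible sketch in Python (the `strip_invisible` helper and its filter set are illustrative assumptions, not normative): remove general-category Cf characters and the variation selector ranges before comparing.

```python
import unicodedata

def strip_invisible(s):
    # Drop format controls (general category Cf, e.g. word joiner,
    # zero-width joiner) and the variation selector ranges
    return ''.join(
        c for c in s
        if unicodedata.category(c) != 'Cf'
        and not 0xFE00 <= ord(c) <= 0xFE0F
        and not 0xE0100 <= ord(c) <= 0xE01EF)

# U+200D ZERO WIDTH JOINER and U+2060 WORD JOINER are ignored:
assert strip_invisible("a\u200Db") == "ab"
assert strip_invisible("a\u2060b") == "ab"
# U+FE0F VARIATION SELECTOR-16 is ignored:
assert strip_invisible("✓\uFE0F") == "✓"
```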

Changes Since the Last Published Version

The following changes have been made since the Working Draft of 2014-07-15:

See the github commit log for more details.


Acknowledgements

The W3C Internationalization Working Group and Interest Group, as well as others, provided many comments and suggestions. The Working Group would like to thank: Mati Allouche, Ebrahim Byagowi, John Cowan, Martin Dürst, Behdad Esfahbod, Asmus Freitag, John Klensin, Amir Sarabadani, and all of the CharMod contributors over the many years of this document's development.

The previous version of this document was edited by: